Updates from: 09/30/2022 01:15:26
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-domain-services Powershell Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/powershell-create-instance.md
$replicaSetParams = @{
Location = $AzureLocation
SubnetId = "/subscriptions/$AzureSubscriptionId/resourceGroups/$ResourceGroupName/providers/Microsoft.Network/virtualNetworks/$VnetName/subnets/DomainServices"
}
-$replicaSet = New-AzADDomainServiceReplicaSet @replicaSetParams
+$replicaSet = New-AzADDomainServiceReplicaSetObject @replicaSetParams
$domainServiceParams = @{ Name = $ManagedDomainName
active-directory Concept Registration Mfa Sspr Combined https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-registration-mfa-sspr-combined.md
To switch the directory in the Azure portal, click the user account name in the
![External users can switch directory.](media/concept-registration-mfa-sspr-combined/switch-directory.png)
+Or, you can specify a tenant by URL to access security information.
+
+`https://mysignins.microsoft.com/security-info?tenant=<Tenant Name>`
+
+`https://mysignins.microsoft.com/security-info/?tenantId=<Tenant ID>`
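For example, with a hypothetical tenant named `contoso.onmicrosoft.com` and a hypothetical tenant ID, the URLs would look like the following (substitute your own values):

`https://mysignins.microsoft.com/security-info?tenant=contoso.onmicrosoft.com`

`https://mysignins.microsoft.com/security-info/?tenantId=00000000-0000-0000-0000-000000000000`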
+ ## Next steps To get started, see the tutorials to [enable self-service password reset](tutorial-enable-sspr.md) and [enable Azure AD Multi-Factor Authentication](tutorial-enable-azure-mfa.md).
active-directory Concept Sspr Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-sspr-policy.md
The two-gate policy requires two pieces of authentication data, such as an email
* Power BI service administrator * Privileged Authentication administrator * Privileged role administrator
- * SharePoint administrator
* Security administrator * Service support administrator
+ * SharePoint administrator
* Skype for Business administrator * User administrator
active-directory Concept Sspr Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-sspr-writeback.md
Password writeback provides the following features:
To get started with SSPR writeback, complete either one or both of the following tutorials: -- [Tutorial: Enable self-service password reset (SSPR) writeback](tutorial-enable-cloud-sync-sspr-writeback.md)
+- [Tutorial: Enable self-service password reset (SSPR) writeback](tutorial-enable-sspr-writeback.md)
- [Tutorial: Enable Azure Active Directory Connect cloud sync self-service password reset writeback to an on-premises environment (Preview)](tutorial-enable-cloud-sync-sspr-writeback.md) ## Azure AD Connect and cloud sync side-by-side deployment
active-directory How To Certificate Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-certificate-based-authentication.md
For additional details see: [Understanding the certificate revocation process](.
[!INCLUDE [Set-AzureAD](../../../includes/active-directory-authentication-set-trusted-azuread.md)]
+## Step 2: Enable CBA on the tenant
-## Step 2: Configure authentication binding policy
+To enable certificate-based authentication in the Azure portal, complete the following steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) as an Authentication Policy Administrator.
+1. Select **Azure Active Directory**, then choose **Security** from the menu on the left-hand side.
+1. Under **Manage**, select **Authentication methods** > **Certificate-based Authentication**.
+1. Under **Basics**, select **Yes** to enable CBA.
+1. CBA can be enabled for a targeted set of users.
+ 1. Click **All users** to enable all users.
+ 1. Click **Select users** to enable selected users or groups.
+ 1. Click **+ Add users**, select specific users and groups.
+ 1. Click **Select** to add them.
+
+ :::image type="content" border="true" source="./media/how-to-certificate-based-authentication/enable.png" alt-text="Screenshot of how to enable CBA.":::
+
+Once certificate-based authentication is enabled on the tenant, all users in the tenant will see the option to sign in with a certificate. Only users who are enabled for certificate-based authentication will be able to authenticate using the X.509 certificate.
+
+>[!NOTE]
+>The network administrator should allow access to the certauth endpoint for the customer's cloud environment in addition to login.microsoftonline.com. Disable TLS inspection on the certauth endpoint to make sure the client certificate request succeeds as part of the TLS handshake.
++
+## Step 3: Configure authentication binding policy
The authentication binding policy helps determine the strength of authentication as either single-factor or multifactor. An admin can change the default value from single-factor to multifactor and configure custom policy rules by mapping to the issuer Subject or policy OID fields in the certificate.
To enable the certificate-based authentication and configure user bindings in th
1. Click **Ok** to save any custom rule.
-## Step 3: Configure username binding policy
+## Step 4: Configure username binding policy
The username binding policy helps determine the user in the tenant. By default, we map Principal Name in the certificate to onPremisesUserPrincipalName in the user object to determine the user.
The final configuration will look like this image:
:::image type="content" border="true" source="./media/how-to-certificate-based-authentication/final.png" alt-text="Screenshot of the final configuration.":::
-## Step 4: Enable CBA on the tenant
-
-To enable the certificate-based authentication in the Azure MyApps portal, complete the following steps:
-
-1. Sign in to the [MyApps portal](https://myapps.microsoft.com/) as an Authentication Policy Administrator.
-1. Select **Azure Active Directory**, then choose **Security** from the menu on the left-hand side.
-1. Under **Manage**, select **Authentication methods** > **Certificate-based Authentication**.
-1. Under **Basics**, select **Yes** to enable CBA.
-1. CBA can be enabled for a targeted set of users.
- 1. Click **All users** to enable all users.
- 1. Click **Select users** to enable selected users or groups.
- 1. Click **+ Add users**, select specific users and groups.
- 1. Click **Select** to add them.
-
- :::image type="content" border="true" source="./media/how-to-certificate-based-authentication/enable.png" alt-text="Screenshot of how to enable CBA.":::
-
-Once certificate-based authentication is enabled on the tenant, all users in the tenant will see the option to sign in with a certificate. Only users who are enabled for certificate-based authentication will be able to authenticate using the X.509 certificate.
-
->[!NOTE]
->The network administrator should allow access to certauth endpoint for the customer's cloud environment in addition to login.microsoftonline.com. Disable TLS inspection on the certauth endpoint to make sure the client certificate request succeeds as part of the TLS handshake.
- ## Step 5: Test your configuration This section covers how to test your certificate and custom authentication binding rules.
active-directory Howto Password Smart Lockout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-password-smart-lockout.md
Smart lockout tracks the last three bad password hashes to avoid incrementing th
> [!NOTE] > Hash tracking functionality isn't available for customers with pass-through authentication enabled because authentication happens on-premises, not in the cloud.
-Federated deployments that use AD FS 2016 and AF FS 2019 can enable similar benefits using [AD FS Extranet Lockout and Extranet Smart Lockout](/windows-server/identity/ad-fs/operations/configure-ad-fs-extranet-smart-lockout-protection).
+Federated deployments that use AD FS 2016 and AD FS 2019 can enable similar benefits using [AD FS Extranet Lockout and Extranet Smart Lockout](/windows-server/identity/ad-fs/operations/configure-ad-fs-extranet-smart-lockout-protection).
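As a rough sketch (not taken from the linked article), AD FS extranet smart lockout is typically configured with the AD FS PowerShell module; the threshold and observation window below are placeholder values, so confirm the parameter names and values against the documentation for your AD FS version:

```powershell
# Hedged sketch only - placeholder values; confirm against your AD FS version's documentation.
Import-Module ADFS

# Turn on extranet lockout with an example threshold and observation window.
Set-AdfsProperties -EnableExtranetLockout $true `
                   -ExtranetLockoutThreshold 10 `
                   -ExtranetObservationWindow (New-TimeSpan -Minutes 30)

# Start in log-only mode to observe sign-in impact, then switch to enforcement.
Set-AdfsProperties -ExtranetLockoutMode AdfsSmartLockoutLogOnly
Restart-Service adfssrv

# After validating behavior, enforce smart lockout.
Set-AdfsProperties -ExtranetLockoutMode AdfsSmartLockoutEnforce
Restart-Service adfssrv
```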
Smart lockout is always on, for all Azure AD customers, with these default settings that offer the right mix of security and usability. Customization of the smart lockout settings, with values specific to your organization, requires Azure AD Premium P1 or higher licenses for your users.
active-directory Tutorial Enable Sspr Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/tutorial-enable-sspr-writeback.md
To set up the appropriate permissions for password writeback to occur, complete
[ ![Set the appropriate permissions in Active Users and Computers for the account that is used by Azure AD Connect](media/tutorial-enable-sspr-writeback/set-ad-ds-permissions-cropped.png) ](media/tutorial-enable-sspr-writeback/set-ad-ds-permissions.png#lightbox)
+1. When ready, select **Apply / OK** to apply the changes.
+1. From the **Permissions** tab, select **Add**.
+1. For **Principal**, select the account that permissions should be applied to (the account used by Azure AD Connect).
+1. In the **Applies to** drop-down list, select **This object and all descendant objects**.
+1. Under **Permissions**, select the box for the following option:
+ * **Unexpire Password**
1. When ready, select **Apply / OK** to apply the changes and exit any open dialog boxes. When you update permissions, it might take up to an hour or more for these permissions to replicate to all the objects in your directory.
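The preceding steps use the Active Directory Users and Computers UI. As a hedged alternative sketch, similar rights are sometimes granted with `dsacls` from an elevated PowerShell session; the connector account and domain distinguished name below are hypothetical placeholders, so verify the exact syntax and scope in your environment before running anything:

```powershell
# Hedged sketch only - the connector account and domain DN are hypothetical placeholders.
$connectorAccount = "CONTOSO\MSOL_0a1b2c3d4e5f"   # hypothetical AD DS connector account used by Azure AD Connect
$domainDn         = "DC=contoso,DC=com"           # hypothetical domain distinguished name

# Grant the rights used by password writeback, inherited to all descendant user objects.
dsacls $domainDn /I:S /G "$($connectorAccount):CA;Reset Password;user"
dsacls $domainDn /I:S /G "$($connectorAccount):CA;Unexpire Password;user"
dsacls $domainDn /I:S /G "$($connectorAccount):WP;pwdLastSet;user"
dsacls $domainDn /I:S /G "$($connectorAccount):WP;lockoutTime;user"
```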
active-directory Tutorial Enable Sspr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/tutorial-enable-sspr.md
To finish this tutorial, you need the following resources and privileges:
* A working Azure AD tenant with at least an Azure AD free or trial license enabled. In the Free tier, SSPR only works for cloud users in Azure AD. Password change is supported in the Free tier, but password reset is not. * For later tutorials in this series, you'll need an Azure AD Premium P1 or trial license for on-premises password writeback. * If needed, [create an Azure account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An account with *Global Administrator* privileges.
+* An account with *Global Administrator* or *Authentication Policy Administrator* privileges.
* A non-administrator user with a password you know, like *testuser*. You'll test the end-user SSPR experience using this account in this tutorial. * If you need to create a user, see [Quickstart: Add new users to Azure Active Directory](../fundamentals/add-users-azure-active-directory.md). * A group that the non-administrator user is a member of, like *SSPR-Test-Group*. You'll enable SSPR for this group in this tutorial.
Azure AD lets you enable SSPR for *None*, *Selected*, or *All* users. This granu
In this tutorial, set up SSPR for a set of users in a test group. Use the *SSPR-Test-Group* and provide your own Azure AD group as needed:
-1. Sign in to the [Azure portal](https://portal.azure.com) using an account with *global administrator* permissions.
+1. Sign in to the [Azure portal](https://portal.azure.com) using an account with *global administrator* or *authentication policy administrator* permissions.
1. Search for and select **Azure Active Directory**, then select **Password reset** from the menu on the left side. 1. From the **Properties** page, under the option *Self service password reset enabled*, choose **Selected**. 1. If your group isn't visible, choose **No groups selected**, browse for and select your Azure AD group, like *SSPR-Test-Group*, and then choose *Select*.
active-directory Permissions Management Trial User Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/permissions-management-trial-user-guide.md
+
+ Title: Trial User Guide - Microsoft Entra Permissions Management
+description: How to get started with your Microsoft Entra Permissions Management free trial
+Last updated: 09/01/2022
+# Trial user guide: Microsoft Entra Permissions Management
+
+Welcome to the Microsoft Entra Permissions Management trial user guide!
+
+This guide helps you make the most of your free trial, including the Permissions Management Cloud Infrastructure Assessment, which helps you identify and remediate the most critical permission risks across your multicloud infrastructure. Using the suggested steps in this guide from the Microsoft Identity team, you'll learn how Permissions Management can help you protect all your users and data.
+
+## What is Permissions Management?
+
+Permissions Management is a cloud infrastructure entitlement management (CIEM) solution that provides comprehensive visibility into permissions assigned to all identities including both workload and user identities, actions, and resources across multicloud infrastructures in Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP). Permissions Management detects, automatically right-sizes, and continuously monitors unused and excessive permissions.
+
+Permissions Management helps your organization tackle cloud permissions by enabling the capabilities to continuously discover, remediate and monitor the activity of every unique user and workload identity operating in the cloud, alerting security and infrastructure teams to areas of unexpected or excessive risk.
+
+- Get granular cross-cloud visibility - Get a comprehensive view of every action performed by any identity on any resource.
+- Uncover permission risk - Assess permission risk by evaluating the gap between permissions granted and permissions used.
+- Enforce least privilege - Right-size permissions based on usage and activity and enforce permissions on-demand at cloud scale.
+- Monitor and detect anomalies - Detect anomalous permission usage and generate detailed forensic reports.
+
+![Diagram, schematic Description automatically generated](media/permissions-management-trial-user-guide/microsoft-entra-permissions-management-diagram.png)
++
+## Step 1: Set up Permissions Management
+
+Before you enable Permissions Management in your organization:
+- You must have an Azure AD tenant. If you don't already have one, [create a free account](https://azure.microsoft.com/free/).
+- You must be eligible for or have an active assignment to the global administrator role as a user in that tenant.
+
+If the above points are met, continue with the following steps:
+
+1. [Enable Permissions Management on your Azure AD tenant](../cloud-infrastructure-entitlement-management/onboard-enable-tenant.md#how-to-enable-permissions-management-on-your-azure-ad-tenant)
+2. Use the **Data Collectors** dashboard in Permissions Management to configure data collection settings for your authorization system. [Configure data collection settings](../cloud-infrastructure-entitlement-management/onboard-enable-tenant.md#configure-data-collection-settings).
+
+   For each cloud platform, you have three options for onboarding:
+
+   **Option 1 (Recommended): Automatically manage** - This option allows subscriptions to be automatically detected and monitored without additional configuration.
+
+   **Option 2: Enter authorization systems** - You can specify only certain subscriptions to manage and monitor with MEPM (up to 10 per collector).
+
+   **Option 3: Select authorization systems** - This option detects all subscriptions that are accessible by the Cloud Infrastructure Entitlement Management application.
+
+ For information on how to onboard an AWS account, Azure subscription, or GCP project into Permissions Management, select one of the following articles and follow the instructions:
+ - [Onboard an AWS account](../cloud-infrastructure-entitlement-management/onboard-aws.md)
+ - [Onboard a Microsoft Azure subscription](../cloud-infrastructure-entitlement-management/onboard-azure.md)
+ - [Onboard a GCP project](../cloud-infrastructure-entitlement-management/onboard-gcp.md)
+3. [Enable or disable the controller after onboarding is complete](../cloud-infrastructure-entitlement-management/onboard-enable-controller-after-onboarding.md)
+4. [Add an account/subscription/project after onboarding is complete](../cloud-infrastructure-entitlement-management/onboard-add-account-after-onboarding.md)
+
+ **Actions to try:**
+
+ - [View roles/policies and requests for permission](../cloud-infrastructure-entitlement-management/ui-remediation.md#view-and-create-rolespolicies)
+ - [View information about roles/ policies](../cloud-infrastructure-entitlement-management/ui-remediation.md#view-and-create-rolespolicies)
+ - [View information about active and completed tasks](../cloud-infrastructure-entitlement-management/ui-tasks.md)
+ - [Create a role/policy](../cloud-infrastructure-entitlement-management/how-to-create-role-policy.md)
+ - [Clone a role/policy](../cloud-infrastructure-entitlement-management/how-to-clone-role-policy.md)
+ - [Modify a role/policy](../cloud-infrastructure-entitlement-management/how-to-modify-role-policy.md)
+ - [Delete a role/policy](../cloud-infrastructure-entitlement-management/how-to-delete-role-policy.md)
+ - [Attach and detach policies for Amazon Web Services (AWS) identities](../cloud-infrastructure-entitlement-management/how-to-attach-detach-permissions.md)
+ - [Add and remove roles and tasks for Microsoft Azure and Google Cloud Platform (GCP) identities](../cloud-infrastructure-entitlement-management/how-to-add-remove-role-task.md)
+ - [Revoke access to high-risk and unused tasks or assign read-only status for Microsoft Azure and Google Cloud Platform (GCP) identities](../cloud-infrastructure-entitlement-management/how-to-revoke-task-readonly-status.md)
+ - [Create or approve a request for permissions](../cloud-infrastructure-entitlement-management/how-to-create-approve-privilege-request.md) Request permissions on-demand for one-time use or on a schedule. These permissions will automatically be revoked at the end of the requested period.
+
+## Step 2: Discover & assess
+
+Improve your security posture by getting comprehensive and granular visibility to enforce the principle of least privilege access across your entire multicloud environment. The Permissions Management dashboard gives you an overview of your permission profile and locates where the riskiest identities and resources are across your digital estate.
+
+The dashboard leverages the Permission Creep Index, which is a single and unified metric, ranging from 0 to 100, that calculates the gap between permissions granted and permissions used over a specific period. The higher the gap, the higher the index and the larger the potential attack surface. The Permission Creep Index only considers high-risk actions, meaning any action that can cause data leakage, service disruption or degradation, or security posture change. Permissions Management creates unique activity profiles for each identity and resource that are used as a baseline to detect anomalous behaviors.
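The exact PCI calculation is internal to Permissions Management, but the underlying idea, the gap between permissions granted and permissions used, can be illustrated with a simplified, hypothetical calculation (this is not the actual PCI formula):

```powershell
# Simplified illustration of a granted-versus-used gap; NOT the real Permission Creep Index formula.
$grantedHighRiskActions = 120   # hypothetical: high-risk actions an identity is allowed to perform
$usedHighRiskActions    = 9     # hypothetical: high-risk actions it actually performed in the period

$gapPercent = [math]::Round((($grantedHighRiskActions - $usedHighRiskActions) / $grantedHighRiskActions) * 100)
Write-Output "Unused high-risk permissions: $gapPercent% of what was granted was never used."
# The larger the gap, the larger the potential attack surface and the stronger the case for right-sizing.
```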
+
+1. [View risk metrics in your authorization system](../cloud-infrastructure-entitlement-management/ui-dashboard.md#view-metrics-related-to-avoidable-risk) in the Permissions Management Dashboard. This information is available for Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).
+ 1. View metrics related to avoidable risk - these metrics allow the Permission Management administrator to identify areas where they can reduce risks related to the principle of least permissions. Information includes [the Permissions Creep Index (PCI)](../cloud-infrastructure-entitlement-management/ui-dashboard.md#the-pci-heat-map) and [Analytics Dashboard](../cloud-infrastructure-entitlement-management/usage-analytics-home.md).
+
+
+ 1. Understand the [components of the Permissions Management Dashboard.](../cloud-infrastructure-entitlement-management/ui-dashboard.md#components-of-the-permissions-management-dashboard)
+
+2. View data about the activity in your authorization system
+
+ 1. [View user data on the PCI heat map](../cloud-infrastructure-entitlement-management/product-dashboard.md#view-user-data-on-the-pci-heat-map).
+ > [!NOTE]
+ > The higher the PCI, the higher the risk.
+
+ 2. [View information about users, roles, resources, and PCI trends](../cloud-infrastructure-entitlement-management/product-dashboard.md#view-information-about-users-roles-resources-and-pci-trends)
+ 3. [View identity findings](../cloud-infrastructure-entitlement-management/product-dashboard.md#view-identity-findings)
+ 4. [View resource findings](../cloud-infrastructure-entitlement-management/product-dashboard.md#view-resource-findings)
+3. [Configure your settings for data collection](../cloud-infrastructure-entitlement-management/product-data-sources.md) - use the **Data Collectors** dashboard in Permissions Management to view and configure settings for collecting data from your authorization systems.
+4. [View organizational and personal information](../cloud-infrastructure-entitlement-management/product-account-settings.md) - the **Account settings** dashboard in Permissions Management allows you to view personal information, passwords, and account preferences.
+5. [Select group-based permissions settings](../cloud-infrastructure-entitlement-management/how-to-create-group-based-permissions.md)
+6. [View information about identities, resources and tasks](../cloud-infrastructure-entitlement-management/usage-analytics-home.md) - the **Analytics** dashboard displays detailed information about:
+ 1. **Users**: Tracks assigned permissions and usage by users. For more information, see View analytic information about users.
+ 2. **Groups**: Tracks assigned permissions and usage of the group and the group members. For more information, see View analytic information about groups
+ 3. **Active Resources**: Tracks resources that have been used in the last 90 days. For more information, see View analytic information about active resources
+ 4. **Active Tasks**: Tracks tasks that have been performed in the last 90 days. For more information, see View analytic information about active tasks
+ 5. **Access Keys**: Tracks the permission usage of access keys for a given user. For more information, see View analytic information about access keys
+ 6. **Serverless Functions**: Tracks assigned permissions and usage of the serverless functions for AWS only. For more information, see View analytic information about serverless functions
+
+ System administrators can use this information to make decisions about granting permissions and reducing risk on unused permissions.
+
+## Step 3: Remediate & manage
+
+Right-size excessive and/or unused permissions in only a few clicks. Avoid any errors caused by manual processes and implement automatic remediation on all unused permissions for a predetermined set of identities and on a regular basis. You can also grant new permissions on-demand for just-in-time access to specific cloud resources.
+
+There are two facets to removing unused permissions: least privilege policy creation (remediation) and permissions-on-demand. With remediation, an administrator can create policies that remove unused permissions (also known as right-sizing permissions) to achieve least privilege across their multicloud environment.
+
+- [Manage roles/policies and permissions requests using the Remediation dashboard](../cloud-infrastructure-entitlement-management/ui-remediation.md).
+
+ The dashboard includes six subtabs:
+
+ - **Roles/Policies**: Use this subtab to perform Create Read Update Delete (CRUD) operations on roles/policies.
+    - **Role/Policy Name** - Displays the name of the role or the AWS policy.
+      - Note: An exclamation point (!) circled in red means the role or AWS policy has not been used.
+    - **Role Type** - Displays the type of role or AWS policy.
+ - **Permissions**: Use this subtab to perform Read Update Delete (RUD) on granted permissions.
+  - **Role/Policy Template**: Use this subtab to create a template for roles/policies.
+ - **Requests**: Use this subtab to view approved, pending, and processed Permission on Demand (POD) requests.
+  - **My Requests**: Use this subtab to manage the lifecycle of POD requests that you created or that need your approval.
+ - **Settings**: Use this subtab to select **Request Role/Policy Filters**, **Request Settings**, and **Auto-Approve** settings.
+
+**Best Practices for Remediation:**
+
+- **Creating activity-based roles/policies:** High-risk identities will be monitored and right-sized based on their historical activity. Leaving unused high-risk permissions assigned to identities creates unnecessary risk.
+- **Removing direct role assignments:** Permissions Management generates reports based on role assignments. In cases where high-risk roles are directly assigned, the Remediation permissions tab can query those identities and remove direct role assignments.
+- **Assigning read-only permissions:** Identities that are inactive or have high-risk permissions to production environments can be assigned read-only status. Access to production environments can be governed via Permissions On-demand.
+
+**Best Practices for Permissions On-demand:**
+
+- **Requesting Delete Permissions:** No user will have delete permissions unless they request them and are approved.
+- **Requesting Privileged Access:** High-privileged access is only granted through just-enough permissions and just-in-time access.
+- **Requesting Periodic Access:** Schedule recurring daily, weekly, or monthly permissions that are time-bound and revoked at the end of the period.
+- Manage users, roles and their access levels with the User management dashboard.
+
+ **Actions to try:**
+
+ - [Manage users](../cloud-infrastructure-entitlement-management/ui-user-management.md#manage-users)
+ - [Manage groups](../cloud-infrastructure-entitlement-management/ui-user-management.md#manage-groups)
+ - [Select group-based permissions settings](../cloud-infrastructure-entitlement-management/how-to-create-group-based-permissions.md)
+
+## Step 4: Monitor & alert
+
+Prevent data breaches caused by misuse and malicious exploitation of permissions with anomaly and outlier detection that alerts on any suspicious activity. Permissions Management continuously updates your Permission Creep Index and flags any incident, then immediately informs you with alerts via email. To further support rapid investigation and remediation, you can generate context-rich forensic reports around identities, actions, and resources.
+
+- Use queries to view information about user access with the **Audit** dashboard in Permissions Management. You can get an overview of queries a Permissions Management user has created to review how users access their authorization systems and accounts. The following options display at the top of the **Audit** dashboard:
+- A tab for each existing query. Select the tab to see details about the query.
+- **New Query**: Select the tab to create a new query.
+- **New tab (+)**: Select the tab to add a **New Query** tab.
+- **Saved Queries**: Select to view a list of saved queries.
+
+ **Actions to try:**
+
+ - [Use a query to view information](../cloud-infrastructure-entitlement-management/ui-audit-trail.md)
+ - [Create a custom query](../cloud-infrastructure-entitlement-management/how-to-create-custom-queries.md)
+ - [Generate an on-demand report from a query](../cloud-infrastructure-entitlement-management/how-to-audit-trail-results.md)
+ - [Filter and query user activity](../cloud-infrastructure-entitlement-management/product-audit-trail.md)
+
+Use the **Activity triggers** dashboard to view information and set alerts and triggers.
+
+- Set activity alerts and triggers
+
+ Our customizable machine learning-powered anomaly and outlier detection alerts will notify you of any suspicious activity such as deviations in usage profiles or abnormal access times. Alerts can be used to alert on permissions usage, access to resources, indicators of compromise, insider threats, or to track previous incidents.
+
+ **Actions to try**
+
+ - [View information about alerts and alert triggers](../cloud-infrastructure-entitlement-management/ui-triggers.md)
+ - [Create and view activity alerts and alert triggers](../cloud-infrastructure-entitlement-management/how-to-create-alert-trigger.md)
+ - [Create and view rule-based anomaly alerts and anomaly triggers](../cloud-infrastructure-entitlement-management/product-rule-based-anomalies.md)
+ - [Create and view statistical anomalies and anomaly triggers](../cloud-infrastructure-entitlement-management/product-statistical-anomalies.md)
+ - [Create and view permission analytics triggers](../cloud-infrastructure-entitlement-management/product-permission-analytics.md)
+
+**Best Practices for Custom Alerts:**
+
+- Permission assignments done outside of approved administrators
+
+    Example: Any activity done by the root user in AWS:
+
+ ![Diagram, Any activity done by root user in AWS.](media/permissions-management-trial-user-guide/custom-alerts-1.png)
+
+    Example: Alert for monitoring any direct Azure role assignment:
+
+ ![Diagram, Alert for monitoring any direct Azure role assignment done by anyone other than Admin user.](media/permissions-management-trial-user-guide/custom-alerts-2.png)
+
+- Access to critical sensitive resources
+
+ Example: Alert for monitoring any action on Azure resources
+
+ ![Diagram, Alert for monitoring any action on Azure resources.](media/permissions-management-trial-user-guide/custom-alerts-3.png)
+
+- Use of break glass accounts like root in AWS, global admin in Azure AD accessing subscriptions, etc.
+
+ Example: BreakGlass users should be used for emergency access only.
+
+ ![Diagram, Example of break glass account users used for emergency access only.](media/permissions-management-trial-user-guide/custom-alerts-4.png)
+
+- Create and view reports
+
+   To support rapid remediation, you can set up security reports to be delivered at custom intervals. Permissions Management has various system report types available that capture specific sets of data by cloud infrastructure (AWS, Azure, GCP), by account/subscription/project, and more. Reports are fully customizable and can be delivered via email at pre-configured intervals.
+
+ These reports enable you to:
+
+ - Make timely decisions.
+ - Analyze trends and system/user performance.
+ - Identify trends in data and high-risk areas so that management can address issues more quickly and improve their efficiency.
+ - Automate data analytics in an actionable way.
+  - Ensure compliance with audit requirements for periodic reviews of **who has access to what**.
+ - Look at views into **Separation of Duties** for security hygiene to determine who has admin permissions.
+  - See data for **identity governance** to ensure inactive users are decommissioned because they left the company, or to remove vendor accounts that have been left behind, old consultant accounts, or users who, as part of the Joiner/Mover/Leaver process, have moved on to another role and no longer use their access. Consider this a fail-safe to ensure dormant accounts are removed.
+  - Identify over-permissioned access so that you can later use Remediation to pursue **Zero Trust and least privilege.**
+
+ **Example of** [**Permissions Management Report**](https://microsoft.sharepoint.com/:v:/t/MicrosoftEntraPermissionsManagementAssets/EQWmUsMsdkZEnFVv-M9ZoagBd4B6JUQ2o7zRTupYrfxbGA)
+
+ **Actions to try**
+ - [View system reports in the Reports dashboard](../cloud-infrastructure-entitlement-management/product-reports.md)
+ - [View a list and description of system reports](../cloud-infrastructure-entitlement-management/all-reports.md)
+ - [Generate and view a system report](../cloud-infrastructure-entitlement-management/report-view-system-report.md)
+ - [Create, view, and share a custom report](../cloud-infrastructure-entitlement-management/report-create-custom-report.md)
+ - [Generate and download the Permissions analytics report](../cloud-infrastructure-entitlement-management/product-permissions-analytics-reports.md)
+
+**Key Reports to Monitor:**
+
+- **Permissions Analytics Report:** Lists the key permission risks, including Super identities, Inactive identities, Over-provisioned active identities, and more.
+- **Group entitlements and Usage reports:** Provides guidance on cleaning up directly assigned permissions.
+- **Access Key Entitlements and Usage reports:** Identifies high-risk service principals with old secrets that haven't been rotated every 90 days (best practice) or decommissioned due to lack of use (as recommended by the Cloud Security Alliance).
+
+## Next steps
+
+For more information about Permissions Management, see:
+
+**Microsoft Learn**: [Permissions management](../cloud-infrastructure-entitlement-management/index.yml).
+
+**Datasheet:** <https://aka.ms/PermissionsManagementDataSheet>
+
+**Solution Brief:** <https://aka.ms/PermissionsManagementSolutionBrief>
+
+**White Paper:** <https://aka.ms/CIEMWhitePaper>
+
+**Infographic:** <https://aka.ms/PermissionRisksInfographic>
+
+**Security paper:** [2021 State of Cloud Permissions Risks](https://scistorageprod.azureedge.net/assets/2021%20State%20of%20Cloud%20Permission%20Risks.pdf?sv=2019-07-07&sr=b&sig=Sb17HibpUtJm2hYlp6GYlNngGiSY5GcIs8IfpKbRlWk%3D&se=2022-05-27T20%3A37%3A22Z&sp=r)
+
+**Permissions Management Glossary:** <https://aka.ms/PermissionsManagementGlossary>
active-directory Apple Sso Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/apple-sso-plugin.md
Common settings:
- `https://sts.windows.net` - `https://login.partner.microsoftonline.cn` - `https://login.chinacloudapi.cn`
- - `https://login.microsoftonline.de`
- `https://login.microsoftonline.us` - `https://login.usgovcloudapi.net` - `https://login-us.microsoftonline.com`
active-directory Multi Service Web App Access Microsoft Graph As User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/multi-service-web-app-access-microsoft-graph-as-user.md
public class IndexModel : PageModel
# [Node.js](#tab/programming-language-nodejs)
-The web app gets the user's access token from the incoming requests header, which is then passed down to Microsoft Graph client to make an authenticated request to the `/me` endpoint.
+Using the [microsoft-identity-express](https://github.com/Azure-Samples/microsoft-identity-express) package, the web app gets the user's access token from the incoming request's header. microsoft-identity-express detects that the web app is hosted on App Service and gets the access token from the App Service authentication/authorization module. The access token is then passed down to the Microsoft Graph SDK client to make an authenticated request to the `/me` endpoint.
To see this code as part of a sample application, see *graphController.js* in the [sample on GitHub](https://github.com/Azure-Samples/ms-identity-easyauth-nodejs-storage-graphapi/tree/main/2-WebApp-graphapi-on-behalf).
+> [!NOTE]
+> The microsoft-identity-express package isn't required in your web app for basic authentication/authorization or to authenticate requests with Microsoft Graph. It's possible to [securely call downstream APIs](../../app-service/tutorial-auth-aad.md#call-api-securely-from-server-code) with only the App Service authentication/authorization module enabled.
+>
+> However, the App Service authentication/authorization is designed for more basic authentication scenarios. Later, when your web app needs to handle more complex scenarios, you can disable the App Service authentication/authorization module and microsoft-identity-express will already be a part of your app.
+ ```nodejs const graphHelper = require('../utils/graphHelper');
If you're finished with this tutorial and no longer need the web app or associat
## Next steps > [!div class="nextstepaction"]
-> [App service accesses Microsoft Graph as the app](multi-service-web-app-access-microsoft-graph-as-app.md)
+> [App service accesses Microsoft Graph as the app](multi-service-web-app-access-microsoft-graph-as-app.md)
active-directory Azure Ad Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/azure-ad-account.md
Azure AD account is an identity provider option for your self-service sign-up us
![Azure AD account in a self-service sign-up user flow](media/azure-ad-account/azure-ad-account-user-flow.png) ## Verifying the application's publisher domain
-As of November 2020, new application registrations show up as unverified in the user consent prompt unless [the application's publisher domain is verified](../develop/howto-configure-publisher-domain.md) ***and*** the company's identity has been verified with the Microsoft Partner Network and associated with the application. ([Learn more](../develop/publisher-verification-overview.md) about this change.) Note that for Azure AD user flows, the publisher's domain appears only when using a [Microsoft account](microsoft-account.md) or other Azure AD tenant as the identity provider. To meet these new requirements, do the following:
+As of November 2020, new application registrations show up as unverified in the user consent prompt unless [the application's publisher domain is verified](../develop/howto-configure-publisher-domain.md), ***and*** the company's identity has been verified with the Microsoft Partner Network and associated with the application. ([Learn more](../develop/publisher-verification-overview.md) about this change.) For Azure AD user flows, the publisher's domain appears only when using a [Microsoft account](microsoft-account.md) or other Azure AD tenant as the identity provider. To meet these new requirements, follow these steps:
1. [Verify your company identity using your Microsoft Partner Network (MPN) account](/partner-center/verification-responses). This process verifies information about your company and your company's primary contact. 1. Complete the publisher verification process to associate your MPN account with your app registration using one of the following options:
As of November 2020, new application registrations show up as unverified in the
## Next steps - [Add Azure Active Directory B2B collaboration users](add-users-administrator.md)-- [Add self-service sign-up to an app](self-service-sign-up-user-flow.md)
+- [Add self-service sign-up to an app](self-service-sign-up-user-flow.md)
active-directory Automate Provisioning To Applications Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/automate-provisioning-to-applications-introduction.md
Thousands of organizations are running Azure AD cloud-hosted services, with its
| What | From | To | Read | | - | - | - | - | | Employees and contractors| HR systems| AD and Azure AD| [Connect identities with your system of record](automate-provisioning-to-applications-solutions.md) |
-| Existing AD users and groups| AD| Azure AD| [Synchronize identities between Azure AD and Active Directory](automate-provisioning-to-applications-solutions.md) |
+| Existing AD users and groups| AD DS| Azure AD| [Synchronize identities between Azure AD and Active Directory](automate-provisioning-to-applications-solutions.md) |
| Users, groups| Azure AD| SaaS and on-prem apps| [Automate provisioning to non-Microsoft applications](../governance/entitlement-management-organization.md) | | Access rights| Azure AD Identity Governance| SaaS and on-prem apps| [Entitlement management](../governance/entitlement-management-overview.md) | | Existing users and groups| AD, SaaS and on-prem apps| Identity governance (so I can review them)| [Azure AD Access reviews](../governance/access-reviews-overview.md) |
active-directory Automate Provisioning To Applications Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/automate-provisioning-to-applications-solutions.md
Previously updated: 09/23/2022. Last updated: 09/29/2022.
The Azure AD provisioning service enables organizations to [bring identities fro
### On-premises HR + joining multiple data sources
-To create a full user profile for an employee identity, organizations often merge information from multiple HR systems, databases, and other user data stores. MIM provides a rich set of [connectors](https://learn.microsoft.com/microsoft-identity-manager/supported-management-agents) and integration solutions interoperating with heterogeneous platforms.
+To create a full user profile for an employee identity, organizations often merge information from multiple HR systems, databases, and other user data stores. MIM provides a rich set of [connectors](https://learn.microsoft.com/microsoft-identity-manager/supported-management-agents) and integration solutions interoperating with heterogeneous platforms both on-premises and in the cloud.
MIM offers [rule extension](/previous-versions/windows/desktop/forefront-2010/ms698810(v=vs.100)?redirectedfrom=MSDN) and [workflow capabilities](https://microsoft.github.io/MIMWAL/) features for advanced scenarios requiring data transformation and consolidation from multiple sources. These connectors, rule extensions, and workflow capabilities enable organizations to aggregate user data in the MIM metaverse to form a single identity for each user. The identity can be [provisioned into downstream systems](/microsoft-identity-manager/microsoft-identity-manager-2016-supported-platforms) such as AD DS.
The scenarios are divided by the direction of synchronization needed, and are li
Use the numbered sections in the next two section to cross reference the following table.
-**Synchronize identities from AD into Azure AD**
+**Synchronize identities from AD DS into Azure AD**
1. For users in AD that need access to Office 365 or other applications that are connected to Azure AD, Azure AD Connect cloud sync is the first solution to explore. It provides a lightweight solution to create users in Azure AD, manage password resets, and synchronize groups. Configuration and management are primarily done in the cloud, minimizing your on-premises footprint. It provides high availability and automatic failover, ensuring password resets and synchronization continue, even if there's an issue with on-premises servers. 1. For complex, large-scale AD to Azure AD sync needs such as synchronizing groups over 50,000 and device sync, customers can use Azure AD Connect sync to meet their needs.
-**Synchronize identities from Azure AD into AD**
+**Synchronize identities from Azure AD into AD DS**
As customers transition identity management to the cloud, more users and groups are created directly in Azure AD. However, they still need a presence on-premises in AD DS to access various resources.
-3. When an external user from a partner organization is created in Azure AD using B2B, MIM can automatically provision them [into AD](/microsoft-identity-manager/microsoft-identity-manager-2016-graph-b2b-scenario) and give those guests access to on-premises Windows-Integrated Authentication or Kerberos-based applications.
+3. When an external user from a partner organization is created in Azure AD using B2B, MIM can automatically provision them [into AD DS](/microsoft-identity-manager/microsoft-identity-manager-2016-graph-b2b-scenario) and give those guests access to [on-premises Windows-Integrated Authentication or Kerberos-based applications](https://learn.microsoft.com/azure/active-directory/external-identities/hybrid-cloud-to-on-premises). Alternatively, customers can use [PowerShell scripts](https://github.com/Azure-Samples/B2B-to-AD-Sync) to automate the creation of guest accounts on-premises.
1. When a group is created in Azure AD, it can be automatically synchronized to AD DS using [Azure AD Connect sync](../hybrid/how-to-connect-group-writeback-v2.md).
As customers transition identity management to the cloud, more users and groups
| 1 |Users, groups| AD DS| Azure AD| [Azure AD Connect Cloud Sync](https://learn.microsoft.com/azure/active-directory/cloud-sync/what-is-cloud-sync) | | 2 |Users, groups, devices| AD DS| Azure AD| [Azure AD Connect Sync](https://learn.microsoft.com/azure/active-directory/hybrid/whatis-azure-ad-connect) | | 3 |Groups| Azure AD| AD DS| [Azure AD Connect Sync](../hybrid/how-to-connect-group-writeback-v2.md) |
-| 4 |Guest accounts| Azure AD| AD DS| [MIM](/microsoft-identity-manager/microsoft-identity-manager-2016-graph-b2b-scenario) |
+| 4 |Guest accounts| Azure AD| AD DS| [MIM](/microsoft-identity-manager/microsoft-identity-manager-2016-graph-b2b-scenario), [PowerShell](https://github.com/Azure-Samples/B2B-to-AD-Sync)|
| 5 |Users, groups| Azure AD| Managed AD| [Azure AD Domain Services](https://azure.microsoft.com/services/active-directory-ds/) | The table depicts common scenarios and the recommended technology.
After users are provisioned into Azure AD, use Lifecycle Workflows (LCW) to auto
### Reconcile changes made directly in the target system
-Organizations often need a complete audit trail of what users have access to applications containing data subject to regulation. To provide an audit trail, any access provided to a user directly must be traceable through the system of record. MIM provides the [reconciliation capabilities](/microsoft-identity-manager/mim-how-provision-users-adds) to detect changes made directly in a target system and roll back the changes. In addition to detecting changes in target applications, MIM can import identities from third party applications to Azure AD. These applications often augment the set of user records that originated in the HR system.
+Organizations often need a complete audit trail of what users have access to applications containing data subject to regulation. To provide an audit trail, any access provided to a user directly must be traceable through the system of record. MIM provides the reconciliation capabilities to detect changes made directly in a target system and roll back the changes. In addition to detecting changes in target applications, MIM can import identities from third party applications to Azure AD. These applications often augment the set of user records that originated in the HR system.
### Next steps
active-directory Security Operations Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-applications.md
Monitor changes to application configuration. Specifically, configuration change
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | |-|-|-|-|-|
-| Dangling URI| High| Azure AD Logs and Application Registration| Service-Core Directory, Category-ApplicationManagement<br>Activity: Update Application<br>Success - Property Name AppAddress| For example, look for dangling URIs that point to a domain name that no longer exists or one that you don't explicitly own.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/URLAddedtoApplicationfromUnknownDomain.yaml)<br><br>[Link to Sigma repo](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Dangling URI| High| Azure AD Logs and Application Registration| Service-Core Directory, Category-ApplicationManagement<br>Activity: Update Application<br>Success - Property Name AppAddress| For example, look for dangling URIs that point to a domain name that no longer exists or one that you don't explicitly own.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/URLAddedtoApplicationfromUnknownDomain.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
| Redirect URI configuration changes| High| Azure AD logs| Service-Core Directory, Category-ApplicationManagement<br>Activity: Update Application<br>Success - Property Name AppAddress| Look for URIs not using HTTPS*, URIs with wildcards at the end or the domain of the URL, URIs that are NOT unique to the application, URIs that point to a domain you don't control.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ApplicationRedirectURLUpdate.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | Alert when these changes are detected.
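To pull these **Update application** audit events for ad hoc review (in addition to the linked Microsoft Sentinel and Sigma content), a hedged Microsoft Graph PowerShell sketch along these lines can list recent entries; it assumes the Microsoft.Graph PowerShell SDK is installed and the signed-in account has the `AuditLog.Read.All` permission:

```powershell
# Hedged sketch: list recent "Update application" audit events so redirect URI / AppAddress changes can be reviewed.
Connect-MgGraph -Scopes "AuditLog.Read.All"

$filter = [uri]::EscapeDataString("activityDisplayName eq 'Update application'")
$uri    = "https://graph.microsoft.com/v1.0/auditLogs/directoryAudits?`$filter=$filter&`$top=50"

$response = Invoke-MgGraphRequest -Method GET -Uri $uri

foreach ($record in $response.value) {
    [pscustomobject]@{
        Time   = $record.activityDateTime
        Actor  = $record.initiatedBy.user.userPrincipalName
        Target = ($record.targetResources | Select-Object -First 1).displayName
    }
}
```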
active-directory Lifecycle Workflow Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-tasks.md
Lifecycle Workflows come with many pre-configured tasks that are designed to aut
Lifecycle Workflow's built-in tasks each include an identifier, known as **taskDefinitionID**, and can be used to create either new workflows from scratch, or inserted into workflow templates so that they fit the needs of your organization. For more information on templates available for use with Lifecycle Workflows, see: [Lifecycle Workflow Templates](lifecycle-workflow-templates.md).
-Lifecycle Workflows currently support the following tasks:
-|Task |taskDefinitionID |
-|||
-|[Send welcome email to new hire](lifecycle-workflow-tasks.md#send-welcome-email-to-new-hire) | 70b29d51-b59a-4773-9280-8841dfd3f2ea |
-|[Generate Temporary Access Pass and send via email to user's manager](lifecycle-workflow-tasks.md#generate-temporary-access-pass-and-send-via-email-to-users-manager) | 1b555e50-7f65-41d5-b514-5894a026d10d |
-|[Add user to groups](lifecycle-workflow-tasks.md#add-user-to-groups) | 22085229-5809-45e8-97fd-270d28d66910 |
-|[Add user to teams](lifecycle-workflow-tasks.md#add-user-to-teams) | e440ed8d-25a1-4618-84ce-091ed5be5594 |
-|[Enable user account](lifecycle-workflow-tasks.md#enable-user-account) | 6fc52c9d-398b-4305-9763-15f42c1676fc |
-|[Run a custom task extension](lifecycle-workflow-tasks.md#run-a-custom-task-extension) | 4262b724-8dba-4fad-afc3-43fcbb497a0e |
-|[Disable user account](lifecycle-workflow-tasks.md#disable-user-account) | 1dfdfcc7-52fa-4c2e-bf3a-e3919cc12950 |
-|[Remove user from selected group](lifecycle-workflow-tasks.md#remove-user-from-selected-groups) | 1953a66c-751c-45e5-8bfe-01462c70da3c |
-|[Remove users from all groups](lifecycle-workflow-tasks.md#remove-users-from-all-groups) | b3a31406-2a15-4c9a-b25b-a658fa5f07fc |
-|[Remove user from teams](lifecycle-workflow-tasks.md#remove-user-from-teams) | 06aa7acb-01af-4824-8899-b14e5ed788d6 |
-|[Remove user from all teams](lifecycle-workflow-tasks.md#remove-users-from-all-teams) | 81f7b200-2816-4b3b-8c5d-dc556f07b024 |
-|[Remove all license assignments from user](lifecycle-workflow-tasks.md#remove-all-license-assignments-from-user) | 8fa97d28-3e52-4985-b3a9-a1126f9b8b4e |
-|[Delete user](lifecycle-workflow-tasks.md#delete-user) | 8d18588d-9ad3-4c0f-99d0-ec215f0e3dff |
-|[Send email to manager before user last day](lifecycle-workflow-tasks.md#send-email-to-manager-before-user-last-day) | 52853a3e-f4e5-4eb8-bb24-1ac09a1da935 |
-|[Send email on users last day](lifecycle-workflow-tasks.md#send-email-on-users-last-day) | 9c0a1eaf-5bda-4392-9d9e-6e155bb57411 |
-|[Send offboarding email to users manager after their last day](lifecycle-workflow-tasks.md#send-offboarding-email-to-users-manager-after-their-last-day) | 6f22ddd4-b3a5-47a4-a846-0d7c201a49ce |
+Lifecycle Workflows currently support the following tasks:
+
|Task |taskDefinitionID |Category |
+||||
+|[Send welcome email to new hire](lifecycle-workflow-tasks.md#send-welcome-email-to-new-hire) | 70b29d51-b59a-4773-9280-8841dfd3f2ea | Joiner |
+|[Generate Temporary Access Pass and send via email to user's manager](lifecycle-workflow-tasks.md#generate-temporary-access-pass-and-send-via-email-to-users-manager) | 1b555e50-7f65-41d5-b514-5894a026d10d | Joiner |
+|[Add user to groups](lifecycle-workflow-tasks.md#add-user-to-groups) | 22085229-5809-45e8-97fd-270d28d66910 | Joiner, Leaver
+|[Add user to teams](lifecycle-workflow-tasks.md#add-user-to-teams) | e440ed8d-25a1-4618-84ce-091ed5be5594 | Joiner, Leaver
+|[Enable user account](lifecycle-workflow-tasks.md#enable-user-account) | 6fc52c9d-398b-4305-9763-15f42c1676fc | Joiner, Leaver
+|[Run a custom task extension](lifecycle-workflow-tasks.md#run-a-custom-task-extension) | 4262b724-8dba-4fad-afc3-43fcbb497a0e | Joiner, Leaver
+|[Disable user account](lifecycle-workflow-tasks.md#disable-user-account) | 1dfdfcc7-52fa-4c2e-bf3a-e3919cc12950 | Leaver
+|[Remove user from selected group](lifecycle-workflow-tasks.md#remove-user-from-selected-groups) | 1953a66c-751c-45e5-8bfe-01462c70da3c | Leaver
+|[Remove users from all groups](lifecycle-workflow-tasks.md#remove-users-from-all-groups) | b3a31406-2a15-4c9a-b25b-a658fa5f07fc | Leaver
+|[Remove user from teams](lifecycle-workflow-tasks.md#remove-user-from-teams) | 06aa7acb-01af-4824-8899-b14e5ed788d6 | Leaver |
+|[Remove user from all teams](lifecycle-workflow-tasks.md#remove-users-from-all-teams) | 81f7b200-2816-4b3b-8c5d-dc556f07b024 | Leaver |
+|[Remove all license assignments from user](lifecycle-workflow-tasks.md#remove-all-license-assignments-from-user) | 8fa97d28-3e52-4985-b3a9-a1126f9b8b4e | Leaver
+|[Delete user](lifecycle-workflow-tasks.md#delete-user) | 8d18588d-9ad3-4c0f-99d0-ec215f0e3dff | Leaver |
+|[Send email to manager before user last day](lifecycle-workflow-tasks.md#send-email-to-manager-before-user-last-day) | 52853a3e-f4e5-4eb8-bb24-1ac09a1da935 | Leaver |
+|[Send email on users last day](lifecycle-workflow-tasks.md#send-email-on-users-last-day) | 9c0a1eaf-5bda-4392-9d9e-6e155bb57411 | Leaver |
+|[Send offboarding email to users manager after their last day](lifecycle-workflow-tasks.md#send-offboarding-email-to-users-manager-after-their-last-day) | 6f22ddd4-b3a5-47a4-a846-0d7c201a49ce | Leaver |
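The following hedged sketch shows one way a **taskDefinitionID** from this table might be referenced when creating a workflow through Microsoft Graph PowerShell. The request body is a simplified approximation, and at the time of this update the Lifecycle Workflows API was exposed through the beta endpoint, so check the API reference for the exact schema before using it:

```powershell
# Hedged sketch: create a simple "joiner" workflow that runs the "Send welcome email to new hire" task.
# The body below approximates the Lifecycle Workflows schema; verify against the Graph API reference.
Connect-MgGraph -Scopes "LifecycleWorkflows.ReadWrite.All"

$body = @{
    displayName = "Onboard new hires (sample)"
    description = "Send a welcome email on the employee hire date."
    category    = "joiner"
    isEnabled   = $true
    executionConditions = @{
        "@odata.type" = "#microsoft.graph.identityGovernance.triggerAndScopeBasedConditions"
        scope = @{
            "@odata.type" = "#microsoft.graph.identityGovernance.ruleBasedSubjectSet"
            rule          = "department eq 'Sales'"   # hypothetical scoping rule
        }
        trigger = @{
            "@odata.type"      = "#microsoft.graph.identityGovernance.timeBasedAttributeTrigger"
            timeBasedAttribute = "employeeHireDate"
            offsetInDays       = 0
        }
    }
    tasks = @(
        @{
            taskDefinitionId = "70b29d51-b59a-4773-9280-8841dfd3f2ea"   # Send welcome email to new hire
            displayName      = "Send welcome email to new hire"
            isEnabled        = $true
            arguments        = @()
        }
    )
} | ConvertTo-Json -Depth 10

Invoke-MgGraphRequest -Method POST -ContentType "application/json" -Body $body `
    -Uri "https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows"
```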
## Common task parameters (preview)
active-directory Set Employee Leave Date Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/set-employee-leave-date-time.md
In delegated scenarios, the signed-in user needs the Global Administrator role t
Updating the employeeLeaveDateTime requires the User-LifeCycleInfo.ReadWrite.All application permission.
->[!NOTE]
-> The User-LifeCycleInfo.ReadWrite.All permissions is currently hidden and cannot be configured in Graph Explorer or the API permission blade of app registrations.
- ## Set employeeLeaveDateTime via PowerShell To set the employeeLeaveDateTime for a user using PowerShell enter the following information:
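As a hedged sketch of one way to do this (the exact commands may differ), the property can be set with a Microsoft Graph PATCH request from PowerShell; the user object ID and date below are hypothetical placeholders, and at the time of this update employeeLeaveDateTime was surfaced through the beta endpoint:

```powershell
# Hedged sketch: set employeeLeaveDateTime for one user. The user ID and date are placeholders.
Connect-MgGraph -Scopes "User-LifeCycleInfo.ReadWrite.All"

$userId = "aaaaaaaa-1111-2222-3333-bbbbbbbbbbbb"   # hypothetical user object ID
$body   = @{ employeeLeaveDateTime = "2022-10-31T23:59:59Z" } | ConvertTo-Json

Invoke-MgGraphRequest -Method PATCH -Body $body -ContentType "application/json" `
    -Uri "https://graph.microsoft.com/beta/users/$userId"
```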
active-directory Understanding Lifecycle Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/understanding-lifecycle-workflows.md
The following reference document provides an overview of a workflow created usin
|Parameter |Display String |Description |Admin Consent Required | |||||
-|LifecycleWorkflows.Read.All | Read all Lifecycle workflows, tasks, user states| Allows the app to list and read all workflows, tasks, user states related to lifecycle workflows on behalf of the signed-in user.| Yes
-|LifecycleWorkflows.ReadWrite.All | Read and write all lifecycle workflows, tasks, user states.| Allows the app to create, update, list, read and delete all workflows, tasks, user states related to lifecycle workflows on behalf of the signed-in user.| Yes
+|LifecycleWorkflows.Read.All | Read all lifecycle workflows and tasks.| Allows the app to list and read all workflows and tasks related to lifecycle workflows on behalf of the signed-in user.| Yes
+|LifecycleWorkflows.ReadWrite.All | Read and write all lifecycle workflows and tasks.| Allows the app to create, update, list, read and delete all workflows and tasks related to lifecycle workflows on behalf of the signed-in user.| Yes
## Parts of a workflow A workflow can be broken down in to the following three main parts.
active-directory Add Application Portal Setup Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal-setup-sso.md
Title: 'Quickstart: Enable single sign-on for an enterprise application'
+ Title: Enable single sign-on for an enterprise application
description: Enable single sign-on for an enterprise application in Azure Active Directory. Previously updated: 09/21/2021. Last updated: 09/29/2022. #Customer intent: As an administrator of an Azure AD tenant, I want to enable single sign-on for an enterprise application.
-# Quickstart: Enable single sign-on for an enterprise application
+# Enable single sign-on for an enterprise application
-In this quickstart, you use the Azure Active Directory Admin Center to enable single sign-on (SSO) for an enterprise application that you added to your Azure Active Directory (Azure AD) tenant. After you configure SSO, your users can sign in by using their Azure AD credentials.
+In this article, you use the Azure Active Directory Admin Center to enable single sign-on (SSO) for an enterprise application that you added to your Azure Active Directory (Azure AD) tenant. After you configure SSO, your users can sign in by using their Azure AD credentials.
-Azure AD has a gallery that contains thousands of pre-integrated applications that use SSO. This quickstart uses an enterprise application named **Azure AD SAML Toolkit** as an example, but the concepts apply for most pre-configured enterprise applications in the gallery.
+Azure AD has a gallery that contains thousands of pre-integrated applications that use SSO. This article uses an enterprise application named **Azure AD SAML Toolkit 1** as an example, but the concepts apply for most pre-configured enterprise applications in the gallery.
-It is recommended that you use a non-production environment to test the steps in this quickstart.
+It is recommended that you use a non-production environment to test the steps in this article.
## Prerequisites
To enable SSO for an application:
1. In the left menu, select **Enterprise applications**. The **All applications** pane opens and displays a list of the applications in your Azure AD tenant. Search for and select the application that you want to use. For example, **Azure AD SAML Toolkit 1**. 1. In the **Manage** section of the left menu, select **Single sign-on** to open the **Single sign-on** pane for editing. 1. Select **SAML** to open the SSO configuration page. After the application is configured, users can sign in to it by using their credentials from the Azure AD tenant.
-1. The process of configuring an application to use Azure AD for SAML-based SSO varies depending on the application. For any of the enterprise applications in the gallery, use the link to find information about the steps needed to configure the application. The steps for the **Azure AD SAML Toolkit** are listed in this quickstart.
+1. The process of configuring an application to use Azure AD for SAML-based SSO varies depending on the application. For any of the enterprise applications in the gallery, use the **configuration guide** link to find information about the steps needed to configure the application. The steps for the **Azure AD SAML Toolkit 1** are listed in this article.
:::image type="content" source="media/add-application-portal-setup-sso/saml-configuration.png" alt-text="Configure single sign-on for an enterprise application.":::
To configure SSO in Azure AD:
1. For **Reply URL (Assertion Consumer Service URL)**, enter `https://samltoolkit.azurewebsites.net/SAML/Consume`. 1. For **Sign on URL**, enter `https://samltoolkit.azurewebsites.net/`. 1. Select **Save**.
-1. In the **SAML Signing Certificate** section, select **Download** for **Certificate (Raw)** to download the SAML signing certificate and save it to be used later.
+1. In the **SAML Certificates** section, select **Download** for **Certificate (Raw)** to download the SAML signing certificate and save it to be used later.
## Configure single sign-on in the application
To register a user account with the application:
:::image type="content" source="media/add-application-portal-setup-sso/toolkit-register.png" alt-text="Register a user account in the Azure AD SAML Toolkit application.":::
-1. For **Email**, enter the email address of the user that will access the application. For example, in a previous quickstart, the user account was created that uses the address of `contosouser1@contoso.com`. Be sure to change `contoso.com` to the domain of your tenant.
+1. For **Email**, enter the email address of the user that will access the application. Ensure that the user account is already assigned to the application.
1. Enter a **Password** and confirm it. 1. Select **Register**. ### Configure SAML settings
-To configure SAML setting for the application:
+To configure SAML settings for the application:
-1. Signed in with the credentials of the user account that you created, select **SAML Configuration** at the upper-left corner of the page.
+1. While signed in with the credentials of the user account that you already assigned to the application, select **SAML Configuration** at the upper-left corner of the page.
1. Select **Create** in the middle of the page. 1. For **Login URL**, **Azure AD Identifier**, and **Logout URL**, enter the values that you recorded earlier. 1. Select **Choose file** to upload the certificate that you previously downloaded.
You can test the single sign-on configuration from the **Set up single sign-on**
To test SSO:
-1. In the **Test single sign-on with Azure AD SAML Toolkit 1** section, on the **Set up single sign-on** pane, select **Test**.
+1. In the **Test single sign-on with Azure AD SAML Toolkit 1** section, on the **Set up single sign-on with SAML** pane, select **Test**.
1. Sign in to the application using the Azure AD credentials of the user account that you assigned to the application.
-## Clean up resources
-
-If you are planning to complete the next quickstart, keep the enterprise application that you created. Otherwise, you can consider deleting it to clean up your tenant.
## Next steps
-Learn how to configure the properties of an enterprise application.
-> [!div class="nextstepaction"]
-> [Configure an application](add-application-portal-configure.md)
+- [Manage self service access](manage-self-service-access.md)
+- [Configure user consent](configure-user-consent.md)
+- [Grant tenant-wide admin consent](grant-admin-consent.md)
active-directory Plan Sso Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/plan-sso-deployment.md
The following SSO protocols are available to use:
## Next steps -- Consider completing the single sign-on training in [Enable single sign-on for applications by using Azure Active Directory](/training/modules/enable-single-sign-on).
+- [Enable single sign-on for applications by using Azure Active Directory](add-application-portal-setup-sso.md).
active-directory What Is Application Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/what-is-application-management.md
To [manage access](what-is-access-management.md) for an application, you want to
You can [manage user consent settings](configure-user-consent.md) to choose whether users can allow an application or service to access user profiles and organizational data. When applications are granted access, users can sign in to applications integrated with Azure AD, and the application can access your organization's data to deliver rich data-driven experiences.
-Users often are unable to consent to the permissions an application is requesting. Configure the admin consent workflow to allow users to provide a justification and request an administrator's review and approval of an application. For training on how to configure admin consent workflow in your Azure AD tenant, see [Configure admin consent workflow](/training/modules/configure-admin-consent-workflow).
+Users often are unable to consent to the permissions an application is requesting. Configure the admin consent workflow to allow users to provide a justification and request an administrator's review and approval of an application. To learn how to configure admin consent workflow in your Azure AD tenant, see [Configure admin consent workflow](configure-admin-consent-workflow.md).
As an administrator, you can [grant tenant-wide admin consent](grant-admin-consent.md) to an application. Tenant-wide admin consent is necessary when an application requires permissions that regular users aren't allowed to grant, and allows organizations to implement their own review processes. Always carefully review the permissions the application is requesting before granting consent. When an application has been granted tenant-wide admin consent, all users are able to sign into the application unless it has been configured to require user assignment. ### Single sign-on
-Consider implementing SSO in your application. You can manually configure most applications for SSO. The most popular options in Azure AD are [SAML-based SSO and OpenID Connect-based SSO](../develop/active-directory-v2-protocols.md). Before you start, make sure that you understand the requirements for SSO and how to [plan for deployment](plan-sso-deployment.md). For training related to configuring SAML-based SSO for an enterprise application in your Azure AD tenant, see [Enable single sign-on for an application by using Azure Active Directory](/training/modules/enable-single-sign-on).
+Consider implementing SSO in your application. You can manually configure most applications for SSO. The most popular options in Azure AD are [SAML-based SSO and OpenID Connect-based SSO](../develop/active-directory-v2-protocols.md). Before you start, make sure that you understand the requirements for SSO and how to [plan for deployment](plan-sso-deployment.md). For more information on how to configure SAML-based SSO for an enterprise application in your Azure AD tenant, see [Enable single sign-on for an application by using Azure Active Directory](add-application-portal-setup-sso.md).
### User, group, and owner assignment
aks Limit Egress Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/limit-egress-traffic.md
The following FQDN / application rules are required for using cluster extensions
| **`<region>.dp.kubernetesconfiguration.azure.us`** | **`HTTPS:443`** | This address is used to fetch configuration information from the Cluster Extensions service and report extension status to the service. | | **`mcr.microsoft.com, *.data.mcr.microsoft.com`** | **`HTTPS:443`** | This address is required to pull container images for installing cluster extension agents on AKS cluster.| ++
+> [!NOTE]
+> If an add-on isn't explicitly listed here, the core requirements cover it.
+ ## Restrict egress traffic using Azure firewall Azure Firewall provides an Azure Kubernetes Service (`AzureKubernetesService`) FQDN Tag to simplify this configuration.
aks Update Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/update-credentials.md
To check the expiration date of your service principal, use the [az ad sp creden
```azurecli SP_ID=$(az aks show --resource-group myResourceGroup --name myAKSCluster \ --query servicePrincipalProfile.clientId -o tsv)
-az ad sp credential list --id "$SP_ID" --query "[].endDate" -o tsv
+az ad sp credential list --id "$SP_ID" --query "[].endDateTime" -o tsv
``` ### Reset the existing service principal credential
api-management Api Management Advanced Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-advanced-policies.md
This policy can be used in the following policy [sections](./api-management-howt
## <a name="SetHttpProxy"></a> Set HTTP proxy
-The `proxy` policy allows you to route requests forwarded to backends via an HTTP proxy. Only HTTP (not HTTPS) is supported between the gateway and the proxy. Basic and NTLM authentication only.
+The `proxy` policy allows you to route requests forwarded to backends via an HTTP proxy. Only HTTP (not HTTPS) is supported between the gateway and the proxy, and only Basic and NTLM authentication are supported. To route a send-request call via the HTTP proxy, you must place the set HTTP proxy policy inside the `send-request` policy block.
[!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]
api-management Api Management Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-subscriptions.md
Previously updated : 01/05/2022 Last updated : 09/27/2022 # Subscriptions in Azure API Management
By publishing APIs through API Management, you can easily secure API access usin
* Rejected immediately by the API Management gateway. * Not forwarded to the back-end services.
-To access APIs, you'll need a subscription and a subscription key. A *subscription* is a named container for a pair of subscription keys.
-
-> [!NOTE]
-> Regularly regenerating keys is a common security precaution. Like most Azure services requiring a subscription key, API Management generates keys in pairs. Each application using the service can switch from *key A* to *key B* and regenerate key A with minimal disruption, and vice versa.
+To access APIs, developers need a subscription and a subscription key. A *subscription* is a named container for a pair of subscription keys.
In addition,
-* Developers can get subscriptions without approval from API publishers.
+* Developers can get subscriptions without needing approval from API publishers.
* API publishers can create subscriptions directly for API consumers. > [!TIP]
In addition,
> - [Client certificates](api-management-howto-mutual-certificates-for-clients.md) > - [Restrict caller IPs](./api-management-access-restriction-policies.md#RestrictCallerIPs)
+## Manage subscription keys
+
+Regularly regenerating keys is a common security precaution. Like most Azure services requiring a subscription key, API Management generates keys in pairs. Each application using the service can switch from *key A* to *key B* and regenerate key A with minimal disruption, and vice versa.
+> [!NOTE]
+> * API Management doesn't provide built-in features to manage the lifecycle of subscription keys, such as setting expiration dates or automatically rotating keys. You can develop workflows to automate these processes using tools such as Azure PowerShell or the Azure SDKs.
+> * To enforce time-limited access to APIs, API publishers may be able to use policies with subscription keys, or use a mechanism that provides built-in expiration such as token-based authentication.
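As a hedged sketch of such a workflow, the call below uses `Invoke-AzRestMethod` to regenerate a subscription's primary key; the `regeneratePrimaryKey` operation and API version are assumptions to verify against the current API Management REST reference, and the resource names are placeholders.

```powershell
# Hedged sketch: regenerate the primary key of an API Management subscription.
# <azure-subscription-id>, <resource-group>, <apim-service-name>, and <sid> are placeholders.
$path = "/subscriptions/<azure-subscription-id>/resourceGroups/<resource-group>" +
        "/providers/Microsoft.ApiManagement/service/<apim-service-name>" +
        "/subscriptions/<sid>/regeneratePrimaryKey?api-version=2021-08-01"

Invoke-AzRestMethod -Method POST -Path $path
```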
+ ## Scope of subscriptions Subscriptions can be associated with various scopes: [product](api-management-howto-add-products.md), all APIs, or an individual API.
app-service Configure Connect To Azure Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-connect-to-azure-storage.md
This guide shows how to mount Azure Storage Files as a network share in a Window
- Make static content like video and images readily available for your App Service app. - Write application log files or archive older application log to Azure File shares. - Share content across multiple apps or with other Azure services.-- Mount Azure Storage in a Windows container in a Standard tier or higher plan, including Isolated ([App Service environment v3](environment/overview.md)).
+- Mount Azure Storage in a Windows container, including Isolated ([App Service environment v3](environment/overview.md)).
The following features are supported for Windows containers:
app-service Tutorial Connect App Access Microsoft Graph As App Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-app-access-microsoft-graph-as-app-javascript.md
const appSettings = {
}, authRoutes: { redirect: "/.auth/login/aad/callback", // Enter the redirect URI here
- error: "/error", // enter the relative path to error handling route
unauthorized: "/unauthorized" // enter the relative path to unauthorized route }, }
getAuthenticatedClient = (accessToken) => {
[!INCLUDE [tutorial-clean-up-steps](./includes/tutorial-cleanup.md)]
automation Automation Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-availability-zones.md
+
+ Title: Availability zones support for Azure Automation
+description: This article provides an overview of Azure availability zones and regions for Azure Automation
+keywords: automation availability zones.
++ Last updated : 06/29/2022++++
+# Availability zones support for Azure Automation
+
+Azure Automation uses [Azure availability zones](../availability-zones/az-overview.md#availability-zones) to provide improved resiliency and high availability to a service instance in a specific Azure region.
+
+[Azure availability zones](../availability-zones/az-overview.md#availability-zones) is a
+high-availability offering that protects your applications and data from data center failures.
+Availability zones are unique physical locations within an Azure region, and each region comprises one or more data centers equipped with independent power, cooling, and networking. To ensure resiliency, a minimum of three separate zones is required in all enabled regions.
+
+A zone-redundant Automation account automatically distributes traffic for management operations and runbook jobs across the availability zones in the supported region. Replication across these physically separate zones is handled at the service level, making the service resilient to a zone failure with no impact on the availability of Automation accounts in the same region.
+
+If a zone goes down, no action is required from you to recover from the zone failure; the service remains accessible through the other available zones. The service detects that the zone is down and automatically distributes the traffic to the available zones as needed.
+
+## Availability zone considerations
+
+- In all regions that support availability zones, zone redundancy for Automation accounts is enabled by default and can't be disabled. No action is required from you because it's enabled and managed by the service.
+- All new Automation accounts with the Basic SKU are created with zone redundancy natively.
+- All existing Automation accounts become zone redundant automatically. No action is required from you.
+- In a zone-down scenario, you might see brief performance degradation until the service's self-healing rebalances the underlying capacity to adjust to healthy zones. This isn't dependent on zone restoration; the service's self-healing compensates for a lost zone by using capacity from other zones.
+- In a zone-wide failure scenario, you must follow the guidance provided to set up a disaster recovery for Automation accounts in a secondary region.
+- Availability zone support for Automation accounts covers only the [Process Automation](/azure/automation/overview#process-automation) feature, to provide improved resiliency for runbook automation.
+
+## Supported regions with availability zones
+
+See [Regions and Availability Zones in Azure](/global-infrastructure/geographies/#geographies) for the Azure regions that have availability zones.
+Automation accounts currently support the following regions in preview:
+
+- China North 3
+- Qatar Central
+- West US 2
+- East US 2
+- East US
+- North Europe
+- West Europe
+- France Central
+- Japan East
+- UK South
+- Southeast Asia
+- Australia East
+- Central US
+- Brazil South
+- Germany West Central
+- West US 3
+
+## Create a zone redundant Automation account
+You can create a zone redundant Automation account using:
+- [Azure portal](/azure/automation/automation-create-standalone-account?tabs=azureportal)
+- [Azure Resource Manager (ARM) template](/azure/automation/quickstart-create-automation-account-template)
+
+> [!Note]
+> There is no option to select or see an availability zone in the Automation account creation flow. It's a default setting enabled and managed at the service level.
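In addition to the portal and ARM template options above, a minimal Azure PowerShell sketch is shown below; the resource names are placeholders, and in a supported region the resulting account is zone redundant by default with no extra parameter to set.

```powershell
# Minimal sketch: create an Automation account in a region that supports availability zones.
# Zone redundancy is applied by the service automatically; there is no parameter to set.
New-AzResourceGroup -Name "rg-automation-demo" -Location "westus2"
New-AzAutomationAccount -ResourceGroupName "rg-automation-demo" -Name "aa-zr-demo" -Location "westus2" -Plan "Basic"
```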
+
+## Pricing
+
+There's no additional cost associated with enabling the zone redundancy feature in an Automation account.
+
+## Service Level Agreement
+
+There is no change to the [Service Level Agreement](https://azure.microsoft.com/support/legal/sla/automation/v1_1/) with the support of availability zones in Automation accounts. The SLA depends on job start time, with a guarantee that at least 99.9% of runbook jobs will start within 30 minutes of their planned start times.
+
+## Next steps
+
+- Learn more about [regions that support availability zones](/azure/availability-zones/az-region.md).
automation Disable Local Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/disable-local-authentication.md
# Disable local authentication in Automation
+> [!IMPORTANT]
+> - Update Management patching will not work when local authentication is disabled.
+> - When you disable local authentication, it affects starting a runbook using a webhook, Automation Desired State Configuration, and agent-based Hybrid Runbook Workers. For more information, see the [available alternatives](#compatibility).
+ Azure Automation provides Microsoft Azure Active Directory (Azure AD) authentication support for all Automation service public endpoints. This critical security enhancement removes certificate dependencies and gives organizations control to disable local authentication methods. This feature provides you with seamless integration when centralized control and management of identities and resource credentials through Azure AD is required. Azure Automation provides an optional feature to "Disable local authentication" at the Automation account level using the Azure policy [Configure Azure Automation account to disable local authentication](../automation/policy-reference.md#azure-automation). By default, this flag is set to false at the account, so you can use both local authentication and Azure AD authentication. If you choose to disable local authentication, then the Automation service only accepts Azure AD based authentication.
In the Azure portal, you may receive a warning message on the landing page for t
Disabling local authentication doesn't take effect immediately. Allow a few minutes for the service to block future authentication requests. >[!NOTE]
-> Currently, PowerShell support for the new API version (2021-06-22) or the flag ΓÇô `DisableLocalAuth` is not available. However, you can use the Rest-API with this API version to update the flag.
+> - Currently, PowerShell support for the new API version (2021-06-22) or the flag `DisableLocalAuth` is not available. However, you can use the REST API with this API version to update the flag.
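For example, a hedged sketch that uses `Invoke-AzRestMethod` to set the flag; the `disableLocalAuth` property name and request shape are assumptions to verify against the 2021-06-22 REST reference, and the resource names are placeholders.

```powershell
# Hedged sketch: disable local authentication on an Automation account via the REST API.
$path = "/subscriptions/<azure-subscription-id>/resourceGroups/<resource-group>" +
        "/providers/Microsoft.Automation/automationAccounts/<automation-account>?api-version=2021-06-22"

$payload = @{ properties = @{ disableLocalAuth = $true } } | ConvertTo-Json

Invoke-AzRestMethod -Method PATCH -Path $path -Payload $payload
```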
## Re-enable local authentication
The following table describes the behaviors or features that are prevented from
|Using Automation Desired State Configuration.| Use [Azure Policy Guest configuration](../governance/machine-configuration/overview.md).  | |Using agent-based Hybrid Runbook Workers.| Use [extension-based Hybrid Runbook Workers (Preview)](./extension-based-hybrid-runbook-worker-install.md).|
-## Limitations
-
-Update Management patching will not work when local authentication is disabled.
- ## Next steps - [Azure Automation account authentication overview](./automation-security-overview.md)
automation Automation Tutorial Runbook Textual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/learn/automation-tutorial-runbook-textual.md
You can use the `ForEach -Parallel` construct to process commands for each item
``` 1. If you want the runbook to execute with the system-assigned managed identity, leave the code as-is. If you prefer to use a user-assigned managed identity, then:
- 1. From line 9, remove `(Connect-AzAccount -Identity)`,
- 1. Replace it with `(Connect-AzAccount -Identity -AccountId <ClientId>)`, and
+ 1. From line 9, remove `Connect-AzAccount -Identity`,
+ 1. Replace it with `Connect-AzAccount -Identity -AccountId <ClientId>`, and
1. Enter the Client ID you obtained earlier. 1. Select **Save**, then **Publish**, and then **Yes** when prompted.
automation Update Agent Issues Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/update-agent-issues-linux.md
This article discusses how to run the troubleshooter for Azure machines from the
## Start the troubleshooter
-For Azure machines, select the **troubleshoot** link under the **Update Agent Readiness** column in the portal to open the Troubleshoot Update Agent page. For non-Azure machines, the link brings you to this article. To troubleshoot a non-Azure machine, see the instructions in the "Troubleshoot offline" section.
+For Azure machines, select the **troubleshoot** link under the **Update Agent Readiness** column in the portal to open the Troubleshoot Update Agent page. For non-Azure machines, the link brings you to this article. To troubleshoot a non-Azure machine, see the instructions in the **Troubleshoot offline** section.
-![VM list page](../media/update-agent-issues-linux/vm-list.png)
> [!NOTE] > The checks require the VM to be running. If the VM isn't running, **Start the VM** appears. On the Troubleshoot Update Agent page, select **Run Checks** to start the troubleshooter. The troubleshooter uses [Run command](../../virtual-machines/linux/run-command.md) to run a script on the machine to verify the dependencies. When the troubleshooter is finished, it returns the result of the checks.
-![Troubleshoot page](../media/update-agent-issues-linux/troubleshoot-page.png)
+ When the checks are finished, the results are returned in the window. The check sections provide information on what each check is looking for.
-![Update agent checks page](../media/update-agent-issues-linux/update-agent-checks.png)
+ ## Prerequisite checks
When the checks are finished, the results are returned in the window. The check
The operating system check verifies if the Hybrid Runbook Worker is running one of the [supported operating systems](../update-management/operating-system-requirements.md#supported-operating-systems).
+### Dmidecode check
+
+To verify whether a VM is an Azure VM, check the asset tag value by using the following command:
+
+```
+sudo dmidecode
+```
+
+If the asset tag differs from `7783-7084-3265-9085-8269-3286-77`, reboot the VM to initiate re-registration.
++ ## Monitoring agent service health checks
-### Log Analytics agent
+### Monitoring Agent
+
+To fix this, install the Log Analytics agent for Linux and ensure that it can communicate with the required endpoints. For more information, see [Install Log Analytics agent on Linux computers](../../azure-monitor/agents/agent-linux.md).
-This check ensures that the Log Analytics agent for Linux is installed. For instructions on how to install it, see [Install the agent for Linux](../../azure-monitor/vm/monitor-virtual-machine.md#agents).
+This task checks whether the following configuration file is present:
-### Log Analytics agent status
+*/etc/opt/microsoft/omsagent/conf/omsadmin.conf*
-This check ensures that the Log Analytics agent for Linux is running. If the agent isn't running, you can run the following command to attempt to restart it. For more information on troubleshooting the agent, see [Linux - Troubleshoot Hybrid Runbook Worker issues](hybrid-runbook-worker.md#linux).
+### Monitoring Agent status
+
+To fix this issue, you must start the OMS Agent service by using the following command:
-```bash
-sudo /opt/microsoft/omsagent/bin/service_control restart
+```
+sudo /opt/microsoft/omsagent/bin/service_control restart
```
-### Multihoming
+To validate, check that the `omsagent` process is running by using the following command:
+
+```
+ps aux | grep omsagent | grep -v grep
+```
+For more information, see [Troubleshoot issues with the Log Analytics agent for Linux](../../azure-monitor/agents/agent-linux-troubleshoot.md).
++
+### Multihoming
This check determines if the agent is reporting to multiple workspaces. Update Management doesn't support multihoming.
+To fix this issue, purge the OMS Agent completely and reinstall it with the [workspace linked with Update Management](../../azure-monitor/agents/agent-linux-troubleshoot.md#purge-and-reinstall-the-linux-agent).
+
+To validate that multihoming is no longer configured, check the directories under this path:
+
+ */var/opt/microsoft/omsagent*
+
+Because these are workspace directories, the number of directories equals the number of workspaces onboarded to the OMS Agent.
+ ### Hybrid Runbook Worker
+To fix the issue, run the following command:
-This check verifies if the Log Analytics agent for Linux has the Hybrid Runbook Worker package. This package is required for Update Management to work. To learn more, see [Log Analytics agent for Linux isn't running](hybrid-runbook-worker.md#oms-agent-not-running).
+```
+sudo su omsagent -c 'python /opt/microsoft/omsconfig/Scripts/PerformRequiredConfigurationChecks.py'
+```
-Update Management downloads Hybrid Runbook Worker packages from the operations endpoint. Therefore, if the Hybrid Runbook Worker is not running and the [operations endpoint](#operations-endpoint) check fails, the update can fail.
+This command forces the omsconfig agent to talk to Azure Monitor and retrieve the latest configuration.
+
+To validate, check that the following two paths exist:
+
+```
+/opt/microsoft/omsconfig/modules/nxOMSAutomationWorker/VERSION
+/opt/microsoft/omsconfig/modules/nxOMSAutomationWorker/DSCResources/MSFT_nxOMSAutomationWorkerResource/automationworker/worker/configuration.py
+```
### Hybrid Runbook Worker status This check makes sure the Hybrid Runbook Worker is running on the machine. The processes in the example below should be present if the Hybrid Runbook Worker is running correctly.
+```
+ps -ef | grep python
+```
-```bash
+```
nxautom+ 8567 1 0 14:45 ? 00:00:00 python /opt/microsoft/omsconfig/modules/nxOMSAutomationWorker/DSCResources/MSFT_nxOMSAutomationWorkerResource/automationworker/worker/main.py /var/opt/microsoft/omsagent/state/automationworker/oms.conf rworkspace:<workspaceId> <Linux hybrid worker version> nxautom+ 8593 1 0 14:45 ? 00:00:02 python /opt/microsoft/omsconfig/modules/nxOMSAutomationWorker/DSCResources/MSFT_nxOMSAutomationWorkerResource/automationworker/worker/hybridworker.py /var/opt/microsoft/omsagent/state/automationworker/worker.conf managed rworkspace:<workspaceId> rversion:<Linux hybrid worker version> nxautom+ 8595 1 0 14:45 ? 00:00:02 python /opt/microsoft/omsconfig/modules/nxOMSAutomationWorker/DSCResources/MSFT_nxOMSAutomationWorkerResource/automationworker/worker/hybridworker.py /var/opt/microsoft/omsagent/<workspaceId>/state/automationworker/diy/worker.conf managed rworkspace:<workspaceId> rversion:<Linux hybrid worker version> ```
+Update Management downloads Hybrid Runbook Worker packages from the operations endpoint. Therefore, if the Hybrid Runbook Worker is not running and the [operations endpoint](#operations-endpoint) check fails, the update can fail.
+
+To fix this issue, run the following command:
+
+```
+sudo su omsagent -c 'python /opt/microsoft/omsconfig/Scripts/PerformRequiredConfigurationChecks.py'
+```
+
+This command forces the omsconfig agent to talk to Azure Monitor and retrieve the latest configuration.
+
+If the issue still persists, run the [omsagent Log Collector tool](https://github.com/Microsoft/OMS-Agent-for-Linux/blob/master/tools/LogCollector/OMS_Linux_Agent_Log_Collector.md)
+++ ## Connectivity checks
+### Proxy enabled check
+
+To fix the issue, either remove the proxy or make sure that the proxy address is able to access the [prerequisite URL](../automation-network-configuration.md#update-management-and-change-tracking-and-inventory).
+
+You can validate the proxy setting by checking the `HTTP_PROXY` environment variable:
+
+```
+echo $HTTP_PROXY
+```
+
+### IMDS connectivity check
+
+To fix this issue, allow access to IP **169.254.169.254**. For more information, see [Access Azure Instance Metadata Service](../../virtual-machines/windows/instance-metadata-service.md#azure-instance-metadata-service-windows)
+
+After the network changes, you can either rerun the Troubleshooter or run the below commands to validate:
+
+```
+curl -H "Metadata: true" "http://169.254.169.254/metadata/instance?api-version=2018-02-01"
+```
+ ### General internet connectivity
-This check makes sure that the machine has access to the internet.
+This check makes sure that the machine has access to the internet. You can ignore it if you intentionally block general internet access and allow only specific URLs.
+
+To validate, run `curl` against any HTTP URL.
### Registration endpoint This check determines if the Hybrid Runbook Worker can properly communicate with Azure Automation in the Log Analytics workspace.
-Proxy and firewall configurations must allow the Hybrid Runbook Worker agent to communicate with the registration endpoint. For a list of addresses and ports to open, see [Network planning](../automation-hybrid-runbook-worker.md#network-planning).
+Proxy and firewall configurations must allow the Hybrid Runbook Worker agent to communicate with the registration endpoint. For a list of addresses and ports to open, see [Network planning](../automation-hybrid-runbook-worker.md#network-planning).
+
+Fix this issue by allowing the prerequisite URLs. For more information, see [Update Management and Change Tracking and Inventory](../automation-network-configuration.md#update-management-and-change-tracking-and-inventory).
+
+After the network changes, you can either rerun the troubleshooter or run `curl` against the provided JRDS endpoint.
### Operations endpoint
Proxy and firewall configurations must allow the Hybrid Runbook Worker agent to
This check verifies that your machine has access to the endpoints needed by the Log Analytics agent.
+Fix this issue by allowing the [prerequisite URLs](../automation-network-configuration.md#update-management-and-change-tracking-and-inventory).
+
+After making the network changes, you can either rerun the troubleshooter or run `curl` against the provided OMS endpoint.
+ ### Log Analytics endpoint 2 This check verifies that your machine has access to the endpoints needed by the Log Analytics agent.
-### Log Analytics endpoint 3
+Fix this issue by allowing the [prerequisite URLs](../automation-network-configuration.md#update-management-and-change-tracking-and-inventory).
-This check verifies that your machine has access to the endpoints needed by the Log Analytics agent.
+After making the network changes, you can either rerun the troubleshooter or run `curl` against the provided OMS endpoint.
++
+### Software repositories
+
+Fix this issue by allowing the prerequisite repository URLs. For RHEL, see [Troubleshoot connection problems to Azure RHUI](https://learn.microsoft.com/azure/virtual-machines/workloads/redhat/redhat-rhui#troubleshoot-connection-problems-to-azure-rhui).
+
+After making the network changes, you can either rerun the troubleshooter or run `curl` against the software repositories configured in the package manager. Refreshing the repositories also confirms the communication:
+
+```
+sudo apt-get check
+sudo yum check-update
+```
+> [!NOTE]
+> The check is available only in offline mode.
## <a name="troubleshoot-offline"></a>Troubleshoot offline You can use the troubleshooter offline on a Hybrid Runbook Worker by running the script locally. The Python script, [UM_Linux_Troubleshooter_Offline.py](https://github.com/Azure/updatemanagement/blob/main/UM_Linux_Troubleshooter_Offline.py), can be found in GitHub.
-> [!NOTE]
-> The current version of the troubleshooter script does not support Ubuntu 20.04.
->
+ > [!NOTE]
+ > The current version of the troubleshooter script does not support Ubuntu 20.04.
+ An example of the output of this script is shown in the following example:
automation Update Agent Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/update-agent-issues.md
There can be many reasons why your machine isn't showing up as ready (healthy) d
This article discusses how to run the troubleshooter for Azure machines from the Azure portal, and non-Azure machines in the [offline scenario](#troubleshoot-offline). > [!NOTE]
-> The troubleshooter script now includes checks for Windows Server Update Services (WSUS) and for the autodownload and install keys.
+> The troubleshooter script now includes checks for Windows Server Update Services (WSUS) and for the auto download and install keys.
## Start the troubleshooter For Azure machines, you can launch the Troubleshoot Update Agent page by selecting the **Troubleshoot** link under the **Update Agent Readiness** column in the portal. For non-Azure machines, the link brings you to this article. See [Troubleshoot offline](#troubleshoot-offline) to troubleshoot a non-Azure machine.
-![Screenshot of the Update Management list of virtual machines](../media/update-agent-issues/vm-list.png)
> [!NOTE] > To check the health of the Hybrid Runbook Worker, the VM must be running. If the VM isn't running, a **Start the VM** button appears. On the Troubleshoot Update Agent page, select **Run checks** to start the troubleshooter. The troubleshooter uses [Run Command](../../virtual-machines/windows/run-command.md) to run a script on the machine, to verify dependencies. When the troubleshooter is finished, it returns the result of the checks.
-![Screenshot of the Troubleshoot Update Agent page](../media/update-agent-issues/troubleshoot-page.png)
Results are shown on the page when they're ready. The checks sections show what's included in each check.
-![Screenshot of the Troubleshoot Update Agent checks](../media/update-agent-issues/update-agent-checks.png)
## Prerequisite checks ### Operating system
-The operating system check verifies whether the Hybrid Runbook Worker is running [one of the supported operating systems.](../update-management/operating-system-requirements.md)
-one of the supported operating systems
+The operating system check verifies whether the Hybrid Runbook Worker is running [one of the supported operating systems](../update-management/operating-system-requirements.md#system-requirements)
### .NET 4.6.2 The .NET Framework check verifies that the system has [.NET Framework 4.6.2](https://dotnet.microsoft.com/download/dotnet-framework/net462) or later installed.
+To fix, install .NET Framework 4.6.2 or later. Download the [.NET Framework](https://www.docs.microsoft.com/dotnet/framework/install/guide-for-developers).
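One quick way to confirm the installed version is the registry release value; a sketch, assuming the commonly published mapping in which a Release value of 394802 or higher corresponds to .NET Framework 4.6.2 or later:

```powershell
# Sketch: read the .NET Framework 4.x Release value from the registry.
(Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full" -Name Release).Release
```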
+ ### WMF 5.1
-The WMF check verifies that the system has the required version of the Windows Management Framework (WMF), which is [Windows Management Framework 5.1](https://www.microsoft.com/download/details.aspx?id=54616).
+The WMF check verifies that the system has the required version of the Windows Management Framework (WMF).
+
+To fix, download and install [Windows Management Framework 5.1](https://www.microsoft.com/download/details.aspx?id=54616); Azure Update Management requires Windows PowerShell 5.1 to work.
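To confirm the installed version locally, you can check the PowerShell engine version (a quick sketch; WMF 5.1 ships PowerShell 5.1):

```powershell
# Sketch: the engine version reflects the installed Windows Management Framework.
$PSVersionTable.PSVersion
```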
### TLS 1.2 This check determines whether you're using TLS 1.2 to encrypt your communications. TLS 1.0 is no longer supported by the platform. Use TLS 1.2 to communicate with Update Management.
+To fix, follow the steps to [Enable TLS 1.2](../../azure-monitor/agents/agent-windows.md#configure-agent-to-use-tls-12).
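As a quick local check (a sketch; system-wide enablement is done through the settings in the linked article), you can inspect and, for the current session only, force the protocols PowerShell negotiates:

```powershell
# Sketch: show the TLS versions the current PowerShell session will use,
# then force TLS 1.2 for this session only.
[Net.ServicePointManager]::SecurityProtocol
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
```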
++
+## Monitoring agent service health checks
+
+### Monitoring Agent
+To fix the issue, start the **HealthService** service:
+
+```
+Start-Service -Name HealthService -ErrorAction SilentlyContinue
+```
+
+### Hybrid Runbook Worker
+To fix the issue, force a re-registration of the Hybrid Runbook Worker:
+
+```
+Remove-Item -Path "HKLM:\software\microsoft\hybridrunbookworker" -Recurse -Force
+Restart-Service -Name HealthService
+```
+
+>[!NOTE]
+> This will remove the user Hybrid Runbook Worker from the machine. Be sure to re-register it afterwards. No action is needed if the machine has only the system Hybrid Runbook Worker.
+
+To validate, check that event ID *15003* (Hybrid Runbook Worker start event) or *15004* (Hybrid Runbook Worker stopped event) exists in the *Microsoft-SMA/Operational* event log.
+
+If the issue still isn't fixed, raise a support ticket.
+
+### Monitoring Agent Service
+
+Check for event ID 4502 (error event) in the **Operations Manager** event log and review the description.
+
+To troubleshoot, run the [MMA Agent Troubleshooter](../../azure-monitor/agents/agent-windows-troubleshoot.md).
+
+### VMs linked workspace
+See [Network requirements](../../azure-monitor/agents/agent-windows-troubleshoot.md#connectivity-issues).
+
+To validate, check the VM's connected workspace, or query the Heartbeat table of the corresponding Log Analytics workspace:
+
+```
+Heartbeat | where Computer =~ ""
+```
+
+### Windows update service status
+
+ To fix this issue, start the **wuauserv** service:
+
+```
+Start-Service -Name wuauserv -ErrorAction SilentlyContinue
+```
+ ## Connectivity checks
+The troubleshooter currently doesn't route traffic through a proxy server if one is configured.
+ ### Registration endpoint This check determines whether the agent can properly communicate with the agent service. Proxy and firewall configurations must allow the Hybrid Runbook Worker agent to communicate with the registration endpoint. For a list of addresses and ports to open, see [Network planning](../automation-hybrid-runbook-worker.md#network-planning).
+Allow the [prerequisite URLs](../automation-network-configuration.md#update-management-and-change-tracking-and-inventory).
+After the network changes, you can either rerun the Troubleshooter or run the below commands to validate:
+
+```
+$workspaceId = ""
+$endpoint = $workspaceId + ".agentsvc.azure-automation.net"
+(Test-NetConnection -ComputerName $endpoint -Port 443 -WarningAction SilentlyContinue).TcpTestSucceeded
+```
+ ### Operations endpoint This check determines whether the agent can properly communicate with the Job Runtime Data Service. Proxy and firewall configurations must allow the Hybrid Runbook Worker agent to communicate with the Job Runtime Data Service. For a list of addresses and ports to open, see [Network planning](../automation-hybrid-runbook-worker.md#network-planning).
+Allow the [prerequisite URLs](../automation-network-configuration.md#update-management-and-change-tracking-and-inventory). After the network changes, you can either rerun the Troubleshooter or run the below commands to validate:
+
+```
+# $jrdsEndpointLocationMoniker should be based on the Automation account location (jpe/ase/scus, and so on).
+$jrdsEndpointLocationMoniker = ""
+$endpoint = $jrdsEndpointLocationMoniker + "-jobruntimedata-prod-su1.azure-automation.net"
+(Test-NetConnection -ComputerName $endpoint -Port 443 -WarningAction SilentlyContinue).TcpTestSucceeded
+```
+
+### Https connection
+This check determines whether the VM can make outbound HTTPS requests. Allow the [prerequisite URLs](../automation-network-configuration.md#update-management-and-change-tracking-and-inventory).
+
+After the network changes, you can either rerun the Troubleshooter or run the below commands to validate:
+
+```
+$uri = "https://eus2-jobruntimedata-prod-su1.azure-automation.net"
+Invoke-WebRequest -URI $uri -UseBasicParsing
+```
++
+### Proxy settings
+
+If the proxy is enabled, ensure that you have access to the [prerequisite URLs](../automation-network-configuration.md#update-management-and-change-tracking-and-inventory).
+
+To check whether the proxy is set correctly, use the following command:
+
+```
+netsh winhttp show proxy
+```
+
+Alternatively, check that the registry value **ProxyEnable** is set to 1 under:
+
+```
+HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings
+```
+
+### IMDS endpoint connectivity
+
+To fix the issue, allow access to IP **169.254.169.254**. For more information, see [Access Azure Instance Metadata Service](../../virtual-machines/windows/instance-metadata-service.md#access-azure-instance-metadata-service).
++
+After the network changes, you can either rerun the Troubleshooter or run the below commands to validate:
+
+```
+Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -Uri http://169.254.169.254/metadata/instance?api-version=2018-02-01
+```
+ ## VM service health checks ### Monitoring agent service status
To learn more about this event, see the [Event 4502 in the Operations Manager lo
## Access permissions checks
-> [!NOTE]
-> The troubleshooter currently doesn't route traffic through a proxy server if one is configured.
+### Machine key folder
+
+This check determines whether the local system account has access to: *C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys*
+
+To fix, grant the SYSTEM account the required permissions (Read, Write & Modify, or Full Control) on the folder *C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys*.
+
+Use the below commands to check the permissions on the folder:
+
+```azurepowershell
+$folder = "C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys"
+
+(Get-Acl $folder).Access | ? {($_.IdentityReference -match "SYSTEM") -or ($_.IdentityReference -match "Everyone")} | Select IdentityReference, FileSystemRights
+```
+
+## Machine Update settings
+
+### Automatically reboot after install
-### Crypto folder access
+To fix, remove the following registry values from *HKLM:\Software\Policies\Microsoft\Windows\WindowsUpdate\AU*, and configure reboots according to the Update Management schedule configuration:
+
+```
+AlwaysAutoRebootAtScheduledTime
+AlwaysAutoRebootAtScheduledTimeMinutes
+```
+
+For more information, see [Configure reboot settings](../update-management/configure-wuagent.md#configure-reboot-settings).
++
+### WSUS server configuration
+
+If the environment is set to get updates from WSUS, ensure that the updates are approved in WSUS before the update deployment. For more information, see [WSUS configuration settings](../update-management/configure-wuagent.md#make-wsus-configuration-settings). If your environment isn't using WSUS, ensure that you remove the WSUS server settings and [reset the Windows Update components](https://learn.microsoft.com/windows/deployment/update/windows-update-resources#how-do-i-reset-windows-update-components).
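To see at a glance whether a WSUS server is configured on the machine, a sketch reading the standard Windows Update policy values (value names assumed from the usual policy registry layout):

```powershell
# Sketch: inspect WSUS-related Windows Update policy settings, if any are present.
Get-ItemProperty "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" -ErrorAction SilentlyContinue |
    Select-Object WUServer, WUStatusServer
Get-ItemProperty "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" -ErrorAction SilentlyContinue |
    Select-Object UseWUServer
```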
+
+### Automatically download and install
+
+To fix the issue, disable the **AutoUpdate** feature by setting **Configure Automatic Updates** to **Disabled** in local Group Policy. For more information, see [Configure automatic updates](https://learn.microsoft.com/windows-server/administration/windows-server-update-services/deploy/4-configure-group-policy-settings-for-automatic-updates#configure-automatic-updates).
-The Crypto folder access check determines whether the local system account has access to C:\ProgramData\Microsoft\Crypto\RSA.
## <a name="troubleshoot-offline"></a>Troubleshoot offline
-You can use the troubleshooter on a Hybrid Runbook Worker offline by running the script locally. Get the following script from GitHub: [UM_Windows_Troubleshooter_Offline.ps1](https://github.com/Azure/updatemanagement/blob/main/UM_Windows_Troubleshooter_Offline.ps1). To run the script, you must have WMF 4.0 or later installed. To download the latest version of PowerShell, see [Installing various versions of PowerShell](/powershell/scripting/install/installing-powershell).
+You can use the troubleshooter on a Hybrid Runbook Worker offline by running the script locally. Get the following script from GitHub: [UM_Windows_Troubleshooter_Offline.ps1](https://github.com/Azure/updatemanagement/blob/main/UM_Windows_Troubleshooter_Offline.ps1). To run the script, you must have WMF 5.0 or later installed. To download the latest version of PowerShell, see [Installing various versions of PowerShell](/powershell/scripting/install/installing-powershell).
The output of this script looks like the following example: ```output
-RuleId : OperatingSystemCheck
-RuleGroupId : prerequisites
-RuleName : Operating System
-RuleGroupName : Prerequisite Checks
-RuleDescription : The Windows Operating system must be version 6.2.9200 (Windows Server 2012) or higher
-CheckResult : Passed
-CheckResultMessage : Operating System version is supported
-CheckResultMessageId : OperatingSystemCheck.Passed
-CheckResultMessageArguments : {}
-
-RuleId : DotNetFrameworkInstalledCheck
-RuleGroupId : prerequisites
-RuleName : .NET Framework 4.5+
-RuleGroupName : Prerequisite Checks
-RuleDescription : .NET Framework version 4.5 or higher is required
-CheckResult : Passed
-CheckResultMessage : .NET Framework version 4.5+ is found
-CheckResultMessageId : DotNetFrameworkInstalledCheck.Passed
-CheckResultMessageArguments : {}
-
-RuleId : WindowsManagementFrameworkInstalledCheck
-RuleGroupId : prerequisites
-RuleName : WMF 5.1
-RuleGroupName : Prerequisite Checks
-RuleDescription : Windows Management Framework version 4.0 or higher is required (version 5.1 or higher is preferable)
-CheckResult : Passed
-CheckResultMessage : Detected Windows Management Framework version: 5.1.17763.1
-CheckResultMessageId : WindowsManagementFrameworkInstalledCheck.Passed
-CheckResultMessageArguments : {5.1.17763.1}
-
-RuleId : AutomationAgentServiceConnectivityCheck1
-RuleGroupId : connectivity
-RuleName : Registration endpoint
-RuleGroupName : connectivity
-RuleDescription :
-CheckResult : Failed
-CheckResultMessage : Unable to find Workspace registration information in registry
-CheckResultMessageId : AutomationAgentServiceConnectivityCheck1.Failed.NoRegistrationFound
-CheckResultMessageArguments : {}
-
-RuleId : AutomationJobRuntimeDataServiceConnectivityCheck
-RuleGroupId : connectivity
-RuleName : Operations endpoint
-RuleGroupName : connectivity
-RuleDescription : Proxy and firewall configuration must allow Automation Hybrid Worker agent to communicate with eus2-jobruntimedata-prod-su1.azure-automation.net
-CheckResult : Passed
-CheckResultMessage : TCP Test for eus2-jobruntimedata-prod-su1.azure-automation.net (port 443) succeeded
-CheckResultMessageId : AutomationJobRuntimeDataServiceConnectivityCheck.Passed
-CheckResultMessageArguments : {eus2-jobruntimedata-prod-su1.azure-automation.net}
-
-RuleId : MonitoringAgentServiceRunningCheck
-RuleGroupId : servicehealth
-RuleName : Monitoring Agent service status
-RuleGroupName : VM Service Health Checks
-RuleDescription : HealthService must be running on the machine
-CheckResult : Failed
-CheckResultMessage : Log Analytics for Windows service (HealthService) is not running
-CheckResultMessageId : MonitoringAgentServiceRunningCheck.Failed
-CheckResultMessageArguments : {Log Analytics agent for Windows, HealthService}
-
-RuleId : MonitoringAgentServiceEventsCheck
-RuleGroupId : servicehealth
-RuleName : Monitoring Agent service events
-RuleGroupName : VM Service Health Checks
-RuleDescription : Event Log must not have event 4502 logged in the past 24 hours
-CheckResult : Failed
-CheckResultMessage : Log Analytics agent for Windows service Event Log (Operations Manager) does not exist on the machine
-CheckResultMessageId : MonitoringAgentServiceEventsCheck.Failed.NoLog
-CheckResultMessageArguments : {Log Analytics agent for Windows, Operations Manager, 4502}
-
-RuleId : CryptoRsaMachineKeysFolderAccessCheck
-RuleGroupId : permissions
-RuleName : Crypto RSA MachineKeys Folder Access
-RuleGroupName : Access Permission Checks
-RuleDescription : SYSTEM account must have WRITE and MODIFY access to 'C:\\ProgramData\\Microsoft\\Crypto\\RSA\\MachineKeys'
-CheckResult : Passed
-CheckResultMessage : Have permissions to access C:\\ProgramData\\Microsoft\\Crypto\\RSA\\MachineKeys
-CheckResultMessageId : CryptoRsaMachineKeysFolderAccessCheck.Passed
-CheckResultMessageArguments : {C:\\ProgramData\\Microsoft\\Crypto\\RSA\\MachineKeys}
-
-RuleId : TlsVersionCheck
-RuleGroupId : prerequisites
-RuleName : TLS 1.2
-RuleGroupName : Prerequisite Checks
-RuleDescription : Client and Server connections must support TLS 1.2
-CheckResult : Passed
-CheckResultMessage : TLS 1.2 is enabled by default on the Operating System.
-CheckResultMessageId : TlsVersionCheck.Passed.EnabledByDefault
-CheckResultMessageArguments : {}
-```
+RuleId : OperatingSystemCheck
+RuleGroupId : prerequisites
+RuleName : Operating System
+RuleGroupName : Prerequisite Checks
+RuleDescription : The Windows Operating system must be version 6.1.7600 (Windows Server 2008 R2) or higher
+CheckResult : Passed
+CheckResultMessage : Operating System version is supported
+CheckResultMessageId : OperatingSystemCheck.Passed
+CheckResultMessageArguments : {}
+
+
+
+RuleId : DotNetFrameworkInstalledCheck
+RuleGroupId : prerequisites
+RuleName : .Net Framework 4.6.2+
+RuleGroupName : Prerequisite Checks
+RuleDescription : .NET Framework version 4.6.2 or higher is required
+CheckResult : Passed
+CheckResultMessage : .NET Framework version 4.6.2+ is found
+CheckResultMessageId : DotNetFrameworkInstalledCheck.Passed
+CheckResultMessageArguments : {}
+
+
+
+RuleId : WindowsManagementFrameworkInstalledCheck
+RuleGroupId : prerequisites
+RuleName : WMF 5.1
+RuleGroupName : Prerequisite Checks
+RuleDescription : Windows Management Framework version 4.0 or higher is required (version 5.1 or higher is preferable)
+CheckResult : Passed
+CheckResultMessage : Detected Windows Management Framework version: 5.1.22621.169
+CheckResultMessageId : WindowsManagementFrameworkInstalledCheck.Passed
+CheckResultMessageArguments : {5.1.22621.169}
+
+
+
+RuleId : AutomationAgentServiceConnectivityCheck1
+RuleGroupId : connectivity
+RuleName : Registration endpoint
+RuleGroupName : connectivity
+RuleDescription :
+CheckResult : Failed
+CheckResultMessage : Unable to find Workspace registration information
+CheckResultMessageId : AutomationAgentServiceConnectivityCheck1.Failed.NoRegistrationFound
+CheckResultMessageArguments :
+
+
+
+RuleId : AutomationJobRuntimeDataServiceConnectivityCheck
+RuleGroupId : connectivity
+RuleName : Operations endpoint
+RuleGroupName : connectivity
+RuleDescription : Proxy and firewall configuration must allow Automation Hybrid Worker agent to communicate with
+ eus2-jobruntimedata-prod-su1.azure-automation.net
+CheckResult : Passed
+CheckResultMessage : TCP Test for eus2-jobruntimedata-prod-su1.azure-automation.net (port 443) succeeded
+CheckResultMessageId : AutomationJobRuntimeDataServiceConnectivityCheck.Passed
+CheckResultMessageArguments : {eus2-jobruntimedata-prod-su1.azure-automation.net}
+
+
+
+RuleId : MonitoringAgentServiceRunningCheck
+RuleGroupId : servicehealth
+RuleName : Monitoring Agent service status
+RuleGroupName : VM Service Health Checks
+RuleDescription : HealthService must be running on the machine
+CheckResult : Passed
+CheckResultMessage : Microsoft Monitoring Agent service (HealthService) is running
+CheckResultMessageId : MonitoringAgentServiceRunningCheck.Passed
+CheckResultMessageArguments : {Microsoft Monitoring Agent, HealthService}
+
+
+
+RuleId : SystemHybridRunbookWorkerRunningCheck
+RuleGroupId : servicehealth
+RuleName : Hybrid runbook worker status
+RuleGroupName : VM Service Health Checks
+RuleDescription : Hybrid runbook worker must be in running state.
+CheckResult : Passed
+CheckResultMessage : Hybrid runbook worker is running.
+CheckResultMessageId : SystemHybridRunbookWorkerRunningCheck.Passed
+CheckResultMessageArguments : {}
+
+
+
+RuleId : MonitoringAgentServiceEventsCheck
+RuleGroupId : servicehealth
+RuleName : Monitoring Agent service events
+RuleGroupName : VM Service Health Checks
+RuleDescription : Event Log must not have event 4502 logged in the past 24 hours
+CheckResult : Passed
+CheckResultMessage : Microsoft Monitoring Agent service Event Log (Operations Manager) does not have event 4502 logged in the last 24 hours.
+CheckResultMessageId : MonitoringAgentServiceEventsCheck.Passed
+CheckResultMessageArguments : {Microsoft Monitoring Agent, Operations Manager, 4502}
+
+
+
+RuleId : LinkedWorkspaceCheck
+RuleGroupId : servicehealth
+RuleName : VM's Linked Workspace
+RuleGroupName : VM Service Health Checks
+RuleDescription : Get linked workspace info of the VM
+CheckResult : Failed
+CheckResultMessage : VM is not reporting to any workspace.
+CheckResultMessageId : LinkedWorkspaceCheck.Failed.NoWorkspace
+CheckResultMessageArguments : {}
+
+
+RuleId : CryptoRsaMachineKeysFolderAccessCheck
+RuleGroupId : permissions
+RuleName : Crypto RSA MachineKeys Folder Access
+RuleGroupName : Access Permission Checks
+RuleDescription : SYSTEM account must have WRITE and MODIFY access to 'C:\\ProgramData\\Microsoft\\Crypto\\RSA\\MachineKeys'
+CheckResult : Passed
+CheckResultMessage : Have permissions to access C:\\ProgramData\\Microsoft\\Crypto\\RSA\\MachineKeys
+CheckResultMessageId : CryptoRsaMachineKeysFolderAccessCheck.Passed
+CheckResultMessageArguments : {C:\\ProgramData\\Microsoft\\Crypto\\RSA\\MachineKeys}
++
+RuleId : TlsVersionCheck
+RuleGroupId : prerequisites
+RuleName : TLS 1.2
+RuleGroupName : Prerequisite Checks
+RuleDescription : Client and Server connections must support TLS 1.2
+CheckResult : Passed
+CheckResultMessage : TLS 1.2 is enabled by default on the Operating System.
+CheckResultMessageId : TlsVersionCheck.Passed.EnabledByDefault
+CheckResultMessageArguments : {}
+
+
+RuleId : AlwaysAutoRebootCheck
+RuleGroupId : machineSettings
+RuleName : AutoReboot
+RuleGroupName : Machine Override Checks
+RuleDescription : Automatic reboot should not be enable as it forces a reboot irrespective of update configuration
+CheckResult : Passed
+CheckResultMessage : Windows Update reboot registry keys are not set to automatically reboot
+CheckResultMessageId : AlwaysAutoRebootCheck.Passed
+CheckResultMessageArguments :
+
+
+
+RuleId : WSUSServerConfigured
+RuleGroupId : machineSettings
+RuleName : isWSUSServerConfigured
+RuleGroupName : Machine Override Checks
+RuleDescription : Increase awareness on WSUS configured on the server
+CheckResult : Passed
+CheckResultMessage : Windows Updates are downloading from the default Windows Update location. Ensure the server has access to the Windows Update service
+CheckResultMessageId : WSUSServerConfigured.Passed
+CheckResultMessageArguments :
+
+
+
+RuleId : AutomaticUpdateCheck
+RuleGroupId : machineSettings
+RuleName : AutoUpdate
+RuleGroupName : Machine Override Checks
+RuleDescription : AutoUpdate should not be enabled on the machine
+CheckResult : Passed
+CheckResultMessage : Windows Update is not set to automatically install updates as they become available
+CheckResultMessageId : AutomaticUpdateCheck.Passed
+CheckResultMessageArguments :
+
+
+
+RuleId : HttpsConnection
+RuleGroupId : connectivity
+RuleName : Https connection
+RuleGroupName : connectivity
+RuleDescription : Check if VM is able to make https requests.
+CheckResult : Passed
+CheckResultMessage : VM is able to make https requests.
+CheckResultMessageId : HttpsConnection.Passed
+CheckResultMessageArguments : {}
+
+
+
+RuleId : ProxySettings
+RuleGroupId : connectivity
+RuleName : Proxy settings
+RuleGroupName : connectivity
+RuleDescription : Check if Proxy is enabled on the VM.
+CheckResult : Passed
+CheckResultMessage : Proxy is not set.
+CheckResultMessageId : ProxySettings.Passed
+CheckResultMessageArguments : {}
+
+
+RuleId : IMDSConnectivity
+RuleGroupId : connectivity
+RuleName : IMDS endpoint connectivity
+RuleGroupName : connectivity
+RuleDescription : Check if VM is able to reach IMDS server to get VM information.
+CheckResult : PassedWithWarning
+CheckResultMessage : VM is not able to reach IMDS server. Consider this as a Failure if this is an Azure VM.
+CheckResultMessageId : IMDSConnectivity.PassedWithWarning
+CheckResultMessageArguments : {}
+
+
+
+RuleId : WUServiceRunningCheck
+RuleGroupId : servicehealth
+RuleName : WU service status
+RuleGroupName : WU Service Health Check
+RuleDescription : WU must not be in the disabled state.
+CheckResult : Passed
+CheckResultMessage : Windows Update service (wuauserv) is running.
+CheckResultMessageId : WUServiceRunningCheck.Passed
+CheckResultMessageArguments : {Windows Update, wuauserv}
+
+
+RuleId : LAOdsEndpointConnectivity
+RuleGroupId : connectivity
+RuleName : LA ODS endpoint
+RuleGroupName : connectivity
+RuleDescription : Proxy and firewall configuration must allow to communicate with LA ODS endpoint
+CheckResult : Failed
+CheckResultMessage : Unable to find Workspace registration information
+CheckResultMessageId : LAOdsEndpointConnectivity.Failed
+CheckResultMessageArguments :
+
+
+RuleId : LAOmsEndpointConnectivity
+RuleGroupId : connectivity
+RuleName : LA OMS endpoint
+RuleGroupName : connectivity
+RuleDescription : Proxy and firewall configuration must allow to communicate with LA OMS endpoint
+CheckResult : Failed
+CheckResultMessage : Unable to find Workspace registration information
+CheckResultMessageId : LAOmsEndpointConnectivity.Failed
+CheckResultMessageArguments :
+ ```
## Next steps
automation Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new.md
Azure Automation receives improvements on an ongoing basis. To stay up to date w
This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Azure Automation](whats-new-archive.md).
+## September 2022
+### Availability zones support for Azure Automation
+
+Azure Automation now supports [Azure availability zones](../availability-zones/az-overview.md#availability-zones) to provide improved resiliency and high availability to a service instance in a specific Azure region. [Learn more](https://learn.microsoft.com/azure/automation/automation-availability-zones).
## July 2022
azure-arc Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/network-requirements.md
Title: Connected Machine agent network requirements description: Learn about the networking requirements for using the Connected Machine agent for Azure Arc-enabled servers. Previously updated : 09/09/2022 Last updated : 09/29/2022
The table below lists the URLs that must be available in order to install and us
|`guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com`|Notification service for extension and connectivity scenarios|Always| Public | |`azgn*.servicebus.windows.net`|Notification service for extension and connectivity scenarios|Always| Public | |`*.servicebus.windows.net`|For Windows Admin Center and SSH scenarios|If using SSH or Windows Admin Center from Azure|Public|
+|`*.waconazure.com`|For Windows Admin Center connectivity|If using Windows Admin Center|Public|
|`*.blob.core.windows.net`|Download source for Azure Arc-enabled servers extensions|Always, except when using private endpoints| Not used when private link is configured | |`dc.services.visualstudio.com`|Agent telemetry|Optional| Public |
azure-cache-for-redis Cache How To Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-upgrade.md
Previously updated : 09/08/2022 Last updated : 09/29/2022
Azure Cache for Redis supports upgrading the version of your Azure Cache for Redis from Redis 4 to Redis 6. Upgrading is permanent, and it might cause a brief connection issue similar to regular monthly maintenance. As a precautionary step, we recommend exporting the data from your existing Redis 4 cache and testing your client application with a Redis 6 cache in a lower environment before upgrading.
-For more information, see [here](cache-how-to-import-export-data.md) for details on how to export.
+For more details on how to export, see [Import and Export data in Azure Cache for Redis](cache-how-to-import-export-data.md).
+
+> [!IMPORTANT]
+> As announced in [What's new](cache-whats-new.md#upgrade-your-azure-cache-for-redis-instances-to-use-redis-version-6-by-june-30-2023), we'll retire version 4 for Azure Cache for Redis instances on June 30, 2023. Before that date, you need to upgrade any of your cache instances to version 6.
+>
+> For more information on the retirement of Redis 4, see [Retirements](cache-retired-features.md).
+>
## Prerequisites
azure-cache-for-redis Cache Retired Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-retired-features.md
+
+ Title: What's been retired from Azure Cache for Redis?
+
+description: Information on retirements from Azure Cache for Redis
+++++ Last updated : 09/29/2022+++
+# Retirements
+
+## Redis version 4
+
+On June 30, 2023, we'll retire version 4 for Azure Cache for Redis instances. Before that date, you need to upgrade any of your cache instances to version 6.
+
+- All cache instances running Redis version 4 after June 30, 2023 will be upgraded automatically.
+- All cache instances running Redis version 4 that have geo-replication enabled will be upgraded automatically after August 30, 2023.
+
+We recommend that you upgrade your caches yourself, on a schedule that accommodates you and your users, to make the upgrade as convenient as possible.
+
+The open-source Redis version 4 was released several years ago and is now retired. Version 4 no longer receives critical bug or security fixes from the community. Azure Cache for Redis offers open-source Redis as a managed service on Azure. To stay in sync with the open-source offering, we'll also retire version 4.
+
+Microsoft continues to backport security fixes from recent versions to version 4 until retirement. We encourage you to upgrade your cache to version 6 sooner rather than later, so you can use the rich feature set that Redis version 6 has to offer. For more information, see the Redis 6 GA announcement.
+
+To upgrade your version 4 Azure Cache for Redis instance, see [Upgrade an existing Redis 4 cache to Redis 6](cache-how-to-version.md#upgrade-an-existing-redis-4-cache-to-redis-6). If your cache instances have geo-replication enabled, you're required to unlink the caches before you upgrade.
+
+### Important upgrade timelines
+
+From now through June 30, 2023, you can continue to use existing Azure Cache for Redis version 4 instances. Retirement will occur in the following stages, so you have the maximum amount of time to upgrade.
+
+| Date | Description |
+|-- |-|
+| November 1, 2022 | Beginning November 1, 2022, all the versions of Azure Cache for Redis REST API, PowerShell, Azure CLI, and Azure SDK will create Redis instances using Redis version 6 by default. If you need a specific Redis version for your cache instance, see [Redis 6 becomes default for new cache instances](cache-whats-new.md#redis-6-becomes-default-for-new-cache-instances). |
+| March 1, 2023 | Beginning March 1, 2023, you won't be able to create new Azure Cache for Redis instances using Redis version 4. Also, you won't be able to create new geo-replication links between cache instances using Redis version 4.|
+| June 30, 2023 | After June 30, 2023, any remaining version 4 cache instances that don't have geo-replication links will be automatically upgraded to version 6.|
+| August 30, 2023 | After August 30, 2023, any remaining version 4 cache instances that have geo-replication links will be automatically upgraded to version 6. This upgrade operation requires unlinking and relinking the caches, and customers could experience geo-replication link downtime. |
+
+### Version 4 caches on cloud services
+
+If your cache instance is affected by the Cloud Services (classic) retirement, you can't upgrade to Redis 6 until after you migrate to a cache built on a virtual machine scale set. In this case, send mail to azurecachemigration@microsoft.com, and we'll help you with the migration.
+
+For more information on what to do if your cache is on Cloud Services (classic), see [Azure Cache for Redis on Cloud Services (classic)](cache-faq.yml#what-should-i-do-with-any-instances-of-azure-cache-for-redis-that-depend-on-cloud-services--classic-).
+
+### How to check if a cache is running on version 4?
+
+You can check the Redis version of your cache instance by selecting **Properties** on the resource menu of your cache in the Azure portal.
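If you prefer to check programmatically from a client application, the server also reports its version to connected clients. Here's a minimal sketch using the StackExchange.Redis client; the connection string placeholders are hypothetical, and the portal remains the authoritative place to confirm the version.

```csharp
using System;
using StackExchange.Redis;

class RedisVersionCheck
{
    static void Main()
    {
        // Replace the placeholders with your cache host name and access key.
        var muxer = ConnectionMultiplexer.Connect(
            "<cache-name>.redis.cache.windows.net:6380,password=<access-key>,ssl=True,abortConnect=False");

        foreach (var endpoint in muxer.GetEndPoints())
        {
            // IServer.Version reflects the Redis version the server reports (for example, 4.x or 6.x).
            var server = muxer.GetServer(endpoint);
            Console.WriteLine($"{endpoint}: Redis {server.Version}");
        }
    }
}
```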
+
+## Next steps
+For more information about this retirement and other recent changes to the service, see:
+- [What's new](cache-whats-new.md)
+- [Azure Cache for Redis FAQ](cache-faq.yml)
azure-cache-for-redis Cache Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-whats-new.md old mode 100755new mode 100644
- Previously updated : 09/01/2022+ Last updated : 09/29/2022
Last updated 09/01/2022
## September 2022
+### Upgrade your Azure Cache for Redis instances to use Redis version 6 by June 30, 2023
+
+On June 30, 2023, we'll retire version 4 for Azure Cache for Redis instances. Before that date, you need to upgrade any of your cache instances to version 6.
+
+- All cache instances running Redis version 4 after June 30, 2023 will be upgraded automatically.
+- All cache instances running Redis version 4 that have geo-replication enabled will be upgraded automatically after August 30, 2023.
+
+We recommend that you upgrade your caches yourself, on a schedule that accommodates you and your users, to make the upgrade as convenient as possible.
+
+For more information, see [Retirements](cache-retired-features.md).
+ ### Support for managed identity in Azure Cache for Redis Authenticating storage account connections using managed identity has now reached General Availability (GA).
The default version of Redis that is used when creating a cache can change over
As of May 2022, Azure Cache for Redis rolls over to TLS certificates issued by DigiCert Global G2 CA Root. The current Baltimore CyberTrust Root expires in May 2025, requiring this change.
-We expect that most Azure Cache for Redis customers won't be affected. However, your application might be affected if you explicitly specify a list of acceptable certificate authorities (CAs), known as *certificate pinning*.
+We expect that most Azure Cache for Redis customers won't be affected. However, your application might be affected if you explicitly specify a list of acceptable certificate authorities (CAs), known as _certificate pinning_.
For more information, read this blog that contains instructions on [how to check whether your client application is affected](https://techcommunity.microsoft.com/t5/azure-developer-community-blog/azure-cache-for-redis-tls-upcoming-migration-to-digicert-global/ba-p/3171086). We recommend taking the actions recommended in the blog to avoid cache connectivity loss.
For more information, read this blog that contains instructions on [how to check
Active geo-replication for Azure Cache for Redis Enterprise is now generally available (GA).
-Active geo-replication is a powerful tool that enables Azure Cache for Redis clusters to be linked together for seamless active-active replication of data. Your applications can write to one Redis cluster and your data is automatically copied to the other linked clusters, and vice versa. For more information, see this [post](https://aka.ms/ActiveGeoGA) in the *Azure Developer Community Blog*.
+Active geo-replication is a powerful tool that enables Azure Cache for Redis clusters to be linked together for seamless active-active replication of data. Your applications can write to one Redis cluster and your data is automatically copied to the other linked clusters, and vice versa. For more information, see this [post](https://aka.ms/ActiveGeoGA) in the _Azure Developer Community Blog_.
## January 2022
azure-functions Language Support Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/language-support-policy.md
There are few exceptions to the retirement policy outlined above. Here is a list
To learn more about specific language version support policy timeline, visit the following external resources: * .NET - [dotnet.microsoft.com](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) * Node - [github.com](https://github.com/nodejs/Release#release-schedule)
-* Java - [azul.com](https://www.azul.com/products/azul-support-roadmap/)
+* Java - [Microsoft technical documentation](/azure/developer/java/fundamentals/java-support-on-azure)
* PowerShell - [Microsoft technical documentation](/powershell/scripting/powershell-support-lifecycle#powershell-end-of-support-dates) * Python - [devguide.python.org](https://devguide.python.org/#status-of-python-branches)
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
To use the SDK, you install a small instrumentation package in your app and then
### [.NET](#tab/net)
-Integrated Auto-instrumentation is available for [Azure App Service .NET](azure-web-apps-net.md), [Azure App Service .NET Core](azure-web-apps-net-core.md), [Azure Functions](../../azure-functions/functions-monitoring.md#monitor-executions-in-azure-functions), and [Azure Virtual Machines](azure-vm-vmss-apps.md).
+Integrated Auto-instrumentation is available for [Azure App Service .NET](azure-web-apps-net.md), [Azure App Service .NET Core](azure-web-apps-net-core.md), [Azure Functions](../../azure-functions/functions-monitoring.md), and [Azure Virtual Machines](azure-vm-vmss-apps.md).
[Azure Monitor Application Insights Agent](status-monitor-v2-overview.md) is available for workloads running in on-premises virtual machines.
A preview [Open Telemetry](opentelemetry-enable.md?tabs=net) offering is also av
### [Java](#tab/java)
-Auto-instrumentation is available for any environment using [Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](java-in-process-agent.md).
+Integrated Auto-Instrumentation is available for Java Apps hosted on [Azure App Service](azure-web-apps-java.md) and [Azure Functions](monitor-functions.md).
-Integrated Auto-Instrumentation is available for Java Apps hosted on [Azure App Service](azure-web-apps-java.md) and [Azure Functions](monitor-functions.md#distributed-tracing-for-java-applications-public-preview).
+Auto-instrumentation is available for any environment using [Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](java-in-process-agent.md).
### [Node.js](#tab/nodejs)
azure-monitor Data Retention Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-retention-privacy.md
Title: Data retention and storage in Azure Application Insights | Microsoft Docs
-description: Retention and privacy policy statement
+ Title: Data retention and storage in Application Insights | Microsoft Docs
+description: Retention and privacy policy statement for Application Insights.
Last updated 06/30/2020
# Data collection, retention, and storage in Application Insights
-When you install [Azure Application Insights][start] SDK in your app, it sends telemetry about your app to the Cloud. Naturally, responsible developers want to know exactly what data is sent, what happens to the data, and how they can keep control of it. In particular, could sensitive data be sent, where is it stored, and how secure is it?
+When you install the [Application Insights][start] SDK in your app, it sends telemetry about your app to the cloud. As a responsible developer, you want to know exactly what data is sent, what happens to the data, and how you can keep control of it. In particular, could sensitive data be sent, where is it stored, and how secure is it?
First, the short answer:
-* The standard telemetry modules that run "out of the box" are unlikely to send sensitive data to the service. The telemetry is concerned with load, performance and usage metrics, exception reports, and other diagnostic data. The main user data visible in the diagnostic reports are URLs; but your app shouldn't in any case put sensitive data in plain text in a URL.
+* The standard telemetry modules that run "out of the box" are unlikely to send sensitive data to the service. The telemetry is concerned with load, performance and usage metrics, exception reports, and other diagnostic data. The main user data visible in the diagnostic reports are URLs. But your app shouldn't, in any case, put sensitive data in plain text in a URL.
* You can write code that sends more custom telemetry to help you with diagnostics and monitoring usage. (This extensibility is a great feature of Application Insights.) It would be possible, by mistake, to write this code so that it includes personal and other sensitive data. If your application works with such data, you should apply a thorough review process to all the code you write.
-* While developing and testing your app, it's easy to inspect what's being sent by the SDK. The data appears in the debugging output windows of the IDE and browser.
-* You can select the location when you create a new Application Insights resource. Know more about Application Insights availability per region [here](https://azure.microsoft.com/global-infrastructure/services/?products=all).
-* Review the collected data, as this collection may include data that is allowed in some circumstances but not others. A good example of this circumstance is Device Name. The device name from a server does not affect privacy and is useful, but a device name from a phone or laptop may have privacy implications and be less useful. An SDK developed primarily to target servers, would collect device name by default, and this may need to be overwritten in both normal events and exceptions.
+* While you develop and test your app, it's easy to inspect what's being sent by the SDK. The data appears in the debugging output windows of the IDE and browser.
+* You can select the location when you create a new Application Insights resource. For more information about Application Insights availability per region, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all).
+* Review the collected data because it might include data that's allowed in some circumstances but not others. A good example of this circumstance is device name. The device name from a server doesn't affect privacy and is useful. A device name from a phone or laptop might have privacy implications and be less useful. An SDK developed primarily to target servers would collect device name by default. This capability might need to be overwritten in both normal events and exceptions.
-The rest of this article elaborates more fully on these answers. It's designed to be self-contained, so that you can show it to colleagues who aren't part of your immediate team.
+The rest of this article discusses these points more fully. The article is self-contained, so you can share it with colleagues who aren't part of your immediate team.
## What is Application Insights?
-[Azure Application Insights][start] is a service provided by Microsoft that helps you improve the performance and usability of your live application. It monitors your application all the time it's running, both during testing and after you've published or deployed it. Application Insights creates charts and tables that show you, for example, what times of day you get most users, how responsive the app is, and how well it's served by any external services that it depends on. If there are crashes, failures or performance issues, you can search through the telemetry data in detail to diagnose the cause. And the service will send you emails if there are any changes in the availability and performance of your app.
-In order to get this functionality, you install an Application Insights SDK in your application, which becomes part of its code. When your app is running, the SDK monitors its operation and sends telemetry to the Application Insights service. This is a cloud service hosted by [Microsoft Azure](https://azure.com). (But Application Insights works for any applications, not just applications that are hosted in Azure.)
+[Application Insights][start] is a service provided by Microsoft that helps you improve the performance and usability of your live application. It monitors your application all the time it's running, both during testing and after you've published or deployed it. Application Insights creates charts and tables that show you informative metrics. For example, you might see what times of day you get most users, how responsive the app is, and how well it's served by any external services that it depends on. If there are failures or performance issues, you can search through the telemetry data to diagnose the cause. The service sends you emails if there are any changes in the availability and performance of your app.
-The Application Insights service stores and analyzes the telemetry. To see the analysis or search through the stored telemetry, you sign in to your Azure account and open the Application Insights resource for your application. You can also share access to the data with other members of your team, or with specified Azure subscribers.
+To get this functionality, you install an Application Insights SDK in your application, which becomes part of its code. When your app is running, the SDK monitors its operation and sends telemetry to Application Insights, which is a cloud service hosted by [Microsoft Azure](https://azure.com). Application Insights also works for any application, not just applications that are hosted in Azure.
-You can have data exported from the Application Insights service, for example to a database or to external tools. You provide each tool with a special key that you obtain from the service. The key can be revoked if necessary.
+Application Insights stores and analyzes the telemetry. To see the analysis or search through the stored telemetry, you sign in to your Azure account and open the Application Insights resource for your application. You can also share access to the data with other members of your team, or with specified Azure subscribers.
-Application Insights SDKs are available for a range of application types: web services hosted in your own Java EE or ASP.NET servers, or in Azure; web clients - that is, the code running in a web page; desktop apps and services; device apps such as Windows Phone, iOS, and Android. They all send telemetry to the same service.
+You can have data exported from Application Insights, for example, to a database or to external tools. You provide each tool with a special key that you obtain from the service. The key can be revoked if necessary.
+
+Application Insights SDKs are available for a range of application types:
+
+- Web services hosted in your own Java EE or ASP.NET servers, or in Azure
+- Web clients, that is, the code running in a webpage
+- Desktop apps and services
+- Device apps such as Windows Phone, iOS, and Android
+
+They all send telemetry to the same service.
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)] ## What data does it collect?+ There are three sources of data:
-* The SDK, which you integrate with your app either [in development](./asp-net.md) or [at run time](./status-monitor-v2-overview.md). There are different SDKs for different application types. There's also an [SDK for web pages](./javascript.md), which loads into the end user's browser along with the page.
+* The SDK, which you integrate with your app either [in development](./asp-net.md) or [at runtime](./status-monitor-v2-overview.md). There are different SDKs for different application types. There's also an [SDK for webpages](./javascript.md), which loads into the user's browser along with the page.
* Each SDK has many [modules](./configuration-with-applicationinsights-config.md), which use different techniques to collect different types of telemetry. * If you install the SDK in development, you can use its API to send your own telemetry, in addition to the standard modules. This custom telemetry can include any data you want to send. * In some web servers, there are also agents that run alongside the app and send telemetry about CPU, memory, and network occupancy. For example, Azure VMs, Docker hosts, and [Java application servers](./java-in-process-agent.md) can have such agents.
-* [Availability tests](./monitor-web-app-availability.md) are processes run by Microsoft that send requests to your web app at regular intervals. The results are sent to the Application Insights service.
+* [Availability tests](./monitor-web-app-availability.md) are processes run by Microsoft that send requests to your web app at regular intervals. The results are sent to Application Insights.
+
+### What kind of data is collected?
-### What kinds of data are collected?
The main categories are:
-* [Web server telemetry](./asp-net.md) - HTTP requests. Uri, time taken to process the request, response code, client IP address. `Session id`.
-* [Web pages](./javascript.md) - Page, user and session counts. Page load times. Exceptions. Ajax calls.
-* Performance counters - Memory, CPU, IO, Network occupancy.
-* Client and server context - OS, locale, device type, browser, screen resolution.
-* [Exceptions](./asp-net-exceptions.md) and crashes - **stack dumps**, `build id`, CPU type.
-* [Dependencies](./asp-net-dependencies.md) - calls to external services such as REST, SQL, AJAX. URI or connection string, duration, success, command.
-* [Availability tests](./monitor-web-app-availability.md) - duration of test and steps, responses.
-* [Trace logs](./asp-net-trace-logs.md) and [custom telemetry](./api-custom-events-metrics.md) - **anything you code into your logs or telemetry**.
+* [Web server telemetry](./asp-net.md): HTTP requests. URI, time taken to process the request, response code, and client IP address. `Session id`.
+* [Webpages](./javascript.md): Page, user, and session counts. Page load times. Exceptions. Ajax calls.
+* Performance counters: Memory, CPU, IO, and network occupancy.
+* Client and server context: OS, locale, device type, browser, and screen resolution.
+* [Exceptions](./asp-net-exceptions.md) and crashes: Stack dumps, `build id`, and CPU type.
+* [Dependencies](./asp-net-dependencies.md): Calls to external services such as REST, SQL, and AJAX. URI or connection string, duration, success, and command.
+* [Availability tests](./monitor-web-app-availability.md): Duration of test and steps, and responses.
+* [Trace logs](./asp-net-trace-logs.md) and [custom telemetry](./api-custom-events-metrics.md): Anything you code into your logs or telemetry.
-[More detail](#data-sent-by-application-insights).
+For more information, see the section [Data sent by Application Insights](#data-sent-by-application-insights).
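As an illustration of the trace logs and custom telemetry category above, here's a minimal sketch using the .NET SDK; the event name, property, and method are hypothetical, and `telemetryClient` is assumed to be an already-configured `TelemetryClient`.

```csharp
using System.Collections.Generic;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

static class CheckoutTelemetry
{
    // Sends one custom event and one trace message; only the data you pass here is collected.
    public static void RecordCheckout(TelemetryClient telemetryClient, string cartId)
    {
        // Custom event with a property you choose. Avoid putting sensitive data in these values.
        telemetryClient.TrackEvent("CheckoutCompleted",
            new Dictionary<string, string> { ["cartId"] = cartId });

        // Trace message with a severity level.
        telemetryClient.TrackTrace("Checkout pipeline finished", SeverityLevel.Information);
    }
}
```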
## How can I verify what's being collected?
-If you're developing the app using Visual Studio, run the app in debug mode (F5). The telemetry appears in the Output window. From there, you can copy it and format it as JSON for easy inspection.
+
+If you're developing an app using Visual Studio, run the app in debug mode (F5). The telemetry appears in the **Output** window. From there, you can copy it and format it as JSON for easy inspection.
![Screenshot that shows running the app in debug mode in Visual Studio.](./media/data-retention-privacy/06-vs.png)
-There's also a more readable view in the Diagnostics window.
+There's also a more readable view in the **Diagnostics** window.
-For web pages, open your browser's debugging window.
+For webpages, open your browser's debugging window. Select F12 and open the **Network** tab.
-![Press F12 and open the Network tab.](./media/data-retention-privacy/08-browser.png)
+![Screenshot that shows the open Network tab.](./media/data-retention-privacy/08-browser.png)
-### Can I write code to filter the telemetry before it is sent?
-This would be possible by writing a [telemetry processor plugin](./api-filtering-sampling.md).
+### Can I write code to filter the telemetry before it's sent?
+
+You'll need to write a [telemetry processor plug-in](./api-filtering-sampling.md).
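For illustration, here's a minimal telemetry processor sketch for the .NET SDK; the class name and the filtering condition are examples, not a prescribed implementation. It drops telemetry that comes from synthetic sources (such as availability tests) and forwards everything else to the next processor in the chain.

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

// Example filter: drop telemetry from synthetic sources before it's sent.
public class SyntheticSourceFilter : ITelemetryProcessor
{
    private readonly ITelemetryProcessor _next;

    public SyntheticSourceFilter(ITelemetryProcessor next)
    {
        _next = next;
    }

    public void Process(ITelemetry item)
    {
        // Skip items flagged as synthetic traffic; pass everything else along the chain.
        if (!string.IsNullOrEmpty(item.Context.Operation.SyntheticSource))
        {
            return;
        }

        _next.Process(item);
    }
}
```

You then register the processor, for example in ApplicationInsights.config or through the telemetry processor chain builder, as described in the linked article.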
## How long is the data kept?
-Raw data points (that is, items that you can query in Analytics and inspect in Search) are kept for up to 730 days. You can [select a retention duration](../logs/data-retention-archive.md#set-retention-and-archive-policy-by-table) of 30, 60, 90, 120, 180, 270, 365, 550 or 730 days. If you need to keep data longer than 730 days, you can use [Continuous Export](./export-telemetry.md) to copy it to a storage account during data ingestion.
-Data kept longer than 90 days will incur addition charges. Learn more about Application Insights pricing on the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/).
+Raw data points (that is, items that you can query in Analytics and inspect in Search) are kept for up to 730 days. You can [select a retention duration](../logs/data-retention-archive.md#set-retention-and-archive-policy-by-table) of 30, 60, 90, 120, 180, 270, 365, 550, or 730 days. If you need to keep data longer than 730 days, you can use [Continuous Export](./export-telemetry.md) to copy it to a storage account during data ingestion.
+
+Data kept longer than 90 days incurs extra charges. For more information about Application Insights pricing, see the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/).
-Aggregated data (that is, counts, averages and other statistical data that you see in Metric Explorer) are retained at a grain of 1 minute for 90 days.
+Aggregated data (that is, counts, averages, and other statistical data that you see in metric explorer) are retained at a grain of 1 minute for 90 days.
[Debug snapshots](./snapshot-debugger.md) are stored for 15 days. This retention policy is set on a per-application basis. If you need to increase this value, you can request an increase by opening a support case in the Azure portal. ## Who can access the data?
-The data is visible to you and, if you have an organization account, your team members.
+
+The data is visible to you and, if you have an organization account, your team members.
It can be exported by you and your team members and could be copied to other locations and passed on to other people. #### What does Microsoft do with the information my app sends to Application Insights?
-Microsoft uses the data only in order to provide the service to you.
+
+Microsoft uses the data only to provide the service to you.
## Where is the data held?
-* You can select the location when you create a new Application Insights resource. Know more about Application Insights availability per region [here](https://azure.microsoft.com/global-infrastructure/services/?products=all).
+
+You can select the location when you create a new Application Insights resource. For more information about Application Insights availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all).
## How secure is my data?
-Application Insights is an Azure Service. Security policies are described in the [Azure Security, Privacy, and Compliance white paper](https://go.microsoft.com/fwlink/?linkid=392408).
+
+Application Insights is an Azure service. Security policies are described in the [Azure Security, Privacy, and Compliance white paper](https://go.microsoft.com/fwlink/?linkid=392408).
The data is stored in Microsoft Azure servers. For accounts in the Azure portal, account restrictions are described in the [Azure Security, Privacy, and Compliance document](https://go.microsoft.com/fwlink/?linkid=392408).
-Access to your data by Microsoft personnel is restricted. We access your data only with your permission and if it is necessary to support your use of Application Insights.
+Access to your data by Microsoft personnel is restricted. We access your data only with your permission and if it's necessary to support your use of Application Insights.
-Data in aggregate across all our customers' applications (such as data rates and average size of traces) is used to improve Application Insights.
+Data in aggregate across all our customers' applications, such as data rates and average size of traces, is used to improve Application Insights.
#### Could someone else's telemetry interfere with my Application Insights data?
-They could send additional telemetry to your account by using the instrumentation key, which can be found in the code of your web pages. With enough additional data, your metrics would not correctly represent your app's performance and usage.
+
+Someone could send more telemetry to your account by using the instrumentation key. This key can be found in the code of your webpages. With enough extra data, your metrics wouldn't correctly represent your app's performance and usage.
If you share code with other projects, remember to remove your instrumentation key. ## Is the data encrypted?
-All data is encrypted at rest and as it moves between data centers.
-#### Is the data encrypted in transit from my application to Application Insights servers?
-Yes, we use https to send data to the portal from nearly all SDKs, including web servers, devices, and HTTPS web pages.
+All data is encrypted at rest and as it moves between datacenters.
-## Does the SDK create temporary local storage?
+#### Is the data encrypted in transit from my application to Application Insights servers?
-Yes, certain Telemetry Channels will persist data locally if an endpoint cannot be reached. Please review below to see which frameworks and telemetry channels are affected.
+Yes. We use HTTPS to send data to the portal from nearly all SDKs, including web servers, devices, and HTTPS webpages.
-Telemetry channels that utilize local storage create temp files in the TEMP or APPDATA directories, which are restricted to the specific account running your application. This may happen when an endpoint was temporarily unavailable or you hit the throttling limit. Once this issue is resolved, the telemetry channel will resume sending all the new and persisted data.
+## Does the SDK create temporary local storage?
-This persisted data is not encrypted locally. If this is a concern, review the data and restrict the collection of private data. (For more information, see [How to export and delete private data](../logs/personal-data-mgmt.md#exporting-and-deleting-personal-data).)
+Yes. Certain telemetry channels will persist data locally if an endpoint can't be reached. The following paragraphs describe which frameworks and telemetry channels are affected:
-If a customer needs to configure this directory with specific security requirements, it can be configured per framework. Please make sure that the process running your application has write access to this directory, but also make sure this directory is protected to avoid telemetry being read by unintended users.
+- Telemetry channels that utilize local storage create temp files in the TEMP or APPDATA directories, which are restricted to the specific account running your application. This situation might happen when an endpoint was temporarily unavailable or if you hit the throttling limit. After this issue is resolved, the telemetry channel will resume sending all the new and persisted data.
+- This persisted data isn't encrypted locally. If this issue is a concern, review the data and restrict the collection of private data. For more information, see [Export and delete private data](../logs/personal-data-mgmt.md#exporting-and-deleting-personal-data).
+- If a customer needs to configure this directory with specific security requirements, it can be configured per framework. Make sure that the process running your application has write access to this directory. Also make sure this directory is protected to avoid telemetry being read by unintended users.
### Java
-`C:\Users\username\AppData\Local\Temp` is used for persisting data. This location isn't configurable from the config directory and the permissions to access this folder are restricted to the specific user with required credentials. (For more information, see [implementation](https://github.com/Microsoft/ApplicationInsights-Java/blob/40809cb6857231e572309a5901e1227305c27c1a/core/src/main/java/com/microsoft/applicationinsights/internal/util/LocalFileSystemUtils.java#L48-L72).)
-
-### .NET
+The folder `C:\Users\username\AppData\Local\Temp` is used for persisting data. This location isn't configurable from the config directory, and the permissions to access this folder are restricted to the specific user with required credentials. For more information, see [implementation](https://github.com/Microsoft/ApplicationInsights-Java/blob/40809cb6857231e572309a5901e1227305c27c1a/core/src/main/java/com/microsoft/applicationinsights/internal/util/LocalFileSystemUtils.java#L48-L72).
-By default `ServerTelemetryChannel` uses the current userΓÇÖs local app data folder `%localAppData%\Microsoft\ApplicationInsights` or temp folder `%TMP%`. (See [implementation](https://github.com/Microsoft/ApplicationInsights-dotnet/blob/91e9c91fcea979b1eec4e31ba8e0fc683bf86802/src/ServerTelemetryChannel/Implementation/ApplicationFolderProvider.cs#L54-L84) here.)
+### .NET
+By default, `ServerTelemetryChannel` uses the current user's local app data folder `%localAppData%\Microsoft\ApplicationInsights` or temp folder `%TMP%`. For more information, see [implementation](https://github.com/Microsoft/ApplicationInsights-dotnet/blob/91e9c91fcea979b1eec4e31ba8e0fc683bf86802/src/ServerTelemetryChannel/Implementation/ApplicationFolderProvider.cs#L54-L84).
Via configuration file: ```xml
Via configuration file:
Via code: -- Remove ServerTelemetryChannel from configuration file
+- Remove `ServerTelemetryChannel` from the configuration file.
- Add this snippet to your configuration:+ ```csharp ServerTelemetryChannel channel = new ServerTelemetryChannel(); channel.StorageFolder = @"D:\NewTestFolder";
Via code:
### NetCore
-By default `ServerTelemetryChannel` uses the current userΓÇÖs local app data folder `%localAppData%\Microsoft\ApplicationInsights` or temp folder `%TMP%`. (See [implementation](https://github.com/Microsoft/ApplicationInsights-dotnet/blob/91e9c91fcea979b1eec4e31ba8e0fc683bf86802/src/ServerTelemetryChannel/Implementation/ApplicationFolderProvider.cs#L54-L84) here.)
+By default, `ServerTelemetryChannel` uses the current user's local app data folder `%localAppData%\Microsoft\ApplicationInsights` or temp folder `%TMP%`. For more information, see [implementation](https://github.com/Microsoft/ApplicationInsights-dotnet/blob/91e9c91fcea979b1eec4e31ba8e0fc683bf86802/src/ServerTelemetryChannel/Implementation/ApplicationFolderProvider.cs#L54-L84).
In a Linux environment, local storage will be disabled unless a storage folder is specified. > [!NOTE]
-> With the release 2.15.0-beta3 and greater local storage is now automatically created for Linux, Mac, and Windows. For non Windows systems the SDK will automatically create a local storage folder based on the following logic:
-> - `${TMPDIR}` - if `${TMPDIR}` environment variable is set this location is used.
-> - `/var/tmp` - if the previous location does not exist we try `/var/tmp`.
-> - `/tmp` - if both the previous locations do not exist we try `tmp`.
-> - If none of those locations exist local storage is not created and manual configuration is still required. [For full implementation details](https://github.com/microsoft/ApplicationInsights-dotnet/pull/1860).
+> With the release 2.15.0-beta3 and greater, local storage is now automatically created for Linux, Mac, and Windows. For non-Windows systems, the SDK will automatically create a local storage folder based on the following logic:
+>
+> - `${TMPDIR}`: If `${TMPDIR}` environment variable is set, this location is used.
+> - `/var/tmp`: If the previous location doesn't exist, we try `/var/tmp`.
+> - `/tmp`: If both the previous locations don't exist, we try `tmp`.
+> - If none of those locations exist, local storage isn't created and manual configuration is still required.
+>
+> For full implementation details, see [ServerTelemetryChannel stores telemetry data in default folder during transient errors in non-Windows environments](https://github.com/microsoft/ApplicationInsights-dotnet/pull/1860).
The following code snippet shows how to set `ServerTelemetryChannel.StorageFolder` in the `ConfigureServices()` method of your `Startup.cs` class:
The following code snippet shows how to set `ServerTelemetryChannel.StorageFolde
services.AddSingleton(typeof(ITelemetryChannel), new ServerTelemetryChannel () {StorageFolder = "/tmp/myfolder"}); ```
-(For more information, see [AspNetCore Custom Configuration](https://github.com/Microsoft/ApplicationInsights-aspnetcore/wiki/Custom-Configuration).)
+For more information, see [AspNetCore custom configuration](https://github.com/Microsoft/ApplicationInsights-aspnetcore/wiki/Custom-Configuration).
### Node.js
-By default `%TEMP%/appInsights-node{INSTRUMENTATION KEY}` is used for persisting data. Permissions to access this folder are restricted to the current user and Administrators. (See [implementation](https://github.com/Microsoft/ApplicationInsights-node.js/blob/develop/Library/Sender.ts) here.)
+By default, `%TEMP%/appInsights-node{INSTRUMENTATION KEY}` is used for persisting data. Permissions to access this folder are restricted to the current user and administrators. For more information, see the [implementation](https://github.com/Microsoft/ApplicationInsights-node.js/blob/develop/Library/Sender.ts).
The folder prefix `appInsights-node` can be overridden by changing the runtime value of the static variable `Sender.TEMPDIR_PREFIX` found in [Sender.ts](https://github.com/Microsoft/ApplicationInsights-node.js/blob/7a1ecb91da5ea0febf5ceab13d6a4bf01a63933d/Library/Sender.ts#L384). ### JavaScript (browser)
-[HTML5 Session Storage](https://developer.mozilla.org/en-US/docs/Web/API/Window/sessionStorage) is used to persist data. Two separate buffers are used: `AI_buffer` and `AI_sent_buffer`. Telemetry that is batched and waiting to be sent is stored in `AI_buffer`. Telemetry that was just sent is placed in `AI_sent_buffer` until the ingestion server responds that it was successfully received. When telemetry is successfully received, it's removed from all buffers. On transient failures (for example, a user loses network connectivity), telemetry remains in `AI_buffer` until it is successfully received or the ingestion server responds that the telemetry is invalid (bad schema or too old, for example).
+[HTML5 Session Storage](https://developer.mozilla.org/en-US/docs/Web/API/Window/sessionStorage) is used to persist data. Two separate buffers are used: `AI_buffer` and `AI_sent_buffer`. Telemetry that's batched and waiting to be sent is stored in `AI_buffer`. Telemetry that was just sent is placed in `AI_sent_buffer` until the ingestion server responds that it was successfully received.
+
+When telemetry is successfully received, it's removed from all buffers. On transient failures (for example, a user loses network connectivity), telemetry remains in `AI_buffer` until it's successfully received or the ingestion server responds that the telemetry is invalid (bad schema or too old, for example).
Telemetry buffers can be disabled by setting [`enableSessionStorageBuffer`](https://github.com/microsoft/ApplicationInsights-JS/blob/17ef50442f73fd02a758fbd74134933d92607ecf/legacy/JavaScript/JavaScriptSDK.Interfaces/IConfig.ts#L31) to `false`. When session storage is turned off, a local array is instead used as persistent storage. Because the JavaScript SDK runs on a client device, the user has access to this storage location via their browser's developer tools. ### OpenCensus Python
-By default OpenCensus Python SDK uses the current user folder `%username%/.opencensus/.azure/`. Permissions to access this folder are restricted to the current user and Administrators. (See [implementation](https://github.com/census-instrumentation/opencensus-python/blob/master/contrib/opencensus-ext-azure/opencensus/ext/azure/common/storage.py) here.) The folder with your persisted data will be named after the Python file that generated the telemetry.
+By default, OpenCensus Python SDK uses the current user folder `%username%/.opencensus/.azure/`. Permissions to access this folder are restricted to the current user and administrators. For more information, see the [implementation](https://github.com/census-instrumentation/opencensus-python/blob/master/contrib/opencensus-ext-azure/opencensus/ext/azure/common/storage.py). The folder with your persisted data will be named after the Python file that generated the telemetry.
-You may change the location of your storage file by passing in the `storage_path` parameter in the constructor of the exporter you are using.
+You can change the location of your storage file by passing in the `storage_path` parameter in the constructor of the exporter you're using.
```python AzureLogHandler(
AzureLogHandler(
## How do I send data to Application Insights using TLS 1.2?
-To ensure the security of data in transit to the Application Insights endpoints, we strongly encourage customers to configure their application to use at least Transport Layer Security (TLS) 1.2. Older versions of TLS/Secure Sockets Layer (SSL) have been found to be vulnerable and while they still currently work to allow backwards compatibility, they are **not recommended**, and the industry is quickly moving to abandon support for these older protocols.
+To ensure the security of data in transit to the Application Insights endpoints, we strongly encourage customers to configure their application to use at least Transport Layer Security (TLS) 1.2. Older versions of TLS/Secure Sockets Layer (SSL) have been found to be vulnerable. Although they still currently work to allow backward compatibility, they *aren't recommended*. The industry is quickly moving to abandon support for these older protocols.
-The [PCI Security Standards Council](https://www.pcisecuritystandards.org/) has set a [deadline of June 30, 2018](https://www.pcisecuritystandards.org/pdfs/PCI_SSC_Migrating_from_SSL_and_Early_TLS_Resource_Guide.pdf) to disable older versions of TLS/SSL and upgrade to more secure protocols. Once Azure drops legacy support, if your application/clients cannot communicate over at least TLS 1.2 you would not be able to send data to Application Insights. The approach you take to test and validate your application's TLS support will vary depending on the operating system/platform as well as the language/framework your application uses.
+The [PCI Security Standards Council](https://www.pcisecuritystandards.org/) has set a [deadline of June 30, 2018](https://www.pcisecuritystandards.org/pdfs/PCI_SSC_Migrating_from_SSL_and_Early_TLS_Resource_Guide.pdf) to disable older versions of TLS/SSL and upgrade to more secure protocols. After Azure drops legacy support, if your application or clients can't communicate over at least TLS 1.2, you wouldn't be able to send data to Application Insights. The approach you take to test and validate your application's TLS support will vary depending on the operating system or platform and the language or framework your application uses.
-We do not recommend explicitly setting your application to only use TLS 1.2 unless necessary as this can break platform level security features that allow you to automatically detect and take advantage of newer more secure protocols as they become available such as TLS 1.3. We recommend performing a thorough audit of your application's code to check for hardcoding of specific TLS/SSL versions.
+We do not recommend explicitly setting your application to only use TLS 1.2, unless necessary. This setting can break platform-level security features that allow you to automatically detect and take advantage of newer more secure protocols as they become available, such as TLS 1.3. We recommend that you perform a thorough audit of your application's code to check for hardcoding of specific TLS/SSL versions.
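For .NET Framework applications, that audit mostly means removing hardcoded `SecurityProtocolType` assignments. If you do set `ServicePointManager` explicitly, a minimal sketch that defers to the operating system (available in .NET Framework 4.7 and later; the class and method names here are hypothetical) looks like this:

```csharp
using System.Net;

static class TlsConfiguration
{
    // Call once at startup only if your code previously pinned a protocol version.
    public static void UseOsDefaults()
    {
        // Let the OS negotiate the protocol (TLS 1.2 or later on current Windows builds)
        // instead of pinning a single TLS version. This is already the default on .NET Framework 4.7 and later.
        ServicePointManager.SecurityProtocol = SecurityProtocolType.SystemDefault;
    }
}
```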
-### Platform/Language specific guidance
+### Platform/Language-specific guidance
-|Platform/Language | Support | More Information |
+|Platform/Language | Support | More information |
| | | |
-| Azure App Services | Supported, configuration may be required. | Support was announced in April 2018. Read the announcement for [configuration details](https://azure.github.io/AppService/2018/04/17/App-Service-and-Functions-hosted-apps-can-now-update-TLS-versions!). |
-| Azure Function Apps | Supported, configuration may be required. | Support was announced in April 2018. Read the announcement for [configuration details](https://azure.github.io/AppService/2018/04/17/App-Service-and-Functions-hosted-apps-can-now-update-TLS-versions!). |
-|.NET | Supported, Long Term Support (LTS) | For detailed configuration information, refer to [these instructions](/dotnet/framework/network-programming/tls). |
-|Status Monitor | Supported, configuration required | Status Monitor relies on [OS Configuration](/windows-server/security/tls/tls-registry-settings) + [.NET Configuration](/dotnet/framework/network-programming/tls#support-for-tls-12) to support TLS 1.2.
-|Node.js | Supported, in v10.5.0, configuration may be required. | Use the [official Node.js TLS/SSL documentation](https://nodejs.org/api/tls.html) for any application-specific configuration. |
+| Azure App Services | Supported, configuration might be required. | Support was announced in April 2018. Read the announcement for [configuration details](https://azure.github.io/AppService/2018/04/17/App-Service-and-Functions-hosted-apps-can-now-update-TLS-versions!). |
+| Azure Function Apps | Supported, configuration might be required. | Support was announced in April 2018. Read the announcement for [configuration details](https://azure.github.io/AppService/2018/04/17/App-Service-and-Functions-hosted-apps-can-now-update-TLS-versions!). |
+|.NET | Supported, Long Term Support (LTS). | For detailed configuration information, refer to [these instructions](/dotnet/framework/network-programming/tls). |
+|Status Monitor | Supported, configuration required. | Status Monitor relies on [OS Configuration](/windows-server/security/tls/tls-registry-settings) + [.NET Configuration](/dotnet/framework/network-programming/tls#support-for-tls-12) to support TLS 1.2.
+|Node.js | Supported, in v10.5.0, configuration might be required. | Use the [official Node.js TLS/SSL documentation](https://nodejs.org/api/tls.html) for any application-specific configuration. |
|Java | Supported, JDK support for TLS 1.2 was added in [JDK 6 update 121](https://www.oracle.com/technetwork/java/javase/overview-156328.html#R160_121) and [JDK 7](https://www.oracle.com/technetwork/java/javase/7u131-relnotes-3338543.html). | JDK 8 uses [TLS 1.2 by default](https://blogs.oracle.com/java-platform-group/jdk-8-will-use-tls-12-as-default). | |Linux | Linux distributions tend to rely on [OpenSSL](https://www.openssl.org) for TLS 1.2 support. | Check the [OpenSSL Changelog](https://www.openssl.org/news/changelog.html) to confirm your version of OpenSSL is supported.|
-| Windows 8.0 - 10 | Supported, and enabled by default. | To confirm that you are still using the [default settings](/windows-server/security/tls/tls-registry-settings). |
-| Windows Server 2012 - 2016 | Supported, and enabled by default. | To confirm that you are still using the [default settings](/windows-server/security/tls/tls-registry-settings) |
+| Windows 8.0 - 10 | Supported, and enabled by default. | Confirm that you're still using the [default settings](/windows-server/security/tls/tls-registry-settings). |
+| Windows Server 2012 - 2016 | Supported, and enabled by default. | Confirm that you're still using the [default settings](/windows-server/security/tls/tls-registry-settings). |
| Windows 7 SP1 and Windows Server 2008 R2 SP1 | Supported, but not enabled by default. | See the [Transport Layer Security (TLS) registry settings](/windows-server/security/tls/tls-registry-settings) page for details on how to enable. | | Windows Server 2008 SP2 | Support for TLS 1.2 requires an update. | See [Update to add support for TLS 1.2](https://support.microsoft.com/help/4019276/update-to-add-support-for-tls-1-1-and-tls-1-2-in-windows-server-2008-s) in Windows Server 2008 SP2. |
-|Windows Vista | Not Supported. | N/A
+|Windows Vista | Not supported. | N/A
### Check what version of OpenSSL your Linux distribution is running
openssl version -a
### Run a test TLS 1.2 transaction on Linux
-To run a preliminary test to see if your Linux system can communicate over TLS 1.2., open the terminal and run:
+To run a preliminary test to see if your Linux system can communicate over TLS 1.2, open the terminal and run:
```terminal openssl s_client -connect bing.com:443 -tls1_2
openssl s_client -connect bing.com:443 -tls1_2
## Personal data stored in Application Insights
-Our [Application Insights personal data article](../logs/personal-data-mgmt.md) discusses this issue in-depth.
+For an in-depth discussion on this issue, see [Managing personal data in Log Analytics and Application Insights](../logs/personal-data-mgmt.md).
#### Can my users turn off Application Insights?+ Not directly. We don't provide a switch that your users can operate to turn off Application Insights.
-However, you can implement such a feature in your application. All the SDKs include an API setting that turns off telemetry collection.
+You can implement such a feature in your application. All the SDKs include an API setting that turns off telemetry collection.
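For example, with the .NET SDK you could expose your own opt-out setting and flip the configuration's `DisableTelemetry` flag. Here's a minimal sketch, where the class, method, and the source of the opt-out value are hypothetical:

```csharp
using Microsoft.ApplicationInsights.Extensibility;

static class TelemetryOptOut
{
    // Apply the user's choice to the TelemetryConfiguration your app uses.
    public static void Apply(TelemetryConfiguration configuration, bool userOptedOut)
    {
        // When true, TrackXxx calls become no-ops and no telemetry is sent to the service.
        configuration.DisableTelemetry = userOptedOut;
    }
}
```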
## Data sent by Application Insights
-The SDKs vary between platforms, and there are several components that you can install. (Refer to [Application Insights - overview][start].) Each component sends different data.
+
+The SDKs vary between platforms, and there are several components that you can install. For more information, see [Application Insights overview][start]. Each component sends different data.
#### Classes of data sent in different scenarios
The SDKs vary between platforms, and there are several components that you can i
| [Add Application Insights SDK to a .NET web project][greenbrown] |ServerContext<br/>Inferred<br/>Perf counters<br/>Requests<br/>**Exceptions**<br/>Session<br/>users | | [Install Status Monitor on IIS][redfield] |Dependencies<br/>ServerContext<br/>Inferred<br/>Perf counters | | [Add Application Insights SDK to a Java web app][java] |ServerContext<br/>Inferred<br/>Request<br/>Session<br/>users |
-| [Add JavaScript SDK to web page][client] |ClientContext <br/>Inferred<br/>Page<br/>ClientPerf<br/>Ajax |
+| [Add JavaScript SDK to webpage][client] |ClientContext <br/>Inferred<br/>Page<br/>ClientPerf<br/>Ajax |
| [Define default properties][apiproperties] |**Properties** on all standard and custom events |
| [Call TrackMetric][api] |Numeric values<br/>**Properties** |
| [Call Track*][api] |Event name<br/>**Properties** |
| [Call TrackException][api] |**Exceptions**<br/>Stack dump<br/>**Properties** |
-| SDK can't collect data. For example: <br/> - can't access perf counters<br/> - exception in telemetry initializer |SDK diagnostics |
+| SDK can't collect data. For example: <br/> - Can't access perf counters<br/> - Exception in telemetry initializer |SDK diagnostics |
For [SDKs for other platforms][platforms], see their documents.
| ClientContext |OS, locale, language, network, window resolution |
| Session |`session id` |
| ServerContext |Machine name, locale, OS, device, user session, user context, operation |
-| Inferred |geo location from IP address, timestamp, OS, browser |
+| Inferred |Geolocation from IP address, timestamp, OS, browser |
| Metrics |Metric name and value |
| Events |Event name and value |
| PageViews |URL and page name or screen name |
| Client perf |URL/page name, browser load time |
-| Ajax |HTTP calls from web page to server |
+| Ajax |HTTP calls from webpage to server |
| Requests |URL, duration, response code |
-| Dependencies |Type(SQL, HTTP, ...), connection string, or URI, sync/async, duration, success, SQL statement (with Status Monitor) |
-| **Exceptions** |Type, **message**, call stacks, source file, line number, `thread id` |
-| Crashes |`Process id`, `parent process id`, `crash thread id`; application patch, `id`, build; exception type, address, reason; obfuscated symbols and registers, binary start and end addresses, binary name and path, cpu type |
-| Trace |**Message** and severity level |
+| Dependencies |Type (SQL, HTTP, ...), connection string, or URI, sync/async, duration, success, SQL statement (with Status Monitor) |
+| Exceptions |Type, message, call stacks, source file, line number, `thread id` |
+| Crashes |`Process id`, `parent process id`, `crash thread id`; application patch, `id`, build; exception type, address, reason; obfuscated symbols and registers, binary start and end addresses, binary name and path, cpu type |
+| Trace |Message and severity level |
| Perf counters |Processor time, available memory, request rate, exception rate, process private bytes, IO rate, request duration, request queue length |
| Availability |Web test response code, duration of each test step, test name, timestamp, success, response time, test location |
-| SDK diagnostics |Trace message or Exception |
+| SDK diagnostics |Trace message or exception |
-You can [switch off some of the data by editing ApplicationInsights.config][config]
+You can [switch off some of the data by editing ApplicationInsights.config][config].
> [!NOTE]
-> Client IP is used to infer geographic location, but by default IP data is no longer stored and all zeroes are written to the associated field. To understand more about personal data handling we recommend this [article](../logs/personal-data-mgmt.md#application-data). If you need to store IP address data our [IP address collection article](./ip-collection.md) will walk you through your options.
+> Client IP is used to infer geographic location, but by default IP data is no longer stored and all zeroes are written to the associated field. To understand more about personal data handling, see [Managing personal data in Log Analytics and Application Insights](../logs/personal-data-mgmt.md#application-data). If you need to store IP address data, [geolocation and IP address handling](./ip-collection.md) will walk you through your options.
## Can I modify or update data after it has been collected?
-No, data is read-only, and can only be deleted via the purge functionality. To learn more visit [Guidance for personal data stored in Log Analytics and Application Insights](../logs/personal-data-mgmt.md#delete).
+No. Data is read-only and can only be deleted via the purge functionality. To learn more, see [Guidance for personal data stored in Log Analytics and Application Insights](../logs/personal-data-mgmt.md#delete).
## Credits
-This product includes GeoLite2 data created by MaxMind, available from [https://www.maxmind.com](https://www.maxmind.com).
-
+This product includes GeoLite2 data created by [MaxMind](https://www.maxmind.com).
<!--Link references-->
azure-monitor Migrate From Instrumentation Keys To Connection Strings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/migrate-from-instrumentation-keys-to-connection-strings.md
Title: Migrate from Application Insights instrumentation keys to connection strings
-description: Learn the steps required to upgrade from Azure Monitor Application Insights instrumentation keys to connection strings
+description: Learn the steps required to upgrade from Azure Monitor Application Insights instrumentation keys to connection strings.
Last updated 02/14/2022
# Migrate from Application Insights instrumentation keys to connection strings
-This guide walks through migrating from [instrumentation keys](separate-resources.md#about-resources-and-instrumentation-keys) to [connection strings](sdk-connection-string.md#overview).
+This article walks you through migrating from [instrumentation keys](separate-resources.md#about-resources-and-instrumentation-keys) to [connection strings](sdk-connection-string.md#overview).
## Prerequisites

- A [supported SDK version](#supported-sdk-versions)
-- An existing [application insights resource](create-workspace-resource.md)
+- An existing [Application Insights resource](create-workspace-resource.md)
## Migration
-1. Go to the Overview blade of your Application Insights resource.
+1. Go to the **Overview** pane of your Application Insights resource.
-1. Find your connection string displayed on the right.
+1. Find your **Connection String** displayed on the right.
-1. Hover over the connection string and select the “Copy to clipboard” icon.
+1. Hover over the connection string and select the **Copy to clipboard** icon.
1. Configure the Application Insights SDK by following [How to set connection strings](sdk-connection-string.md#set-a-connection-string).
This guide walks through migrating from [instrumentation keys](separate-resource
Use environment variables to pass a connection string to the Application Insights SDK or agent.
-To set a connection string via environment variable, place the value of the connection string into an environment variable named “APPLICATIONINSIGHTS_CONNECTION_STRING”.
+To set a connection string via an environment variable, place the value of the connection string into an environment variable named `APPLICATIONINSIGHTS_CONNECTION_STRING`.
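As an illustration, a minimal Node.js sketch that relies only on this environment variable might look like the following (it assumes the `applicationinsights` npm package is installed; other SDKs read the same variable through their own configuration):

```javascript
// Sketch: no connection string appears in code. Calling setup() with no
// argument makes the SDK read APPLICATIONINSIGHTS_CONNECTION_STRING from
// the environment.
const appInsights = require("applicationinsights");

appInsights.setup().start();
```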
-This process can be [automated in your Azure deployments](../../azure-resource-manager/templates/deploy-portal.md#deploy-resources-with-arm-templates-and-azure-portal). For example, the following ARM template shows how you can automatically include the correct connection string with an App Services deployment (be sure to include any other App Settings your app requires):
+This process can be [automated in your Azure deployments](../../azure-resource-manager/templates/deploy-portal.md#deploy-resources-with-arm-templates-and-azure-portal). For example, the following Azure Resource Manager template shows how you can automatically include the correct connection string with an Azure App Service deployment. Be sure to include any other app settings your app requires:
```JSON {
This process can be [automated in your Azure deployments](../../azure-resource-m
} ```
-## New capabilities
-
-Connection strings provide a single configuration setting and eliminate the need for multiple proxy settings.
-- **Reliability:** Connection strings make telemetry ingestion more reliable by removing dependencies on global ingestion endpoints.--- **Security:** Connection strings allow authenticated telemetry ingestion by using [Azure AD authentication for Application Insights](azure-ad-authentication.md).
+## New capabilities
-- **Customized endpoints (sovereign or hybrid cloud environments):** Endpoint settings allow sending data to a specific [Azure Government region](custom-endpoints.md#regions-that-require-endpoint-modification). ([see examples](sdk-connection-string.md#set-a-connection-string))
+Connection strings provide a single configuration setting and eliminate the need for multiple proxy settings.
-- **Privacy (regional endpoints)** – Connection strings ease privacy concerns by sending data to regional endpoints, ensuring data doesn't leave a geographic region.
+- **Reliability**: Connection strings make telemetry ingestion more reliable by removing dependencies on global ingestion endpoints.
+- **Security**: Connection strings allow authenticated telemetry ingestion by using [Azure Active Directory (Azure AD) authentication for Application Insights](azure-ad-authentication.md).
+- **Customized endpoints (sovereign or hybrid cloud environments)**: Endpoint settings allow sending data to a specific [Azure Government region](custom-endpoints.md#regions-that-require-endpoint-modification). ([See examples](sdk-connection-string.md#set-a-connection-string).)
+- **Privacy (regional endpoints)**: Connection strings ease privacy concerns by sending data to regional endpoints, ensuring data doesn't leave a geographic region.
-## Supported SDK Versions
+## Supported SDK versions
- .NET and .NET Core v2.12.0+
- Java v2.5.1 and Java 3.0+
- JavaScript v2.3.0+
- NodeJS v1.5.0+
- Python v1.0.0+

## Troubleshooting
+This section provides troubleshooting solutions.
### Alert: "Transition to using connection strings for data ingestion"

Follow the [migration steps](#migration) in this article to resolve this alert.

### Missing data

- Confirm you're using a [supported SDK version](#supported-sdk-versions). If you use Application Insights integration in another Azure product offering, check its documentation on how to properly configure a connection string.
- Confirm you aren't setting both an instrumentation key and connection string at the same time. Instrumentation key settings should be removed from your configuration.
- Confirm your connection string is exactly as provided in the Azure portal.

### Environment variables aren't working
- If you hardcode an instrumentation key in your application code, that programming may take precedence before environment variables.
+ If you hardcode an instrumentation key in your application code, that programming might take precedence before environment variables.
## FAQ
+This section provides answers to common questions.
+ ### Where else can I find my connection string?
-The connection string is also included in the ARM resource properties for your Application Insights resource, under the field name “ConnectionString”.
-### How does this affect auto instrumentation?
-Auto instrumentation scenarios aren't impacted.
+The connection string is also included in the Resource Manager resource properties for your Application Insights resource, under the field name `ConnectionString`.
+
+### How does this affect auto-instrumentation?
+
+Auto-instrumentation scenarios aren't affected.
-### Can I use Azure AD authentication with auto instrumentation?
+### Can I use Azure AD authentication with auto-instrumentation?
-You can't enable [Azure AD authentication](azure-ad-authentication.md) for [auto instrumentation](codeless-overview.md) scenarios. We have plans to address this limitation in the future.
+You can't enable [Azure AD authentication](azure-ad-authentication.md) for [auto-instrumentation](codeless-overview.md) scenarios. We have plans to address this limitation in the future.
-### What is the difference between global and regional ingestion?
+### What's the difference between global and regional ingestion?
-Global ingestion sends all telemetry data to a single endpoint, no matter where this data will be stored. Regional ingestion allows you to define specific endpoints per region for data ingestion, ensuring data stays within a specific region during processing and storage.
+Global ingestion sends all telemetry data to a single endpoint, no matter where this data will be stored. Regional ingestion allows you to define specific endpoints per region for data ingestion. This capability ensures data stays within a specific region during processing and storage.
-### How do connection strings impact the billing?
+### How do connection strings affect the billing?
-Billing isn't impacted.
+Billing isn't affected.
### Microsoft Q&A
azure-monitor Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/nodejs.md
Title: Monitor Node.js services with Azure Application Insights | Microsoft Docs
+ Title: Monitor Node.js services with Application Insights | Microsoft Docs
description: Monitor performance and diagnose problems in Node.js services with Application Insights. Last updated 10/12/2021
[Application Insights](./app-insights-overview.md) monitors your components after deployment to discover performance and other issues. You can use Application Insights for Node.js services that are hosted in your datacenter, Azure VMs and web apps, and even in other public clouds.
-To receive, store, and explore your monitoring data, include the SDK in your code, and then set up a corresponding Application Insights resource in Azure. The SDK sends data to that resource for further analysis and exploration.
+To receive, store, and explore your monitoring data, include the SDK in your code. Then set up a corresponding Application Insights resource in Azure. The SDK sends data to that resource for further analysis and exploration.
The Node.js client library can automatically monitor incoming and outgoing HTTP requests, exceptions, and some system metrics. Beginning in version 0.20, the client library also can monitor some common [third-party packages](https://github.com/microsoft/node-diagnostic-channel/tree/master/src/diagnostic-channel-publishers#currently-supported-modules), like MongoDB, MySQL, and Redis. All events related to an incoming HTTP request are correlated for faster troubleshooting.
Before you begin, make sure that you have an Azure subscription, or [get a new o
### <a name="resource"></a> Set up an Application Insights resource 1. Sign in to the [Azure portal][portal].
-2. [Create an Application Insights resource](create-new-resource.md)
+1. Create an [Application Insights resource](create-new-resource.md).
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]

### <a name="sdk"></a> Set up the Node.js client library
-Include the SDK in your app, so it can gather data.
+Include the SDK in your app so that it can gather data.
1. Copy your resource's connection string from your new resource. Application Insights uses the connection string to map data to your Azure resource. Before the SDK can use your connection string, you must specify the connection string in an environment variable or in your code.
- :::image type="content" source="media/migrate-from-instrumentation-keys-to-connection-strings/migrate-from-instrumentation-keys-to-connection-strings.png" alt-text="Screenshot displaying Application Insights overview and connection string." lightbox="media/migrate-from-instrumentation-keys-to-connection-strings/migrate-from-instrumentation-keys-to-connection-strings.png":::
+ :::image type="content" source="media/migrate-from-instrumentation-keys-to-connection-strings/migrate-from-instrumentation-keys-to-connection-strings.png" alt-text="Screenshot that shows the Application Insights overview and connection string." lightbox="media/migrate-from-instrumentation-keys-to-connection-strings/migrate-from-instrumentation-keys-to-connection-strings.png":::
-2. Add the Node.js client library to your app's dependencies via package.json. From the root folder of your app, run:
+1. Add the Node.js client library to your app's dependencies via `package.json`. From the root folder of your app, run:
```bash
npm install applicationinsights --save
```

> [!NOTE]
- > If you are using TypeScript, do not install separate "typings" packages. This NPM package contains built-in typings.
+ > If you're using TypeScript, don't install separate "typings" packages. This NPM package contains built-in typings.
-3. Explicitly load the library in your code. Because the SDK injects instrumentation into many other libraries, load the library as early as possible, even before other `require` statements.
+1. Explicitly load the library in your code. Because the SDK injects instrumentation into many other libraries, load the library as early as possible, even before other `require` statements.
```javascript
let appInsights = require('applicationinsights');
```
-4. You also can provide a connection string via the environment variable, `APPLICATIONINSIGHTS_CONNECTION_STRING`, instead of passing it manually to `setup()` or `new appInsights.TelemetryClient()`. This practice lets you keep connection strings out of committed source code, and you can specify different connection strings for different environments. To manually configure, call `appInsights.setup('[your connection string]');`.
+1. You also can provide a connection string via the environment variable `APPLICATIONINSIGHTS_CONNECTION_STRING`, instead of passing it manually to `setup()` or `new appInsights.TelemetryClient()`. This practice lets you keep connection strings out of committed source code, and you can specify different connection strings for different environments. To manually configure, call `appInsights.setup('[your connection string]');`.
For more configuration options, see the following sections. You can try the SDK without sending telemetry by setting `appInsights.defaultClient.config.disableAppInsights = true`.
-5. Start automatically collecting and sending data by calling `appInsights.start();`.
+1. Start automatically collecting and sending data by calling `appInsights.start();`.
> [!NOTE]
-> As part of using Application Insights instrumentation, we collect and send diagnostic data to Microsoft. This data helps us run and improve Application Insights. You have the option to disable non-essential data collection. [Learn More](./statsbeat.md).
+> As part of using Application Insights instrumentation, we collect and send diagnostic data to Microsoft. This data helps us run and improve Application Insights. You have the option to disable non-essential data collection. [Learn more](./statsbeat.md).
### <a name="monitor"></a> Monitor your app
The SDK automatically gathers telemetry about the Node.js runtime and some commo
Then, in the [Azure portal][portal] go to the Application Insights resource that you created earlier. In the **Overview timeline**, look for your first few data points. To see more detailed data, select different components in the charts.
-To view the topology that is discovered for your app, you can use [Application map](app-map.md).
+To view the topology that's discovered for your app, you can use [Application Map](app-map.md).
#### No data
-Because the SDK batches data for submission, there might be a delay before items are displayed in the portal. If you don't see data in your resource, try some of the following fixes:
+Because the SDK batches data for submission, there might be a delay before items appear in the portal. If you don't see data in your resource, try some of the following fixes:
* Continue to use the application. Take more actions to generate more telemetry.
* Select **Refresh** in the portal resource view. Charts periodically refresh on their own, but manually refreshing forces them to refresh immediately.
Because the SDK batches data for submission, there might be a delay before items
* Use [Search](./diagnostic-search.md) to look for specific events.
* Check the [FAQ][FAQ].
-## Basic Usage
+## Basic usage
For out-of-the-box collection of HTTP requests, popular third-party library events, unhandled exceptions, and system metrics:
appInsights.setup("[your connection string]").start();
> [!NOTE]
> If the connection string is set in the environment variable `APPLICATIONINSIGHTS_CONNECTION_STRING`, `.setup()` can be called with no arguments. This makes it easy to use different connection strings for different environments.
-Load the Application Insights library, `require("applicationinsights")`, as early as possible in your scripts before loading other packages. This step is needed so that the Application Insights library can prepare later packages for tracking. If you encounter conflicts with other libraries doing similar preparation, try loading the Application Insights library afterwards.
+Load the Application Insights library `require("applicationinsights")` as early as possible in your scripts before you load other packages. This step is needed so that the Application Insights library can prepare later packages for tracking. If you encounter conflicts with other libraries doing similar preparation, try loading the Application Insights library afterwards.
-Because of the way JavaScript handles callbacks, more work is necessary to track a request across external dependencies and later callbacks. By default this extra tracking is enabled; disable it by calling `setAutoDependencyCorrelation(false)` as described in the [configuration](#sdk-configuration) section below.
+Because of the way JavaScript handles callbacks, more work is necessary to track a request across external dependencies and later callbacks. By default, this extra tracking is enabled. Disable it by calling `setAutoDependencyCorrelation(false)` as described in the [SDK configuration](#sdk-configuration) section.
-## Migrating from versions prior to 0.22
+## Migrate from versions prior to 0.22
There are breaking changes between releases prior to version 0.22 and after. These changes are designed to bring consistency with other Application Insights SDKs and allow future extensibility.
-In general, you can migrate with the following:
+In general, you can migrate with the following actions:
- Replace references to `appInsights.client` with `appInsights.defaultClient`.
-- Replace references to `appInsights.getClient()` with `new appInsights.TelemetryClient()`
+- Replace references to `appInsights.getClient()` with `new appInsights.TelemetryClient()`.
- Replace all arguments to `client.track*` methods with a single object containing named properties as arguments. See your IDE's built-in type hinting or [TelemetryTypes](https://github.com/Microsoft/ApplicationInsights-node.js/tree/develop/Declarations/Contracts/TelemetryTypes) for the expected object for each type of telemetry.
-If you access SDK configuration functions without chaining them to `appInsights.setup()`, you can now find these functions at `appInsights.Configurations` (for example, `appInsights.Configuration.setAutoCollectDependencies(true)`). Review the changes to the default configuration in the next section.
+If you access SDK configuration functions without chaining them to `appInsights.setup()`, you can now find these functions at `appInsights.Configurations`. An example is `appInsights.Configuration.setAutoCollectDependencies(true)`. Review the changes to the default configuration in the next section.
## SDK configuration
appInsights.setup("<connection_string>")
To fully correlate events in a service, be sure to set `.setAutoDependencyCorrelation(true)`. With this option set, the SDK can track context across asynchronous callbacks in Node.js.
-Review their descriptions in your IDE's built-in type hinting, or [applicationinsights.ts](https://github.com/microsoft/ApplicationInsights-node.js/blob/develop/applicationinsights.ts) for detailed information and optional secondary arguments.
+Review their descriptions in your IDE's built-in type hinting or [applicationinsights.ts](https://github.com/microsoft/ApplicationInsights-node.js/blob/develop/applicationinsights.ts) for detailed information and optional secondary arguments.
> [!NOTE]
-> By default `setAutoCollectConsole` is configured to *exclude* calls to `console.log` (and other console methods). Only calls to supported third-party loggers (for example, winston and bunyan) will be collected. You can change this behavior to include calls to `console` methods by using `setAutoCollectConsole(true, true)`.
+> By default, `setAutoCollectConsole` is configured to *exclude* calls to `console.log` and other console methods. Only calls to supported third-party loggers (for example, winston and bunyan) will be collected. You can change this behavior to include calls to `console` methods by using `setAutoCollectConsole(true, true)`.
### Sampling
-By default, the SDK will send all collected data to the Application Insights service. If you want to enable sampling to reduce the amount of data, set the `samplingPercentage` field on the `config` object of a client. Setting `samplingPercentage` to 100(the default) means all data will be sent and 0 means nothing will be sent.
+By default, the SDK will send all collected data to the Application Insights service. If you want to enable sampling to reduce the amount of data, set the `samplingPercentage` field on the `config` object of a client. Setting `samplingPercentage` to 100 (the default) means all data will be sent, and 0 means nothing will be sent.
If you're using automatic correlation, all data associated with a single request will be included or excluded as a unit.
appInsights.defaultClient.config.samplingPercentage = 33; // 33% of all telemetr
appInsights.start(); ```
-### Multiple roles for multi-components applications
+### Multiple roles for multi-component applications
-If your application consists of multiple components that you wish to instrument all with the same connection string and still see these components as separate units in the portal, as if they were using separate connection strings (for example, as separate nodes on the Application Map), you may need to manually configure the RoleName field to distinguish one component's telemetry from other components sending data to your Application Insights resource.
+In some scenarios, your application might consist of multiple components that you want to instrument all with the same connection string. You want to still see these components as separate units in the portal, as if they were using separate connection strings. An example is separate nodes on Application Map. You need to manually configure the `RoleName` field to distinguish one component's telemetry from other components that send data to your Application Insights resource.
-Use the following to set the RoleName field:
+Use the following code to set the `RoleName` field:
```javascript const appInsights = require("applicationinsights");
appInsights.defaultClient.context.tags[appInsights.defaultClient.context.keys.cl
appInsights.start(); ```
-### Automatic web snippet injection (Preview)
+### Automatic web snippet injection (preview)
-Automatic web snippet injection allows you to enable [Application Insights Usage Experiences](usage-overview.md) and Browser Diagnostic Experiences with a simple configuration. It provides an easier alternative to manually adding the JavaScript snippet or NPM package to your JavaScript web code. For node server with configuration, set `enableAutoWebSnippetInjection` to `true` or alternatively set environment variable `APPLICATIONINSIGHTS_WEB_SNIPPET_ENABLED = true`. Automatic web snippet injection is available in Application Insights Node.js SDK version 2.3.0 or greater. See [Application Insights Node.js GitHub Readme](https://github.com/microsoft/ApplicationInsights-node.js#automatic-web-snippet-injectionpreview) for more information.
+You can use automatic web snippet injection to enable [Application Insights usage experiences](usage-overview.md) and browser diagnostic experiences with a simple configuration. It's an easier alternative to manually adding the JavaScript snippet or npm package to your JavaScript web code.
+
+For node server with configuration, set `enableAutoWebSnippetInjection` to `true`. Alternatively, set the environment variable as `APPLICATIONINSIGHTS_WEB_SNIPPET_ENABLED = true`. Automatic web snippet injection is available in Application Insights Node.js SDK version 2.3.0 or greater. For more information, see [Application Insights Node.js GitHub Readme](https://github.com/microsoft/ApplicationInsights-node.js#automatic-web-snippet-injectionpreview).
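As a hedged sketch of the configuration-based option, the property named in this section could be set on the default client's `config` object, following the SDK's general configuration pattern described later in this article:

```javascript
// Sketch only: the property name comes from this section; placing it on
// defaultClient.config follows the SDK's advanced configuration pattern.
const appInsights = require("applicationinsights");

appInsights.setup("<connection_string>");
appInsights.defaultClient.config.enableAutoWebSnippetInjection = true; // preview feature
appInsights.start();
```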
### Automatic third-party instrumentation
-In order to track context across asynchronous calls, some changes are required in third party libraries such as MongoDB and Redis. By default, Application Insights will use [`diagnostic-channel-publishers`](https://github.com/Microsoft/node-diagnostic-channel/tree/master/src/diagnostic-channel-publishers) to monkey-patch some of these libraries. This feature can be disabled by setting the `APPLICATION_INSIGHTS_NO_DIAGNOSTIC_CHANNEL` environment variable.
+To track context across asynchronous calls, some changes are required in third-party libraries, such as MongoDB and Redis. By default, Application Insights will use [`diagnostic-channel-publishers`](https://github.com/Microsoft/node-diagnostic-channel/tree/master/src/diagnostic-channel-publishers) to monkey-patch some of these libraries. This feature can be disabled by setting the `APPLICATION_INSIGHTS_NO_DIAGNOSTIC_CHANNEL` environment variable.
> [!NOTE]
-> By setting that environment variable, events may no longer be correctly associated with the right operation.
+> By setting that environment variable, events might not be correctly associated with the right operation.
- Individual monkey-patches can be disabled by setting the `APPLICATION_INSIGHTS_NO_PATCH_MODULES` environment variable to a comma separated list of packages to disable (for example, `APPLICATION_INSIGHTS_NO_PATCH_MODULES=console,redis`) to avoid patching the `console` and `redis` packages.
+ Individual monkey patches can be disabled by setting the `APPLICATION_INSIGHTS_NO_PATCH_MODULES` environment variable to a comma-separated list of packages to disable. For example, use `APPLICATION_INSIGHTS_NO_PATCH_MODULES=console,redis` to avoid patching the `console` and `redis` packages.
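As an illustration, the same environment variables can also be assigned from code, assuming they're set before the SDK is loaded so the patching logic sees them:

```javascript
// Sketch: assumes setting process.env before require() is early enough for
// the SDK to read these values when it applies (or skips) the patches.
process.env.APPLICATION_INSIGHTS_NO_PATCH_MODULES = "console,redis"; // skip console and redis
const appInsights = require("applicationinsights");

appInsights.setup("<connection_string>").start();
```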
-Currently there are nine packages that are instrumented: `bunyan`,`console`,`mongodb`,`mongodb-core`,`mysql`,`redis`,`winston`,`pg`, and `pg-pool`. Visit the [diagnostic-channel-publishers' README](https://github.com/Microsoft/node-diagnostic-channel/blob/master/src/diagnostic-channel-publishers/README.md) for information about exactly which version of these packages are patched.
+Currently, nine packages are instrumented: `bunyan`,`console`,`mongodb`,`mongodb-core`,`mysql`,`redis`,`winston`,`pg`, and `pg-pool`. For information about exactly which version of these packages are patched, see the [diagnostic-channel-publishers' README](https://github.com/Microsoft/node-diagnostic-channel/blob/master/src/diagnostic-channel-publishers/README.md).
-The `bunyan`, `winston`, and `console` patches will generate Application Insights trace events based on whether `setAutoCollectConsole` is enabled. The rest will generate Application Insights Dependency events based on whether `setAutoCollectDependencies` is enabled.
+The `bunyan`, `winston`, and `console` patches will generate Application Insights trace events based on whether `setAutoCollectConsole` is enabled. The rest will generate Application Insights dependency events based on whether `setAutoCollectDependencies` is enabled.
-### Live Metrics
+### Live metrics
-To enable sending Live Metrics from your app to Azure, use `setSendLiveMetrics(true)`. Filtering of live metrics in the portal is currently not supported.
+To enable sending live metrics from your app to Azure, use `setSendLiveMetrics(true)`. Currently, filtering of live metrics in the portal isn't supported.
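A minimal sketch of enabling live metrics as part of the setup chain:

```javascript
// Sketch: opt in to live metrics before start() is called.
const appInsights = require("applicationinsights");

appInsights.setup("<connection_string>")
    .setSendLiveMetrics(true)
    .start();
```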
### Extended metrics

> [!NOTE]
-> The ability to send extended native metrics was added in version 1.4.0
+> The ability to send extended native metrics was added in version 1.4.0.
To enable sending extended native metrics from your app to Azure, install the separate native metrics package. The SDK will automatically load when it's installed and start collecting Node.js native metrics.
Currently, the native metrics package performs autocollection of garbage collect
- **Garbage collection**: The amount of CPU time spent on each type of garbage collection, and how many occurrences of each type.
- **Event loop**: How many ticks occurred and how much CPU time was spent in total.
-- **Heap vs non-heap**: How much of your app's memory usage is in the heap or non-heap.
+- **Heap vs. non-heap**: How much of your app's memory usage is in the heap or non-heap.
-### Distributed Tracing modes
+### Distributed tracing modes
-By default, the SDK will send headers understood by other applications/services instrumented with an Application Insights SDK. You can enable sending/receiving of [W3C Trace Context](https://github.com/w3c/trace-context) headers in addition to the existing AI headers, so you won't break correlation with any of your existing legacy services. Enabling W3C headers will allow your app to correlate with other services not instrumented with Application Insights, but do adopt this W3C standard.
+By default, the SDK will send headers understood by other applications or services instrumented with an Application Insights SDK. You can enable sending and receiving of [W3C Trace Context](https://github.com/w3c/trace-context) headers in addition to the existing AI headers. In this way, you won't break correlation with any of your existing legacy services. Enabling W3C headers will allow your app to correlate with other services not instrumented with Application Insights but that do adopt this W3C standard.
```Javascript const appInsights = require("applicationinsights");
appInsights.defaultClient.commonProperties = {
Use the following code to manually track HTTP GET requests:

> [!NOTE]
-> All requests are tracked by default. To disable automatic collection, call .setAutoCollectRequests(false) before calling start().
+> All requests are tracked by default. To disable automatic collection, call `.setAutoCollectRequests(false)` before calling `start()`.
```javascript
appInsights.defaultClient.trackRequest({name:"GET /customers", url:"http://myserver/customers", duration:309, resultCode:200, success:true});
```
-Alternatively you can track requests using `trackNodeHttpRequest` method:
+Alternatively, you can track requests by using the `trackNodeHttpRequest` method:
```javascript var server = http.createServer((req, res) => {
server.on("listening", () => {
### Flush
-By default, telemetry is buffered for 15 seconds before it's sent to the ingestion server. If your application has a short lifespan, such as a CLI tool, it might be necessary to manually flush your buffered telemetry when application terminates, `appInsights.defaultClient.flush()`.
+By default, telemetry is buffered for 15 seconds before it's sent to the ingestion server. If your application has a short lifespan, such as a CLI tool, it might be necessary to manually flush your buffered telemetry when the application terminates by using `appInsights.defaultClient.flush()`.
-If the SDK detects that your application is crashing, it will call flush for you, `appInsights.defaultClient.flush({ isAppCrashing: true })`. With the flush option `isAppCrashing`, your application is assumed to be in an abnormal state, not suitable for sending telemetry. Instead, the SDK will save all buffered telemetry to [persistent storage](./data-retention-privacy.md#nodejs) and let your application terminate. When your application starts again, it will try to send any telemetry that was saved to persistent storage.
+If the SDK detects that your application is crashing, it will call flush for you by using `appInsights.defaultClient.flush({ isAppCrashing: true })`. With the flush option `isAppCrashing`, your application is assumed to be in an abnormal state and isn't suitable to send telemetry. Instead, the SDK will save all buffered telemetry to [persistent storage](./data-retention-privacy.md#nodejs) and let your application terminate. When your application starts again, it will try to send any telemetry that was saved to persistent storage.
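For a short-lived process such as a CLI tool, a minimal sketch of flushing before exit might look like this (the event name is only an example):

```javascript
// Sketch for a short-lived process: flush buffered telemetry before exiting.
const appInsights = require("applicationinsights");

appInsights.setup("<connection_string>").start();
appInsights.defaultClient.trackEvent({ name: "cli-run-completed" }); // example event

appInsights.defaultClient.flush(); // send anything still buffered
```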
### Preprocess data with telemetry processors
-You can process and filter collected data before it's sent for retention using *Telemetry Processors*. Telemetry processors are called one by one in the order they were added before the telemetry item is sent to the cloud.
+You can process and filter collected data before it's sent for retention by using *telemetry processors*. Telemetry processors are called one by one in the order they were added before the telemetry item is sent to the cloud.
```javascript
public addTelemetryProcessor(telemetryProcessor: (envelope: Contracts.Envelope, context: { http.RequestOptions, http.ClientRequest, http.ClientResponse, correlationContext }) => boolean)
```
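For example, here's a sketch of a processor that tags every outgoing item and drops a noisy request. The envelope field paths and tag key are assumptions based on the Application Insights envelope schema:

```javascript
// Sketch: returning false from a processor drops the telemetry item.
const appInsights = require("applicationinsights");
appInsights.setup("<connection_string>").start();

appInsights.defaultClient.addTelemetryProcessor((envelope, context) => {
    envelope.tags["ai.cloud.role"] = "my-api";               // assumed tag key
    if (envelope.data.baseType === "RequestData" &&
        envelope.data.baseData.name === "GET /health") {     // assumed field paths
        return false;                                        // drop health-check requests
    }
    return true;                                             // keep everything else
});
```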
-If a telemetry processor returns false, that telemetry item won't be sent.
+If a telemetry processor returns `false`, that telemetry item won't be sent.
-All telemetry processors receive the telemetry data and its envelope to inspect and modify. They also receive a context object. The contents of this object is defined by the `contextObjects` parameter when calling a track method for manually tracked telemetry. For automatically collected telemetry, this object is filled with available request information and the persistent request content as provided by `appInsights.getCorrelationContext()` (if automatic dependency correlation is enabled).
+All telemetry processors receive the telemetry data and its envelope to inspect and modify. They also receive a context object. The contents of this object are defined by the `contextObjects` parameter when calling a track method for manually tracked telemetry. For automatically collected telemetry, this object is filled with available request information and the persistent request content as provided by `appInsights.getCorrelationContext()` (if automatic dependency correlation is enabled).
The TypeScript type for a telemetry processor is:
otherClient.trackEvent({name: "my custom event"});
## Advanced configuration options
-The client object contains a `config` property with many optional settings for advanced scenarios. These can be set as follows:
+The client object contains a `config` property with many optional settings for advanced scenarios. To set them, use:
```javascript
client.config.PROPERTYNAME = VALUE;
```
These properties are client specific, so you can configure `appInsights.defaultC
| connectionString | An identifier for your Application Insights resource. |
| endpointUrl | The ingestion endpoint to send telemetry payloads to. |
| quickPulseHost | The Live Metrics Stream host to send live metrics telemetry to. |
-| proxyHttpUrl | A proxy server for SDK HTTP traffic (Optional, Default pulled from `http_proxy` environment variable). |
-| proxyHttpsUrl | A proxy server for SDK HTTPS traffic (Optional, Default pulled from `https_proxy` environment variable). |
-| httpAgent | An http.Agent to use for SDK HTTP traffic (Optional, Default undefined). |
-| httpsAgent | An https.Agent to use for SDK HTTPS traffic (Optional, Default undefined). |
-| maxBatchSize | The maximum number of telemetry items to include in a payload to the ingestion endpoint (Default `250`). |
-| maxBatchIntervalMs | The maximum amount of time to wait to for a payload to reach maxBatchSize (Default `15000`). |
-| disableAppInsights | A flag indicating if telemetry transmission is disabled (Default `false`). |
-| samplingPercentage | The percentage of telemetry items tracked that should be transmitted (Default `100`). |
-| correlationIdRetryIntervalMs | The time to wait before retrying to retrieve the ID for cross-component correlation (Default `30000`). |
-| correlationHeaderExcludedDomains| A list of domains to exclude from cross-component correlation header injection (Default See [Config.ts](https://github.com/Microsoft/ApplicationInsights-node.js/blob/develop/Library/Config.ts)).|
+| proxyHttpUrl | A proxy server for SDK HTTP traffic. (Optional. Default is pulled from `http_proxy` environment variable.) |
+| proxyHttpsUrl | A proxy server for SDK HTTPS traffic. (Optional. Default is pulled from `https_proxy` environment variable.) |
+| httpAgent | An http.Agent to use for SDK HTTP traffic. (Optional. Default is undefined.) |
+| httpsAgent | An https.Agent to use for SDK HTTPS traffic. (Optional. Default is undefined.) |
+| maxBatchSize | The maximum number of telemetry items to include in a payload to the ingestion endpoint. (Default is `250`.) |
+| maxBatchIntervalMs | The maximum amount of time to wait for a payload to reach maxBatchSize. (Default is `15000`.) |
+| disableAppInsights | A flag indicating if telemetry transmission is disabled. (Default is `false`.) |
+| samplingPercentage | The percentage of telemetry items tracked that should be transmitted. (Default is `100`.) |
+| correlationIdRetryIntervalMs | The time to wait before retrying to retrieve the ID for cross-component correlation. (Default is `30000`.) |
+| correlationHeaderExcludedDomains| A list of domains to exclude from cross-component correlation header injection. (For the default, see [Config.ts](https://github.com/Microsoft/ApplicationInsights-node.js/blob/develop/Library/Config.ts).)|
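As an illustration, a sketch that applies a few of the settings from the preceding table (the proxy URL and batch values are placeholders):

```javascript
// Sketch: set properties from the table on the default client's config
// after setup() and before start().
const appInsights = require("applicationinsights");
appInsights.setup("<connection_string>");

appInsights.defaultClient.config.proxyHttpsUrl = "http://proxy.contoso.local:8080"; // placeholder
appInsights.defaultClient.config.maxBatchSize = 100;
appInsights.defaultClient.config.maxBatchIntervalMs = 5000;

appInsights.start();
```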
## Next steps
azure-monitor Transaction Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/transaction-diagnostics.md
Title: Azure Application Insights Transaction Diagnostics | Microsoft Docs
-description: Application Insights end-to-end transaction diagnostics
+ Title: Application Insights transaction diagnostics | Microsoft Docs
+description: This article explains Application Insights end-to-end transaction diagnostics.
Last updated 01/19/2018
The unified diagnostics experience automatically correlates server-side telemetry from across all your Application Insights monitored components into a single view. It doesn't matter if you have multiple resources. Application Insights detects the underlying relationship and allows you to easily diagnose the application component, dependency, or exception that caused a transaction slowdown or failure.
-## What is a Component?
+## What is a component?
-Components are independently deployable parts of your distributed/microservices application. Developers and operations teams have code-level visibility or access to telemetry generated by these application components.
+Components are independently deployable parts of your distributed or microservice application. Developers and operations teams have code-level visibility or access to telemetry generated by these application components.
-* Components are different from "observed" external dependencies such as SQL, Event Hubs etc. which your team/organization may not have access to (code or telemetry).
-* Components run on any number of server/role/container instances.
-* Components can be separate Application Insights instrumentation keys (even if subscriptions are different) or different roles reporting to a single Application Insights instrumentation key. The new experience shows details across all components, regardless of how they have been set up.
+* Components are different from "observed" external dependencies, such as SQL and event hubs, which your team or organization might not have access to (code or telemetry).
+* Components run on any number of server, role, or container instances.
+* Components can be separate Application Insights instrumentation keys, even if subscriptions are different. Components also can be different roles that report to a single Application Insights instrumentation key. The new experience shows details across all components, regardless of how they were set up.
> [!NOTE]
-> * **Missing the related item links?** All of the related telemetry are in the [top](#cross-component-transaction-chart) and [bottom](#all-telemetry-with-this-operation-id) sections of the left side.
+> Are you missing the related item links? All the related telemetry is on the left side in the [top](#cross-component-transaction-chart) and [bottom](#all-telemetry-with-this-operation-id) sections.
## Transaction diagnostics experience
-This view has four key parts: results list, a cross-component transaction chart, a time-sequence list of all telemetry related to this operation, and the details pane for any selected telemetry item on the left.
-![Key parts](media/transaction-diagnostics/4partsCrossComponent.png)
+This view has four key parts: a results list, a cross-component transaction chart, a time-sequence list of all telemetry related to this operation, and the details pane for any selected telemetry item on the left.
+
+![Screenshot that shows the four key parts of the view.](media/transaction-diagnostics/4partsCrossComponent.png)
## Cross-component transaction chart

This chart provides a timeline with horizontal bars for the duration of requests and dependencies across components. Any exceptions that are collected are also marked on the timeline.
-* The top row on this chart represents the entry point, the incoming request to the first component called in this transaction. The duration is the total time taken for the transaction to complete.
-* Any calls to external dependencies are simple non-collapsible rows, with icons representing the dependency type.
+* The top row on this chart represents the entry point. It's the incoming request to the first component called in this transaction. The duration is the total time taken for the transaction to complete.
+* Any calls to external dependencies are simple noncollapsible rows, with icons that represent the dependency type.
* Calls to other components are collapsible rows. Each row corresponds to a specific operation invoked at the component.
-* By default, the request, dependency, or exception that you selected is displayed on the right side.
-* Select any row to see its [details on the right](#details-of-the-selected-telemetry).
+* By default, the request, dependency, or exception that you selected appears on the right side.
+* Select any row to see its [details on the right](#details-of-the-selected-telemetry).
> [!NOTE]
-> Calls to other components have two rows: one row represents the outbound call (dependency) from the caller component, and the other row corresponds to the inbound request at the called component. The leading icon and distinct styling of the duration bars help differentiate between them.
+> Calls to other components have two rows. One row represents the outbound call (dependency) from the caller component. The other row corresponds to the inbound request at the called component. The leading icon and distinct styling of the duration bars help differentiate between them.
## All telemetry with this Operation ID
-This section shows flat list view in a time sequence of all the telemetry related to this transaction. It also shows the custom events, and traces that aren't displayed in the transaction chart. You can filter this list to telemetry generated by a specific component/call. You can select any telemetry item in this list to see corresponding [details on the right](#details-of-the-selected-telemetry).
+This section shows a flat list view in a time sequence of all the telemetry related to this transaction. It also shows the custom events and traces that aren't displayed in the transaction chart. You can filter this list to telemetry generated by a specific component or call. You can select any telemetry item in this list to see corresponding [details on the right](#details-of-the-selected-telemetry).
-![Time sequence of all telemetry](media/transaction-diagnostics/allTelemetryDrawerOpened.png)
+![Screenshot that shows the time sequence of all telemetry.](media/transaction-diagnostics/allTelemetryDrawerOpened.png)
## Details of the selected telemetry
-This collapsible pane shows the detail of any selected item from the transaction chart, or the list. "Show all" lists all of the standard attributes that are collected. Any custom attributes are separately listed below the standard set. Select the "..." below the stack trace window to get an option to copy the trace. "Open profiler traces" or "Open debug snapshot" shows code level diagnostics in corresponding detail panes.
+This collapsible pane shows the detail of any selected item from the transaction chart or the list. **Show all** lists all the standard attributes that are collected. Any custom attributes are listed separately under the standard set. Select the ellipsis button (...) under the **Call Stack** trace window to get an option to copy the trace. **Open profiler traces** and **Open debug snapshot** show code-level diagnostics in corresponding detail panes.
-![Exception detail](media/transaction-diagnostics/exceptiondetail.png)
+![Screenshot that shows exception details.](media/transaction-diagnostics/exceptiondetail.png)
## Search results
-This collapsible pane shows the other results that meet the filter criteria. Select any result to update the respective details the three sections listed above. We try to find samples that are most likely to have the details available from all components even if sampling is in effect in any of them. These are shown as "suggested" samples.
+This collapsible pane shows the other results that meet the filter criteria. Select any result to update the respective details of the preceding three sections. We try to find samples that are most likely to have the details available from all components, even if sampling is in effect in any of them. These samples are shown as suggestions.
-![Search results](media/transaction-diagnostics/searchResults.png)
+![Screenshot that shows search results.](media/transaction-diagnostics/searchResults.png)
-## Profiler and snapshot debugger
+## Profiler and Snapshot Debugger
-[Application Insights profiler](./profiler.md) or [snapshot debugger](snapshot-debugger.md) help with code-level diagnostics of performance and failure issues. With this experience, you can see profiler traces or snapshots from any component with a single selection.
+[Application Insights Profiler](./profiler.md) or [Snapshot Debugger](snapshot-debugger.md) help with code-level diagnostics of performance and failure issues. With this experience, you can see Profiler traces or snapshots from any component with a single selection.
-If you couldn't get Profiler working, contact **serviceprofilerhelp\@microsoft.com**
+If you can't get Profiler working, contact serviceprofilerhelp\@microsoft.com.
-If you couldn't get Snapshot Debugger working, contact **snapshothelp\@microsoft.com**
+If you can't get Snapshot Debugger working, contact snapshothelp\@microsoft.com.
-![Profiler Integration](media/transaction-diagnostics/profilerTraces.png)
+![Screenshot that shows Profiler integration.](media/transaction-diagnostics/profilerTraces.png)
## FAQ
-*I see a single component on the chart, and the others are only showing as external dependencies without any detail of what happened within those components.*
+This section provides answers to common questions.
+
+### Why do I see a single component on the chart and the other components only show as external dependencies without any details?
Potential reasons:

* Are the other components instrumented with Application Insights?
* Are they using the latest stable Application Insights SDK?
-* If these components are separate Application Insights resources, do you have required [access](resources-roles-access-control.md)
-If you do have access and the components are instrumented with the latest Application Insights SDKs, let us know via the top right feedback channel.
+* If these components are separate Application Insights resources, do you have required [access](resources-roles-access-control.md)?
+If you do have access and the components are instrumented with the latest Application Insights SDKs, let us know via the feedback channel in the upper-right corner.
+
+### I see duplicate rows for the dependencies. Is this behavior expected?
-*I see duplicate rows for the dependencies. Is this expected?*
+Currently, we're showing the outbound dependency call separate from the inbound request. Typically, the two calls look identical with only the duration value being different because of the network round trip. The leading icon and distinct styling of the duration bars help differentiate between them. Is this presentation of the data confusing? Give us your feedback!
-At this time, we're showing the outbound dependency call separate from the inbound request. Typically, the two calls look identical with only the duration value being different due to the network round trip. The leading icon and distinct styling of the duration bars help differentiate between them. Is this presentation of the data confusing? Give us your feedback!
+### What about clock skews across different component instances?
-*What about clock skews across different component instances?*
+Timelines are adjusted for clock skews in the transaction chart. You can see the exact timestamps in the details pane or by using Log Analytics.
-Timelines are adjusted for clock skews in the transaction chart. You can see the exact timestamps in the details pane or by using Analytics.
+### Why is the new experience missing most of the related items queries?
-*Why is the new experience missing most of the related items queries?*
+This behavior is by design. All the related items, across all components, are already available on the left side in the top and bottom sections. The new experience has two related items that the left side doesn't cover: all telemetry from five minutes before and after this event and the user timeline.
-This is by design. All of the related items, across all components, are already available on the left side (top and bottom sections). The new experience has two related items that the left side doesn't cover: all telemetry from five minutes before and after this event and the user timeline.
+### Is there a way to see fewer events per transaction when I use the Application Insights JavaScript SDK?
-*I see more events than expected in the transaction diagnostics experience when using the Application Insights JavaScript SDK. Is there a way to see fewer events per transaction?*
+The transaction diagnostics experience shows all telemetry in a [single operation](correlation.md#data-model-for-telemetry-correlation) that shares an [Operation ID](data-model-context.md#operation-id). By default, the Application Insights SDK for JavaScript creates a new operation for each unique page view. In a single-page application (SPA), only one page view event will be generated and a single Operation ID will be used for all telemetry generated. As a result, many events might be correlated to the same operation.
-The transaction diagnostics experience shows all telemetry in a [single operation](correlation.md#data-model-for-telemetry-correlation) that share an [Operation ID](data-model-context.md#operation-id). By default, the Application Insights SDK for JavaScript creates a new operation for each unique page view. In a Single Page Application (SPA), only one page view event will be generated and a single Operation ID will be used for all telemetry generated, this can result in many events being correlated to the same operation. In these scenarios, you can use Automatic Route Tracking to automatically create new operations for navigation in your single page app. You must turn on [enableAutoRouteTracking](javascript.md#single-page-applications) so a page view is generated every time the URL route is updated (logical page view occurs). If you want to manually refresh the Operation ID, you can do so by calling `appInsights.properties.context.telemetryTrace.traceID = Microsoft.ApplicationInsights.Telemetry.Util.generateW3CId()`. Manually triggering a PageView event will also reset the Operation ID.
+In these scenarios, you can use Automatic Route Tracking to automatically create new operations for navigation in your SPA. You must turn on [enableAutoRouteTracking](javascript.md#single-page-applications) so that a page view is generated every time the URL route is updated (logical page view occurs). If you want to manually refresh the Operation ID, call `appInsights.properties.context.telemetryTrace.traceID = Microsoft.ApplicationInsights.Telemetry.Util.generateW3CId()`. Manually triggering a PageView event also resets the Operation ID.
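A hedged sketch of that configuration, assuming the `@microsoft/applicationinsights-web` npm package:

```javascript
// Sketch: enableAutoRouteTracking makes the JavaScript SDK emit a new page
// view (and therefore a new Operation ID) whenever the SPA route changes.
import { ApplicationInsights } from "@microsoft/applicationinsights-web";

const appInsights = new ApplicationInsights({
    config: {
        connectionString: "<connection_string>",
        enableAutoRouteTracking: true
    }
});
appInsights.loadAppInsights();
```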
-*Why do transaction detail durations not add up to the top-request duration?*
+### Why do transaction detail durations not add up to the top-request duration?
-Time not explained in the gantt chart, is time that isn't covered by a tracked dependency.
-This can be due to either external calls that weren't instrumented (automatically or manually), or that the time taken was in process rather than because of an external call.
+Time not explained in the Gantt chart is time that isn't covered by a tracked dependency. This issue can occur because external calls weren't instrumented, either automatically or manually. It can also occur because the time taken was in process rather than because of an external call.
If all calls were instrumented, in process is the likely root cause for the time spent. A useful tool for diagnosing the process is the [Application Insights profiler](./profiler.md).
azure-monitor Tutorial App Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-app-dashboards.md
Title: Create custom dashboards in Azure Application Insights | Microsoft Docs
-description: Tutorial to create custom KPI dashboards using Azure Application Insights.
+ Title: Create custom dashboards in Application Insights | Microsoft Docs
+description: This tutorial shows you how to create custom KPI dashboards using Application Insights.
Last updated 09/30/2020
-# Create custom KPI dashboards using Azure Application Insights
+# Create custom KPI dashboards using Application Insights
-You can create multiple dashboards in the Azure portal that each include tiles visualizing data from multiple Azure resources across different resource groups and subscriptions. You can pin different charts and views from Azure Application Insights to create custom dashboards that provide you with complete picture of the health and performance of your application. This tutorial walks you through the creation of a custom dashboard that includes multiple types of data and visualizations from Azure Application Insights.
+You can create multiple dashboards in the Azure portal that include tiles visualizing data from multiple Azure resources across different resource groups and subscriptions. You can pin different charts and views from Application Insights to create custom dashboards that provide you with a complete picture of the health and performance of your application. This tutorial walks you through the creation of a custom dashboard that includes multiple types of data and visualizations from Application Insights.
You learn how to: > [!div class="checklist"]
-> * Create a custom dashboard in Azure
-> * Add a tile from the Tile Gallery
-> * Add standard metrics in Application Insights to the dashboard
-> * Add a custom metric chart Application Insights to the dashboard
-> * Add the results of a Logs (Analytics) query to the dashboard
+> * Create a custom dashboard in Azure.
+> * Add a tile from the **Tile Gallery**.
+> * Add standard metrics in Application Insights to the dashboard.
+> * Add a custom metric chart based on Application Insights to the dashboard.
+> * Add the results of a Log Analytics query to the dashboard.
## Prerequisites To complete this tutorial: -- Deploy a .NET application to Azure and [enable the Application Insights SDK](../app/asp-net.md).
+- Deploy a .NET application to Azure.
+- Enable the [Application Insights SDK](../app/asp-net.md).
> [!NOTE] > Required permissions for working with dashboards are discussed in the article on [understanding access control for dashboards](../../azure-portal/azure-portal-dashboard-share-access.md#understanding-access-control-for-dashboards). ## Sign in to Azure
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+Sign in to the [Azure portal](https://portal.azure.com).
## Create a new dashboard > [!WARNING]
-> If you move your Application Insights resource over to a different resource group or subscription, you will need to manually update the dashboard by removing the old tiles and pinning new tiles from the same Application Insights resource at new location.
+> If you move your Application Insights resource over to a different resource group or subscription, you'll need to manually update the dashboard by removing the old tiles and pinning new tiles from the same Application Insights resource at the new location.
-A single dashboard can contain resources from multiple applications, resource groups, and subscriptions. Start the tutorial by creating a new dashboard for your application.
+A single dashboard can contain resources from multiple applications, resource groups, and subscriptions. Start the tutorial by creating a new dashboard for your application.
-1. In the menu dropdown on the left in Azure portal, select **Dashboard**.
+1. In the menu dropdown on the left in the Azure portal, select **Dashboard**.
- ![Azure Portal menu dropdown](media/tutorial-app-dashboards/dashboard-from-menu.png)
+ ![Screenshot that shows the Azure portal menu dropdown.](media/tutorial-app-dashboards/dashboard-from-menu.png)
-2. On the dashboard pane, select **New dashboard** then **Blank dashboard**.
+1. On the **Dashboard** pane, select **New dashboard** > **Blank dashboard**.
- ![New dashboard](media/tutorial-app-dashboards/new-dashboard.png)
+ ![Screenshot that shows the Dashboard pane.](media/tutorial-app-dashboards/new-dashboard.png)
-3. Type a name for the dashboard.
-4. Have a look at the **Tile Gallery** for a variety of tiles that you can add to your dashboard. In addition to adding tiles from the gallery, you can pin charts and other views directly from Application Insights to the dashboard.
-5. Locate the **Markdown** tile and drag it on to your dashboard. This tile allows you to add text formatted in markdown, which is ideal for adding descriptive text to your dashboard. To learn more, see [Use a markdown tile on Azure dashboards to show custom content](../../azure-portal/azure-portal-markdown-tile.md).
-6. Add text to the tile's properties and resize it on the dashboard canvas.
+1. Enter a name for the dashboard.
+1. Look at the **Tile Gallery** for various tiles that you can add to your dashboard. You can also pin charts and other views directly from Application Insights to the dashboard.
+1. Locate the **Markdown** tile and drag it on to your dashboard. With this tile, you can add text formatted in Markdown, which is ideal for adding descriptive text to your dashboard. To learn more, see [Use a Markdown tile on Azure dashboards to show custom content](../../azure-portal/azure-portal-markdown-tile.md).
+1. Add text to the tile's properties and resize it on the dashboard canvas.
- [![Edit markdown tile](media/tutorial-app-dashboards/markdown.png)](media/tutorial-app-dashboards/markdown.png#lightbox)
+ [![Screenshot that shows the Edit Markdown tile.](media/tutorial-app-dashboards/markdown.png)](media/tutorial-app-dashboards/markdown.png#lightbox)
-7. Select **Done customizing** at the top of the screen to exit tile customization mode.
+1. Select **Done customizing** at the top of the screen to exit tile customization mode.
## Add health overview
-A dashboard with static text isn't very interesting, so now add a tile from Application Insights to show information about your application. You can add Application Insights tiles from the Tile Gallery, or you can pin them directly from Application Insights screens. This allows you to configure charts and views that you're already familiar with before pinning them to your dashboard. Start by adding the standard health overview for your application. This requires no configuration and allows minimal customization in the dashboard.
+A dashboard with static text isn't very interesting, so add a tile from Application Insights to show information about your application. You can add Application Insights tiles from the **Tile Gallery**. You can also pin them directly from Application Insights screens. In this way, you can configure charts and views that you're already familiar with before you pin them to your dashboard.
+Start by adding the standard health overview for your application. This tile requires no configuration and allows minimal customization in the dashboard.
1. Select your **Application Insights** resource on the home screen.
-2. In the **Overview** pane, select the pin icon ![pin icon](media/tutorial-app-dashboards/pushpin.png) to add the tile to a dashboard.
-3. In the "Pin to dashboard" tab, select which dashboard to add the tile to or create a new one.
-
-3. In the top right, a notification will appear that your tile was pinned to your dashboard. Select **Pinned to dashboard** in the notification to return to your dashboard or use the dashboard pane.
-4. That tile is now added to your dashboard. Select **Edit** to change the positioning of the tile. Select and drag it into position and then select **Done customizing**. Your dashboard now has a tile with some useful information.
+1. On the **Overview** pane, select the pin icon ![pin icon](media/tutorial-app-dashboards/pushpin.png) to add the tile to a dashboard.
+1. On the **Pin to dashboard** tab, select which dashboard to add the tile to or create a new one.
+1. At the top right, a notification appears that your tile was pinned to your dashboard. Select **Pinned to dashboard** in the notification to return to your dashboard or use the **Dashboard** pane.
+1. Select **Edit** to change the positioning of the tile you added to your dashboard. Select and drag it into position and then select **Done customizing**. Your dashboard now has a tile with some useful information.
- [![Dashboard in edit mode](media/tutorial-app-dashboards/dashboard-edit-mode.png)](media/tutorial-app-dashboards/dashboard-edit-mode.png#lightbox)
+ [![Screenshot that shows the dashboard in edit mode.](media/tutorial-app-dashboards/dashboard-edit-mode.png)](media/tutorial-app-dashboards/dashboard-edit-mode.png#lightbox)
## Add custom metric chart
-The **Metrics** panel allows you to graph a metric collected by Application Insights over time with optional filters and grouping. Like everything else in Application Insights, you can add this chart to the dashboard. This does require you to do a little customization first.
+You can use the **Metrics** panel to graph a metric collected by Application Insights over time with optional filters and grouping. Like everything else in Application Insights, you can add this chart to the dashboard. This step does require you to do a little customization first.
-1. Select your **Application Insights** resource in the home screen.
-1. Select **Metrics**.
-2. An empty chart has already been created, and you're prompted to add a metric. Add a metric to the chart and optionally add a filter and a grouping. The example below shows the number of server requests grouped by success. This gives a running view of successful and unsuccessful requests.
+1. Select your **Application Insights** resource on the home screen.
+1. Select **Metrics**.
+1. An empty chart appears, and you're prompted to add a metric. Add a metric to the chart and optionally add a filter and a grouping. The following example shows the number of server requests grouped by success. This chart gives a running view of successful and unsuccessful requests.
- [![Add metric](media/tutorial-app-dashboards/metrics.png)](media/tutorial-app-dashboards/metrics.png#lightbox)
+ [![Screenshot that shows adding a metric.](media/tutorial-app-dashboards/metrics.png)](media/tutorial-app-dashboards/metrics.png#lightbox)
-4. Select **Pin to dashboard** on the right.
+1. Select **Pin to dashboard** on the right.
-3. In the top right, a notification will appear that your tile was pinned to your dashboard. Select **Pinned to dashboard** in the notification to return to your dashboard or use the dashboard tab.
+1. In the top right, a notification appears that your tile was pinned to your dashboard. Select **Pinned to dashboard** in the notification to return to your dashboard or use the dashboard tab.
-4. That tile is now added to your dashboard. Select **Edit** to change the positioning of the tile. Select and drag the tile into position and then select **Done customizing**.
+1. That tile is now added to your dashboard. Select **Edit** to change the positioning of the tile. Select and drag the tile into position and then select **Done customizing**.
-## Add Logs query
+## Add a logs query
-Azure Application Insights Logs provides a rich query language that allows you to analyze all of the data collected Application Insights. Just like charts and other views, you can add the output of a logs query to your dashboard.
+Application Insights Logs provides a rich query language that you can use to analyze all the data collected by Application Insights. Like with charts and other views, you can add the output of a logs query to your dashboard.
1. Select your **Application Insights** resource in the home screen.
-2. Select **Logs** on the left under "monitoring" to open the Logs tab.
-3. Type the following query, which returns the top 10 most requested pages and their request count:
+1. On the left under **Monitoring**, select **Logs** to open the **Logs** tab.
+1. Enter the following query, which returns the top 10 most requested pages and their request count:
``` Kusto requests
Azure Application Insights Logs provides a rich query language that allows you t
| take 10 ```
-4. Select **Run** to validate the results of the query.
-5. Select the pin icon ![Pin icon](media/tutorial-app-dashboards/pushpin.png) and select the name of your dashboard.
+1. Select **Run** to validate the results of the query.
+1. Select the pin icon ![Pin icon](media/tutorial-app-dashboards/pushpin.png) and then select the name of your dashboard.
-5. Before you go back to the dashboard, add another query, but render it as a chart so you see the different ways to visualize a logs query in a dashboard. Start with the following query that summarizes the top 10 operations with the most exceptions.
+1. Before you go back to the dashboard, add another query, but render it as a chart. Now you'll see the different ways to visualize a logs query in a dashboard. Start with the following query that summarizes the top 10 operations with the most exceptions:
``` Kusto exceptions
Azure Application Insights Logs provides a rich query language that allows you t
| take 10 ```
-6. Select **Chart** and then change to a **Doughnut** to visualize the output.
+1. Select **Chart** and then select **Doughnut** to visualize the output.
- [![Doughnut chart with above query](media/tutorial-app-dashboards/logs-doughnut.png)](media/tutorial-app-dashboards/logs-doughnut.png#lightbox)
+ [![Screenshot that shows the doughnut chart with the preceding query.](media/tutorial-app-dashboards/logs-doughnut.png)](media/tutorial-app-dashboards/logs-doughnut.png#lightbox)
-6. Select the pin icon ![Pin icon](media/tutorial-app-dashboards/pushpin.png) on the top right to pin the chart to your dashboard and then return to your dashboard.
-7. The results of the queries are now added to your dashboard in the format that you selected. Select and drag each into position and then select **Done customizing**.
-8. Select the pencil icon ![Pencil icon](media/tutorial-app-dashboards/pencil.png) on each title to give them a descriptive title.
+1. Select the pin icon ![Pin icon](media/tutorial-app-dashboards/pushpin.png) at the top right to pin the chart to your dashboard. Then return to your dashboard.
+1. The results of the queries are added to your dashboard in the format that you selected. Select and drag each result into position. Then select **Done customizing**.
+1. Select the pencil icon ![Pencil icon](media/tutorial-app-dashboards/pencil.png) on each title and use it to make the titles descriptive.
## Share dashboard 1. At the top of the dashboard, select **Share** to publish your changes.
-2. You can optionally define specific users who should have access to the dashboard. For more information, see [Share Azure dashboards by using Azure role-based access control](../../azure-portal/azure-portal-dashboard-share-access.md).
-3. Select **Publish**.
+1. You can optionally define specific users who should have access to the dashboard. For more information, see [Share Azure dashboards by using Azure role-based access control](../../azure-portal/azure-portal-dashboard-share-access.md).
+1. Select **Publish**.
## Next steps
-Now that you've learned how to create custom dashboards, have a look at the rest of the Application Insights documentation including a case study.
+In this tutorial, you learned how to create custom dashboards. Now look at the rest of the Application Insights documentation, which also includes a case study.
> [!div class="nextstepaction"] > [Deep diagnostics](../app/devops.md)
azure-monitor Autoscale Multiprofile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-multiprofile.md
+
+ Title: Autoscale with multiple profiles
+description: "Using multiple and recurring profiles in autoscale"
+++++ Last updated : 09/30/2022+++
+# Customer intent: As a user or dev ops administrator, I want to understand how to set up autoscale with more than one profile so I can scale my resources with more flexibility.
++
+# Autoscale with multiple profiles
+
+Scaling your resources for a particular day of the week, or a specific date and time can reduce your costs while still providing the capacity you need when you need it.
+
+You can use multiple profiles in autoscale to scale in different ways at different times. If, for example, your business isn't active on the weekend, create a recurring profile to scale in your resources on Saturdays and Sundays. If Black Friday is a busy day, create a profile to automatically scale out your resources on Black Friday.
+
+This article explains the different profiles in autoscale and how to use them.
+
+You can have one or more profiles in your autoscale setting.
+
+There are three types of profile:
+
+* The default profile. This profile is created automatically, isn't dependent on a schedule, and can't be deleted. The default profile is used when no other profile matches the current date and time.
+* Recurring profiles. A recurring profile is valid for a specific time range and repeats for selected days of the week.
+* Fixed date and time profiles. A profile that is valid for a time range on a specific date.
+
+Each time the autoscale service runs, the profiles are evaluated in the following order:
+
+1. Fixed date profiles
+1. Recurring profiles
+1. Default profile
+
+If a profile's date and time settings match the current time, autoscale will apply that profile's rules and capacity limits. Only the first applicable profile is used.
+
+The example below shows an autoscale setting with a default profile and recurring profile.
++
+In the above example, the recurring profile is used on Monday after 6 AM. If the instance count is less than 3, autoscale scales to the new minimum of three. Autoscale continues to use this profile and scales based on CPU% until Monday at 6 PM. At all other times, scaling is done according to the default profile, based on the number of requests. After 6 PM on Monday, autoscale switches to the default profile. If, for example, the number of instances at that time is 12, autoscale scales in to 10, which is the maximum allowed for the default profile.
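To check which profiles are defined in an autoscale setting, including the default profiles that are created automatically, you can list them with the Azure CLI. This is a minimal sketch; the resource group and autoscale setting names are taken from the examples later in this article and are placeholders for your own values.

```azurecli
# List every profile in an autoscale setting, including the auto-created default profiles.
# The resource group and autoscale setting names are placeholders.
az monitor autoscale profile list \
  --resource-group rg-vmss1 \
  --autoscale-name VMSS1-Autoscale-607 \
  --output table
```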
+
+## Multiple profiles using templates, CLI, and PowerShell
+
+When creating multiple profiles using templates, the CLI, and PowerShell, follow the guidelines below.
+
+## [ARM templates](#tab/templates)
+
+Follow the rules below when using ARM templates to create autoscale settings with multiple profiles:
+
+See the autoscale section of the [ARM template resource definition](https://learn.microsoft.com/azure/templates/microsoft.insights/autoscalesettings) for a full template reference.
+
+* Create a default profile for each recurring profile. If you have two recurring profiles, create two matching default profiles.
+* The default profile must contain a `recurrence` section that is the same as the recurring profile, with the `hours` and `minutes` elements set for the end time of the recurring profile. If you don't specify a recurrence with a start time for the default profile, the last recurrence rule will remain in effect.
+* The `name` element for the default profile is an object with the following format: `"name": "{\"name\":\"Auto created default scale condition\",\"for\":\"Recurring profile name\"}"` where the recurring profile name is the value of the `name` element for the recurring profile. If the name isn't specified correctly, the default profile will appear as another recurring profile.
+* The rules above don't apply to non-recurring scheduled profiles.
+
+## Add a recurring profile using ARM templates
+
+The example below shows how to create two recurring profiles: one for weekends, between 06:00 and 19:00 on Saturday and Sunday, and a second for Mondays, between 04:00 and 15:00. Note the two default profiles, one for each recurring profile.
+
+Use the following command to deploy the template:
+`az deployment group create --name VMSS1-Autoscale-607 --resource-group rg-vmss1 --template-file VMSS1-autoscale.json`
+where *VMSS1-autoscale.json* is the file containing the JSON object below.
+
+``` JSON
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "resources": [
+ {
+ "type": "Microsoft.Insights/autoscaleSettings",
+ "apiVersion": "2015-04-01",
+ "name": "VMSS1-Autoscale-607",
+ "location": "eastus",
+ "properties": {
+
+ "name": "VMSS1-Autoscale-607",
+ "enabled": true,
+ "targetResourceUri": "/subscriptions/abc123456-987-f6e5-d43c-9a8d8e7f6541/resourceGroups/rg-vmss1/providers/Microsoft.Compute/virtualMachineScaleSets/VMSS1",
+ "profiles": [
+ {
+ "name": "Monday profile",
+ "capacity": {
+ "minimum": "3",
+ "maximum": "20",
+ "default": "3"
+ },
+ "rules": [
+ {
+ "scaleAction": {
+ "direction": "Increase",
+ "type": "ChangeCount",
+ "value": "1",
+ "cooldown": "PT5M"
+ },
+ "metricTrigger": {
+ "metricName": "Inbound Flows",
+ "metricNamespace": "microsoft.compute/virtualmachinescalesets",
+ "metricResourceUri": "/subscriptions/abc123456-987-f6e5-d43c-9a8d8e7f6541/resourceGroups/rg-vmss1/providers/Microsoft.Compute/virtualMachineScaleSets/VMSS1",
+ "operator": "GreaterThan",
+ "statistic": "Average",
+ "threshold": 100,
+ "timeAggregation": "Average",
+ "timeGrain": "PT1M",
+ "timeWindow": "PT10M",
+ "Dimensions": [],
+ "dividePerInstance": true
+ }
+ },
+ {
+ "scaleAction": {
+ "direction": "Decrease",
+ "type": "ChangeCount",
+ "value": "1",
+ "cooldown": "PT5M"
+ },
+ "metricTrigger": {
+ "metricName": "Inbound Flows",
+ "metricNamespace": "microsoft.compute/virtualmachinescalesets",
+ "metricResourceUri": "/subscriptions/abc123456-987-f6e5-d43c-9a8d8e7f6541/resourceGroups/rg-vmss1/providers/Microsoft.Compute/virtualMachineScaleSets/VMSS1",
+ "operator": "LessThan",
+ "statistic": "Average",
+ "threshold": 60,
+ "timeAggregation": "Average",
+ "timeGrain": "PT1M",
+ "timeWindow": "PT10M",
+ "Dimensions": [],
+ "dividePerInstance": true
+ }
+ }
+ ],
+ "recurrence": {
+ "frequency": "Week",
+ "schedule": {
+ "timeZone": "E. Europe Standard Time",
+ "days": [
+ "Monday"
+ ],
+ "hours": [
+ 4
+ ],
+ "minutes": [
+ 0
+ ]
+ }
+ }
+ },
+ {
+ "name": "Weekend profile",
+ "capacity": {
+ "minimum": "1",
+ "maximum": "3",
+ "default": "1"
+ },
+ "rules": [],
+ "recurrence": {
+ "frequency": "Week",
+ "schedule": {
+ "timeZone": "E. Europe Standard Time",
+ "days": [
+ "Saturday",
+ "Sunday"
+ ],
+ "hours": [
+ 6
+ ],
+ "minutes": [
+ 0
+ ]
+ }
+ }
+ },
+ {
+ "name": "{\"name\":\"Auto created default scale condition\",\"for\":\"Weekend profile\"}",
+ "capacity": {
+ "minimum": "2",
+ "maximum": "10",
+ "default": "2"
+ },
+ "recurrence": {
+ "frequency": "Week",
+ "schedule": {
+ "timeZone": "E. Europe Standard Time",
+ "days": [
+ "Saturday",
+ "Sunday"
+ ],
+ "hours": [
+ 19
+ ],
+ "minutes": [
+ 0
+ ]
+ }
+ },
+ "rules": [
+ {
+ "scaleAction": {
+ "direction": "Increase",
+ "type": "ChangeCount",
+ "value": "1",
+ "cooldown": "PT3M"
+ },
+ "metricTrigger": {
+ "metricName": "Percentage CPU",
+ "metricNamespace": "microsoft.compute/virtualmachinescalesets",
+ "metricResourceUri": "/subscriptions/abc123456-987-f6e5-d43c-9a8d8e7f6541/resourceGroups/rg-vmss1/providers/Microsoft.Compute/virtualMachineScaleSets/VMSS1",
+ "operator": "GreaterThan",
+ "statistic": "Average",
+ "threshold": 50,
+ "timeAggregation": "Average",
+ "timeGrain": "PT1M",
+ "timeWindow": "PT1M",
+ "Dimensions": [],
+ "dividePerInstance": false
+ }
+ },
+ {
+ "scaleAction": {
+ "direction": "Decrease",
+ "type": "ChangeCount",
+ "value": "1",
+ "cooldown": "PT3M"
+ },
+ "metricTrigger": {
+ "metricName": "Percentage CPU",
+ "metricNamespace": "microsoft.compute/virtualmachinescalesets",
+ "metricResourceUri": "/subscriptions/abc123456-987-f6e5-d43c-9a8d8e7f6541/resourceGroups/rg-vmss1/providers/Microsoft.Compute/virtualMachineScaleSets/VMSS1",
+ "operator": "LessThan",
+ "statistic": "Average",
+ "threshold": 39,
+ "timeAggregation": "Average",
+ "timeGrain": "PT1M",
+ "timeWindow": "PT3M",
+ "Dimensions": [],
+ "dividePerInstance": false
+ }
+ }
+ ]
+ },
+ {
+ "name": "{\"name\":\"Auto created default scale condition\",\"for\":\"Monday profile\"}",
+ "capacity": {
+ "minimum": "2",
+ "maximum": "10",
+ "default": "2"
+ },
+ "recurrence": {
+ "frequency": "Week",
+ "schedule": {
+ "timeZone": "E. Europe Standard Time",
+ "days": [
+ "Monday"
+ ],
+ "hours": [
+ 15
+ ],
+ "minutes": [
+ 0
+ ]
+ }
+ },
+ "rules": [
+ {
+ "scaleAction": {
+ "direction": "Increase",
+ "type": "ChangeCount",
+ "value": "1",
+ "cooldown": "PT3M"
+ },
+ "metricTrigger": {
+ "metricName": "Percentage CPU",
+ "metricNamespace": "microsoft.compute/virtualmachinescalesets",
+ "metricResourceUri": "/subscriptions/abc123456-987-f6e5-d43c-9a8d8e7f6541/resourceGroups/rg-vmss1/providers/Microsoft.Compute/virtualMachineScaleSets/VMSS1",
+ "operator": "GreaterThan",
+ "statistic": "Average",
+ "threshold": 50,
+ "timeAggregation": "Average",
+ "timeGrain": "PT1M",
+ "timeWindow": "PT1M",
+ "Dimensions": [],
+ "dividePerInstance": false
+ }
+ },
+ {
+ "scaleAction": {
+ "direction": "Decrease",
+ "type": "ChangeCount",
+ "value": "1",
+ "cooldown": "PT3M"
+ },
+ "metricTrigger": {
+ "metricName": "Percentage CPU",
+ "metricNamespace": "microsoft.compute/virtualmachinescalesets",
+ "metricResourceUri": "/subscriptions/abc123456-987-f6e5-d43c-9a8d8e7f6541/resourceGroups/rg-vmss1/providers/Microsoft.Compute/virtualMachineScaleSets/VMSS1",
+ "operator": "LessThan",
+ "statistic": "Average",
+ "threshold": 39,
+ "timeAggregation": "Average",
+ "timeGrain": "PT1M",
+ "timeWindow": "PT3M",
+ "Dimensions": [],
+ "dividePerInstance": false
+ }
+ }
+ ]
+ }
+ ],
+ "notifications": [],
+ "targetResourceLocation": "eastus"
+ }
+
+ }
+ ]
+}
+
+```
+
+## [CLI](#tab/cli)
+
+The CLI can be used to create multiple profiles in your autoscale settings.
+
+See the [Autoscale CLI reference](https://learn.microsoft.com/cli/azure/monitor/autoscale?view=azure-cli-latest) for the full set of autoscale CLI commands.
+
+The following steps show how to create a recurring autoscale profile using the CLI.
+
+1. Create the recurring profile using `az monitor autoscale profile create`. Specify the `--start` and `--end` times and the `--recurrence`.
+1. Create a scale-out rule using `az monitor autoscale rule create` with `--scale out`.
+1. Create a scale-in rule using `az monitor autoscale rule create` with `--scale in`.
+
+## Add a recurring profile using CLI
+
+The example below shows how to add a recurring autoscale profile, recurring on Thursdays between 06:00 and 22:50.
+
+``` azurecli
+
+az monitor autoscale profile create --autoscale-name VMSS1-Autoscale-607 --count 2 --max-count 10 --min-count 1 --name Thursdays --recurrence week thu --resource-group rg-vmss1 --start 06:00 --end 22:50 --timezone "Pacific Standard Time"
+
+az monitor autoscale rule create -g rg-vmss1 --autoscale-name VMSS1-Autoscale-607 --scale in 1 --condition "Percentage CPU < 25 avg 5m" --profile-name Thursdays
+
+az monitor autoscale rule create -g rg-vmss1 --autoscale-name VMSS1-Autoscale-607 --scale out 2 --condition "Percentage CPU > 50 avg 5m" --profile-name Thursdays
+```
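Fixed date and time profiles can be created with the same command by specifying a start and end date instead of a recurrence. The following is a hedged sketch: the profile name, dates, and capacity values are illustrative rather than taken from this article, and `--copy-rules` copies the scaling rules from the *Thursdays* profile created above.

```azurecli
# Sketch: create a fixed-date profile for a single busy day.
# The dates, capacity values, and profile name are illustrative placeholders.
az monitor autoscale profile create \
  --autoscale-name VMSS1-Autoscale-607 \
  --resource-group rg-vmss1 \
  --name LaunchDay \
  --min-count 4 --count 4 --max-count 20 \
  --start 2022-11-25 --end 2022-11-26 \
  --timezone "Pacific Standard Time" \
  --copy-rules Thursdays
```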
+
+> [!NOTE]
+> The JSON for your autoscale default profile is modified by adding a recurring profile.
+> The `name` element of the default profile is changed to an object in the format: `"name": "{\"name\":\"Auto created default scale condition\",\"for\":\"recurring profile\"}"` where *recurring profile* is the profile name of your recurring profile.
+> The default profile also has a recurrence clause added to it that starts at the end time specified for the new recurring profile.
+> A new default profile is created for each recurring profile.
+
+## Updating the default profile when you have recurring profiles
+
+After you add recurring profiles, your default profile is renamed. If you have multiple recurring profiles and want to update your default profile, the update must be made to each default profile corresponding to a recurring profile.
+
+For example, if you have two recurring profiles called *Wednesdays* and *Thursdays*, you need two commands to add a rule to the default profile.
+
+```azurecli
+az monitor autoscale rule create -g rg-vmss1 --autoscale-name VMSS1-Autoscale-607 --scale out 8 --condition "Percentage CPU > 52 avg 5m" --profile-name "{\"name\": \"Auto created default scale condition\", \"for\": \"Wednesdays\"}"
+
+az monitor autoscale rule create -g rg-vmss1 --autoscale-name VMSS1-Autoscale-607 --scale out 8 --condition "Percentage CPU > 52 avg 5m" --profile-name "{\"name\": \"Auto created default scale condition\", \"for\": \"Thursdays\"}"
+```
+
+## [PowerShell](#tab/powershell)
+
+PowerShell can be used to create multiple profiles in your autoscale settings.
+
+See the [PowerShell Az.Monitor Reference](https://learn.microsoft.com/powershell/module/az.monitor/#monitor) for the full set of autoscale PowerShell commands.
+
+The following steps show how to create an autoscale profile using PowerShell.
+
+1. Create rules using `New-AzAutoscaleRule`.
+1. Create profiles with `New-AzAutoscaleProfile`, using the rules from the previous step.
+1. Use `Add-AzAutoscaleSetting` to apply the profiles to your autoscale setting.
+
+## Add a recurring profile using PowerShell
+
+The example below shows how to create a default profile and a recurring autoscale profile, recurring on Wednesdays and Fridays between 07:00 and 19:00.
+The default profile uses the `CpuIn` and `CpuOut` rules. The recurring profile uses the `HTTPRuleIn` and `HTTPRuleOut` rules.
+
+```azurepowershell
+$ResourceGroup="rg-001"
+$TargetResourceId="/subscriptions/abc123456-987-f6e5-d43c-9a8d8e7f6541/resourcegroups/rg-001/providers/Microsoft.Web/serverFarms/ScaleableAppServicePlan"
+
+$ScaleSettingName="MultipleProfiles-001"
+
+$CpuOut = New-AzAutoscaleRule -MetricName "CpuPercentage" -MetricResourceId $TargetResourceId -Operator GreaterThan -MetricStatistic Average -Threshold 50 -TimeGrain 00:01:00 -ScaleActionCooldown 00:05:00 -ScaleActionDirection Increase -ScaleActionScaleType ChangeCount -ScaleActionValue "1"
+
+$CpuIn = New-AzAutoscaleRule -MetricName "CpuPercentage" -MetricResourceId $TargetResourceId -Operator LessThan -MetricStatistic Average -Threshold 30 -TimeGrain 00:01:00 -ScaleActionCooldown 00:05:00 -ScaleActionDirection Decrease -ScaleActionScaleType ChangeCount -ScaleActionValue "1"
+
+$DefaultProfile = New-AzAutoscaleProfile -DefaultCapacity "1" -MaximumCapacity "10" -MinimumCapacity "1" -Rule $CpuOut,$CpuIn -Name '{"name":"Default scale condition","for":"WednesdaysFridays"}' -RecurrenceFrequency week -ScheduleDay "Wednesday","Friday" -ScheduleHour 19 -ScheduleMinute 00 -ScheduleTimeZone "Pacific Standard Time"
+
+$HTTPRuleIn = New-AzAutoscaleRule -MetricName "HttpQueueLength" -MetricResourceId $TargetResourceId -Operator LessThan -MetricStatistic Average -Threshold 3 -TimeGrain 00:01:00 -ScaleActionCooldown 00:05:00 -ScaleActionDirection Decrease -ScaleActionScaleType ChangeCount -ScaleActionValue "1"
+
+$HTTPRuleOut = New-AzAutoscaleRule -MetricName "HttpQueueLength" -MetricResourceId $TargetResourceId -Operator GreaterThan -MetricStatistic Average -Threshold 10 -TimeGrain 00:01:00 -ScaleActionCooldown 00:05:00 -ScaleActionDirection Increase -ScaleActionScaleType ChangeCount -ScaleActionValue "1"
+
+$RecurringProfile=New-AzAutoscaleProfile -Name WednesdaysFridays -DefaultCapacity 2 -MaximumCapacity 12 -MinimumCapacity 2 -RecurrenceFrequency week -ScheduleDay "Wednesday","Friday" -ScheduleHour 7 -ScheduleMinute 00 -ScheduleTimeZone "Pacific Standard Time" -Rule $HTTPRuleOut, $HTTPRuleIn
+
+Add-AzAutoscaleSetting -Location "West Central US" -name $ScaleSettingName -ResourceGroup $ResourceGroup -TargetResourceId $TargetResourceId -AutoscaleProfile $DefaultProfile, $RecurringProfile
+```
+
+> [!NOTE]
+> Each recurring profile must have a corresponding default profile.
+> The `-Name` parameter of the default profile is an object in the format: `'{"name":"Default scale condition","for":"recurring profile"}'` where *recurring profile* is the profile name of the recurring profile.
+> The default profile also has recurrence parameters that match the recurring profile, but it starts at the time you want the recurring profile to end.
+> Create a distinct default profile for each recurring profile.
+
+## Updating the default profile when you have recurring profiles
+
+If you have multiple recurring profiles and want to change your default profile, the change must be made to each default profile corresponding to a recurring profile.
+
+For example, if you have two recurring profiles called *SundayProfile* and *ThursdayProfile*, you need two `New-AzAutoscaleProfile` commands to change the default profiles.
+
+```azurepowershell
++
+$DefaultProfileSundayProfile = New-AzAutoscaleProfile -DefaultCapacity "1" -MaximumCapacity "10" -MinimumCapacity "1" -Rule $CpuOut,$CpuIn -Name '{"name":"Default scale condition","for":"SundayProfile"}' -RecurrenceFrequency week -ScheduleDay "Sunday" -ScheduleHour 19 -ScheduleMinute 00 -ScheduleTimeZone "Pacific Standard Time"
++
+$DefaultProfileThursdayProfile = New-AzAutoscaleProfile -DefaultCapacity "1" -MaximumCapacity "10" -MinimumCapacity "1" -Rule $CpuOut,$CpuIn -Name '{"name":"Default scale condition","for":"ThursdayProfile"}' -RecurrenceFrequency week -ScheduleDay "Thursday" -ScheduleHour 19 -ScheduleMinute 00 -ScheduleTimeZone "Pacific Standard Time"
+```
+++
+## Next steps
+
+* [Autoscale CLI reference](https://learn.microsoft.com/cli/azure/monitor/autoscale?view=azure-cli-latest)
+* [ARM template resource definition](https://learn.microsoft.com/azure/templates/microsoft.insights/autoscalesettings)
+* [PowerShell Az.Monitor Reference](https://learn.microsoft.com/powershell/module/az.monitor/#monitor)
+* [REST API reference. Autoscale Settings](https://learn.microsoft.com/rest/api/monitor/autoscale-settings).
+* [Tutorial: Automatically scale a virtual machine scale set with an Azure template](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-template)
+* [Tutorial: Automatically scale a virtual machine scale set with the Azure CLI](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-cli)
+* [Tutorial: Automatically scale a virtual machine scale set with Azure PowerShell](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-powershell)
azure-monitor Autoscale Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-overview.md
The full list of configurable fields and descriptions is available in the [Autos
For code examples, see
-* [Advanced Autoscale configuration using Resource Manager templates for virtual machine scale sets](autoscale-virtual-machine-scale-sets.md)
-* [Autoscale REST API](/rest/api/monitor/autoscalesettings)
-
+* [Tutorial: Automatically scale a virtual machine scale set with an Azure template](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-template)
+* [Tutorial: Automatically scale a virtual machine scale set with the Azure CLI](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-cli)
+* [Tutorial: Automatically scale a virtual machine scale set with Azure PowerShell](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-powershell)
## Horizontal vs vertical scaling Autoscale scales horizontally, which is an increase or decrease of the number of resource instances. For example, in a virtual machine scale set, scaling out means adding more virtual machines. Scaling in means removing virtual machines. Horizontal scaling is flexible in a cloud situation because it allows you to run a large number of VMs to handle load.
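For example, you can see what a horizontal scale action does by changing the instance count of a scale set manually with the Azure CLI; autoscale performs the same kind of change automatically based on your rules. The resource names in this sketch are placeholders.

```azurecli
# Manually scale a virtual machine scale set out to five instances.
# Autoscale automates this kind of instance-count change based on rules and schedules.
az vmss scale \
  --resource-group <resource-group> \
  --name <scale-set-name> \
  --new-capacity 5
```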
The following services are supported by autoscale:
To learn more about autoscale, see the following resources: * [Azure Monitor autoscale common metrics](autoscale-common-metrics.md)
-* [Scale virtual machine scale sets](../../virtual-machine-scale-sets/tutorial-autoscale-powershell.md?toc=%2fazure%2fazure-monitor%2ftoc.json)
-* [Autoscale using Resource Manager templates for virtual machine scale sets](../../virtual-machine-scale-sets/tutorial-autoscale-powershell.md?toc=%2fazure%2fazure-monitor%2ftoc.json)
-* [Best practices for Azure Monitor autoscale](autoscale-best-practices.md)
* [Use autoscale actions to send email and webhook alert notifications](autoscale-webhook-email.md)
-* [Autoscale REST API](/rest/api/monitor/autoscalesettings)
-* [Troubleshooting virtual machine scale sets and autoscale](../../virtual-machine-scale-sets/virtual-machine-scale-sets-troubleshoot.md)
-* [Troubleshooting Azure Monitor autoscale](./autoscale-troubleshoot.md)
+* [Tutorial: Automatically scale a virtual machine scale set with an Azure template](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-template)
+* [Tutorial: Automatically scale a virtual machine scale set with the Azure CLI](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-cli)
+* [Tutorial: Automatically scale a virtual machine scale set with Azure PowerShell](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-powershell)
+* [Autoscale CLI reference](https://learn.microsoft.com/cli/azure/monitor/autoscale?view=azure-cli-latest)
+* [ARM template resource definition](https://learn.microsoft.com/azure/templates/microsoft.insights/autoscalesettings)
+* [PowerShell Az.Monitor Reference](https://learn.microsoft.com/powershell/module/az.monitor/#monitor)
+* [REST API reference. Autoscale Settings](https://learn.microsoft.com/rest/api/monitor/autoscale-settings).
azure-monitor Autoscale Virtual Machine Scale Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-virtual-machine-scale-sets.md
- Title: Advanced Autoscale using Azure Virtual Machines
-description: Uses Resource Manager and VM scale sets with multiple rules and profiles, which send email and call webhook URLs with scale actions.
----- Previously updated : 06/25/2020-----
-# Advanced autoscale configuration using Resource Manager templates for VM Scale Sets
-You can scale-in and scale-out in Virtual Machine Scale Sets based on performance metric thresholds, by a recurring schedule, or by a particular date. You can also configure email and webhook notifications for scale actions. This walkthrough shows an example of configuring all these objects using a Resource Manager template on a VM Scale Set.
-
-> [!NOTE]
-> While this walkthrough explains the steps for VM Scale Sets, the same information applies to autoscaling [Cloud Services](https://azure.microsoft.com/services/cloud-services/), [App Service - Web Apps](https://azure.microsoft.com/services/app-service/web/), and [API Management services](../../api-management/api-management-key-concepts.md)
-> For a simple scale in/out setting on a VM Scale Set based on a simple performance metric such as CPU, refer to the [Linux](../../virtual-machine-scale-sets/tutorial-autoscale-cli.md) and [Windows](../../virtual-machine-scale-sets/tutorial-autoscale-powershell.md) documents
->
->
-
-## Walkthrough
-In this walkthrough, we use [Azure Resource Explorer](https://resources.azure.com/) to configure and update the autoscale setting for a scale set. Azure Resource Explorer is an easy way to manage Azure resources via Resource Manager templates. If you are new to Azure Resource Explorer tool, read [this introduction](https://azure.microsoft.com/blog/azure-resource-explorer-a-new-tool-to-discover-the-azure-api/).
-
-1. Deploy a new scale set with a basic autoscale setting. This article uses the one from the Azure QuickStart Gallery, which has a Windows scale set with a basic autoscale template. Linux scale sets work the same way.
-2. After the scale set is created, navigate to the scale set resource from Azure Resource Explorer. You see the following under Microsoft.Insights node.
-
- ![Azure Explorer](media/autoscale-virtual-machine-scale-sets/azure_explorer_navigate.png)
-
- The template execution has created a default autoscale setting with the name **'autoscalewad'**. On the right-hand side, you can view the full definition of this autoscale setting. In this case, the default autoscale setting comes with a CPU% based scale-out and scale-in rule.
-
-3. You can now add more profiles and rules based on the schedule or specific requirements. We create an autoscale setting with three profiles. To understand profiles and rules in autoscale, review [Autoscale Best Practices](autoscale-best-practices.md).
-
- | Profiles & Rules | Description |
- | | |
- | **Profile** |**Performance/metric based** |
- | Rule |Service Bus Queue Message Count > x |
- | Rule |Service Bus Queue Message Count < y |
- | Rule |CPU% > n |
- | Rule |CPU% < p |
- | **Profile** |**Weekday morning hours (no rules)** |
- | **Profile** |**Product Launch day (no rules)** |
-
-4. Here is a hypothetical scaling scenario that we use for this walk-through.
-
- * **Load based** - I'd like to scale out or in based on the load on my application hosted on my scale set.*
- * **Message Queue size** - I use a Service Bus Queue for the incoming messages to my application. I use the queue's message count and CPU% and configure a default profile to trigger a scale action if either of message count or CPU hits the threshold.\*
- * **Time of week and day** - I want a weekly recurring 'time of the day' based profile called 'Weekday Morning Hours'. Based on historical data, I know it is better to have certain number of VM instances to handle my application's load during this time.\*
- * **Special Dates** - I added a 'Product Launch Day' profile. I plan ahead for specific dates so my application is ready to handle the load due marketing announcements and when we put a new product in the application.\*
- * *The last two profiles can also have other performance metric based rules within them. In this case, I decided not to have one and instead to rely on the default performance metric based rules. Rules are optional for the recurring and date-based profiles.*
-
- Autoscale engine's prioritization of the profiles and rules is also captured in the [autoscaling best practices](autoscale-best-practices.md) article.
- For a list of common metrics for autoscale, refer [Common metrics for Autoscale](autoscale-common-metrics.md)
-
-5. Make sure you are on the **Read/Write** mode in Resource Explorer
-
- ![Autoscalewad, default autoscale setting](media/autoscale-virtual-machine-scale-sets/autoscalewad.png)
-
-6. Click Edit. **Replace** the 'profiles' element in autoscale setting with the following configuration:
-
- ![Screenshot shows the profiles element.](media/autoscale-virtual-machine-scale-sets/profiles.png)
-
- ```
- {
- "name": "Perf_Based_Scale",
- "capacity": {
- "minimum": "2",
- "maximum": "12",
- "default": "2"
- },
- "rules": [
- {
- "metricTrigger": {
- "metricName": "MessageCount",
- "metricNamespace": "",
- "metricResourceUri": "/subscriptions/s1/resourceGroups/rg1/providers/Microsoft.ServiceBus/namespaces/mySB/queues/myqueue",
- "timeGrain": "PT5M",
- "statistic": "Average",
- "timeWindow": "PT5M",
- "timeAggregation": "Average",
- "operator": "GreaterThan",
- "threshold": 10
- },
- "scaleAction": {
- "direction": "Increase",
- "type": "ChangeCount",
- "value": "1",
- "cooldown": "PT5M"
- }
- },
- {
- "metricTrigger": {
- "metricName": "MessageCount",
- "metricNamespace": "",
- "metricResourceUri": "/subscriptions/s1/resourceGroups/rg1/providers/Microsoft.ServiceBus/namespaces/mySB/queues/myqueue",
- "timeGrain": "PT5M",
- "statistic": "Average",
- "timeWindow": "PT5M",
- "timeAggregation": "Average",
- "operator": "LessThan",
- "threshold": 3
- },
- "scaleAction": {
- "direction": "Decrease",
- "type": "ChangeCount",
- "value": "1",
- "cooldown": "PT5M"
- }
- },
- {
- "metricTrigger": {
- "metricName": "Percentage CPU",
- "metricNamespace": "",
- "metricResourceUri": "/subscriptions/s1/resourceGroups/rg1/providers/Microsoft.Compute/virtualMachineScaleSets/<this_vmss_name>",
- "timeGrain": "PT5M",
- "statistic": "Average",
- "timeWindow": "PT30M",
- "timeAggregation": "Average",
- "operator": "GreaterThan",
- "threshold": 85
- },
- "scaleAction": {
- "direction": "Increase",
- "type": "ChangeCount",
- "value": "1",
- "cooldown": "PT5M"
- }
- },
- {
- "metricTrigger": {
- "metricName": "Percentage CPU",
- "metricNamespace": "",
- "metricResourceUri": "/subscriptions/s1/resourceGroups/rg1/providers/Microsoft.Compute/virtualMachineScaleSets/<this_vmss_name>",
- "timeGrain": "PT5M",
- "statistic": "Average",
- "timeWindow": "PT30M",
- "timeAggregation": "Average",
- "operator": "LessThan",
- "threshold": 60
- },
- "scaleAction": {
- "direction": "Decrease",
- "type": "ChangeCount",
- "value": "1",
- "cooldown": "PT5M"
- }
- }
- ]
- },
- {
- "name": "Weekday_Morning_Hours_Scale",
- "capacity": {
- "minimum": "4",
- "maximum": "12",
- "default": "4"
- },
- "rules": [],
- "recurrence": {
- "frequency": "Week",
- "schedule": {
- "timeZone": "Pacific Standard Time",
- "days": [
- "Monday",
- "Tuesday",
- "Wednesday",
- "Thursday",
- "Friday"
- ],
- "hours": [
- 6
- ],
- "minutes": [
- 0
- ]
- }
- }
- },
- {
- "name": "Product_Launch_Day",
- "capacity": {
- "minimum": "6",
- "maximum": "20",
- "default": "6"
- },
- "rules": [],
- "fixedDate": {
- "timeZone": "Pacific Standard Time",
- "start": "2016-06-20T00:06:00Z",
- "end": "2016-06-21T23:59:00Z"
- }
- }
- ```
- For supported fields and their values, see [Autoscale REST API documentation](/rest/api/monitor/autoscalesettings). Now your autoscale setting contains the three profiles explained previously.
-
-7. Finally, look at the Autoscale **notification** section. Autoscale notifications allow you to do three things when a scale-out or in action is successfully triggered.
- - Notify the admin and co-admins of your subscription
- - Email a set of users
- - Trigger a webhook call. When fired, this webhook sends metadata about the autoscaling condition and the scale set resource. To learn more about the payload of autoscale webhook, see [Configure Webhook & Email Notifications for Autoscale](autoscale-webhook-email.md).
-
- Add the following to the Autoscale setting replacing your **notification** element whose value is null
-
- ```
- "notifications": [
- {
- "operation": "Scale",
- "email": {
- "sendToSubscriptionAdministrator": true,
- "sendToSubscriptionCoAdministrators": false,
- "customEmails": [
- "user1@mycompany.com",
- "user2@mycompany.com"
- ]
- },
- "webhooks": [
- {
- "serviceUri": "https://foo.webhook.example.com?token=abcd1234",
- "properties": {
- "optional_key1": "optional_value1",
- "optional_key2": "optional_value2"
- }
- }
- ]
- }
- ]
-
- ```
-
- Hit **Put** button in Resource Explorer to update the autoscale setting.
-
-You have updated an autoscale setting on a VM Scale set to include multiple scale profiles and scale notifications.
-
-## Next Steps
-Use these links to learn more about autoscaling.
-
-[TroubleShoot Autoscale with Virtual Machine Scale Sets](../../virtual-machine-scale-sets/virtual-machine-scale-sets-troubleshoot.md)
-
-[Common Metrics for Autoscale](autoscale-common-metrics.md)
-
-[Best Practices for Azure Autoscale](autoscale-best-practices.md)
-
-[Manage Autoscale using PowerShell](../powershell-samples.md#create-and-manage-autoscale-settings)
-
-[Manage Autoscale using CLI](../cli-samples.md#autoscale)
-
-[Configure Webhook & Email Notifications for Autoscale](autoscale-webhook-email.md)
-
-[Microsoft.Insights/autoscalesettings](/azure/templates/microsoft.insights/autoscalesettings) template reference
azure-monitor Container Insights Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-cost.md
The following list is the eight metrics per container collected:
The following list is the cluster inventory data collected by default: -- KubePodInventory – 1 per minute per container
+- KubePodInventory – 1 per pod per minute
- KubeNodeInventory – 1 per node per minute - KubeServices – 1 per service per minute - ContainerInventory – 1 per container per minute
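If you want to verify these collection rates in your own workspace, you can run a quick Log Analytics query from the Azure CLI. This is a sketch only; it assumes the `log-analytics` CLI extension is installed, and the workspace GUID is a placeholder.

```azurecli
# Count KubePodInventory records per one-minute bin over the last 10 minutes.
# Expect roughly one record per pod per minute. The workspace GUID is a placeholder.
az monitor log-analytics query \
  --workspace 00000000-0000-0000-0000-000000000000 \
  --analytics-query "KubePodInventory | where TimeGenerated > ago(10m) | summarize Records = count() by bin(TimeGenerated, 1m)" \
  --output table
```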
azure-monitor Data Collection Transformations Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-transformations-structure.md
The following [Bitwise operators](/azure/data-explorer/kusto/query/binoperators)
- [isempty](/azure/data-explorer/kusto/query/isemptyfunction) - [isnotempty](/azure/data-explorer/kusto/query/isnotemptyfunction) - [parse_json](/azure/data-explorer/kusto/query/parsejsonfunction)
+- [replace](https://github.com/microsoft/Kusto-Query-Language/blob/master/doc/replacefunction.md)
- [split](/azure/data-explorer/kusto/query/splitfunction) - [strcat](/azure/data-explorer/kusto/query/strcatfunction) - [strcat_delim](/azure/data-explorer/kusto/query/strcat-delimfunction)
The following [Bitwise operators](/azure/data-explorer/kusto/query/binoperators)
- [isnotnull](/azure/data-explorer/kusto/query/isnotnullfunction) - [isnull](/azure/data-explorer/kusto/query/isnullfunction)
-### Identifier quoting
-Use [Identifier quoting](/azure/data-explorer/kusto/query/schema-entities/entity-names?q=identifier#identifier-quoting) as required.
+#### Special functions
+
+##### parse_cef_dictionary
+
+Given a string containing a CEF message, `parse_cef_dictionary` parses the Extension property of the message into a dynamic key/value object. Semicolon is a reserved character that should be replaced prior to passing the raw message into the method, as shown in the example below.
+
+```kusto
+| extend cefMessage=iff(cefMessage contains_cs ";", replace(";", " ", cefMessage), cefMessage)
+| extend parsedCefDictionaryMessage =parse_cef_dictionary(cefMessage)
+| extend parsecefDictionaryExtension = parsedCefDictionaryMessage["Extension"]
+| project TimeGenerated, cefMessage, parsecefDictionaryExtension
+```
+
+### Identifier quoting
+Use [Identifier quoting](/azure/data-explorer/kusto/query/schema-entities/entity-names?q=identifier#identifier-quoting) as required.
## Next steps
azure-monitor Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostic-settings.md
The following table provides unique requirements for each destination including
| Destination | Requirements | |:|:| | Log Analytics workspace | The workspace doesn't need to be in the same region as the resource being monitored.|
-| Storage account | Don't use an existing storage account that has other, non-monitoring data stored in it so that you can better control access to the data. If you're archiving the activity log and resource logs together, you might choose to use the same storage account to keep all monitoring data in a central location.<br><br>To send the data to immutable storage, set the immutable policy for the storage account as described in [Set and manage immutability policies for Azure Blob Storage](../../storage/blobs/immutable-policy-configure-version-scope.md). You must follow all steps in this linked article including enabling protected append blobs writes.<br><br>The storage account needs to be in the same region as the resource being monitored if the resource is regional.|
+| Storage account | To better control access to the data, we recommend that you don't use an existing storage account that has other, non-monitoring data stored in it. If you're archiving the activity log and resource logs together, you might choose to use the same storage account to keep all monitoring data in a central location.<br><br>To send the data to immutable storage, set the immutable policy for the storage account as described in [Set and manage immutability policies for Azure Blob Storage](../../storage/blobs/immutable-policy-configure-version-scope.md). You must follow all steps in this linked article including enabling protected append blobs writes.<br><br>The storage account needs to be in the same region as the resource being monitored if the resource is regional.|
| Event Hubs | The shared access policy for the namespace defines the permissions that the streaming mechanism has. Streaming to Event Hubs requires Manage, Send, and Listen permissions. To update the diagnostic setting to include streaming, you must have the ListKey permission on that Event Hubs authorization rule.<br><br>The event hub namespace needs to be in the same region as the resource being monitored if the resource is regional. <br><br> Diagnostic settings can't access Event Hubs resources when virtual networks are enabled. You must enable **Allow trusted Microsoft services** to bypass this firewall setting in Event Hubs so that the Azure Monitor diagnostic settings service is granted access to your Event Hubs resources.| | Partner integrations | The solutions vary by partner. Check the [Azure Monitor partner integrations documentation](../../partner-solutions/overview.md) for details.
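As an illustration of the storage account destination, the following Azure CLI sketch creates a diagnostic setting that archives resource logs to a storage account. The resource ID, storage account ID, and log category are placeholders; use the categories that your resource actually supports.

```azurecli
# Sketch: send resource logs for a key vault to a storage account in the same region.
# All resource IDs and the log category are placeholders.
az monitor diagnostic-settings create \
  --name archive-to-storage \
  --resource "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.KeyVault/vaults/<vault-name>" \
  --storage-account "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<storage-account>" \
  --logs '[{"category":"AuditEvent","enabled":true}]'
```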
azure-monitor Resource Manager Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-manager-diagnostic-settings.md
resource diagnosticSetting 'Microsoft.Insights/diagnosticSettings@2021-05-01-pre
} resource blob 'Microsoft.Storage/storageAccounts/blobServices@2021-09-01' existing = {
- name:storageAccountName
+ name:'default'
+ parent:storageAccount
} resource blobSetting 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = if (hasblob) {
resource blobSetting 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview'
} resource table 'Microsoft.Storage/storageAccounts/tableServices@2021-09-01' existing = {
- name:storageAccountName
+ name:'default'
+ parent:storageAccount
} resource tableSetting 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = if (hastable) {
resource tableSetting 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview'
} resource file 'Microsoft.Storage/storageAccounts/fileServices@2021-09-01' existing = {
- name:storageAccountName
+ name:'default'
+ parent:storageAccount
} resource fileSetting 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = if (hasfile) {
resource fileSetting 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview'
} resource queue 'Microsoft.Storage/storageAccounts/queueServices@2021-09-01' existing = {
- name:storageAccountName
+ name:'default'
+ parent:storageAccount
}
azure-netapp-files Configure Application Volume Group Sap Hana Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-application-volume-group-sap-hana-api.md
+
+ Title: Configure application volume groups for SAP HANA REST API | Microsoft Docs
+description: Setting up your application volume groups for the SAP HANA API requires special configurations.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ Last updated : 08/31/2022++
+# Configure application volume groups for the SAP HANA REST API
+
+Application volume group (AVG) enables you to deploy all volumes for a single HANA host in one atomic step. The Azure portal and the Azure Resource Manager template have implemented pre-checks and recommendations for deployment in areas including throughputs and volume naming conventions. As a REST API user, you don't have those checks and recommendations available.
+
+Without these checks, it's important to understand the requirements for running HANA on Azure NetApp Files and the basic architecture and workflows on which application volume groups are built.
+
+SAP HANA can be installed in a single-host (scale-up) or in a multiple-host (scale-out) configuration. The volumes required for each of the HANA nodes differ for the first HANA node (single-host) and for subsequent HANA hosts (multiple-host). Since an application volume group creates the volumes for a single HANA host, the number and type of volumes created differ for the first HANA host and all subsequent HANA hosts in a multiple-host setup.
+
+Application volume groups allow you to define volume size and throughput according to your specific requirements. To ensure you can customize to your specific needs, you must only use manual QoS capacity pools. According to the SAP HANA certification, only a subset of volume features can be used for the different volumes. Since enterprise applications such as SAP HANA require application-consistent data protection, it's _not_ recommended to configure automated snapshot policies for any of the volumes. Instead consider using specific data protection applications such as [AzAcSnap](azacsnap-introduction.md) or Commvault.
+
+## Rules and restrictions
+
+Using application volume groups requires understanding the rules and restrictions:
+* A single volume group is used to create the volumes for a single HANA host only.
+* In a HANA multiple-host setup (scale-out), you should start with the volume group for the first HANA host and continue host by host.
+* HANA requires different volume types for the first HANA host and for the additional hosts you add in a multiple-host setup.
+* Available volume types are: data, log, shared, log-backup, and data-backup.
+* The first node can have all five different volumes (one for each type).
+ * data, log and shared volumes must be provided
+ * log-backup and data-backup are optional, as you may choose to use a central share to store the backups or even use `backint` for the log-backup
+* All additional hosts in a multiple-host setup may only add one data and one log volume each.
+* For data, log and shared volumes, SAP HANA certification requires NFSv4.1 protocol.
+* Log-backup and data-backup volumes, if created optionally with the volume group of the first HANA host, may use the NFSv4.1 or NFSv3 protocol.
+* Each volume must have at least one export policy defined. To install SAP, root access must be enabled.
+* Neither Kerberos nor LDAP enablement is supported.
+* You should follow the naming convention outlined in the following table.
+
+The following table describes all the possible volume types for application volume groups for SAP HANA.
+
+| Volume type | Creation limits | Supported Protocol | Recommended naming | Data protection recommendation |
+| --- | --- | --- | --- | --- |
+| **SAP HANA data volume** | One data volume must be created for every HANA host. | NFSv4.1 (neither LDAP nor Kerberos is supported) | `<SID>-data-mnt<00001>`: <ol><li> `<SID>` is the SAP system ID </li><li> `<00001>` refers to the host number. For example, in a single-host configuration or for the first host in a multi-host configuration, the host number is 00001. The next host is 00002. </li></ol> | No initial data protection recommendation |
+| **SAP HANA log volume** | One log volume must be created for every HANA host. | NFSv4.1 (neither LDAP nor Kerberos is supported) | `<SID>-log-mnt<00001>`: <ol><li> `<SID>` is the SAP system ID </li><li> `<00001>` refers to the host number. For example, in a single-host configuration or for the first host in a multi-host configuration, the host number is 00001. The next host is 00002. </li></ol> | No initial data protection recommendation |
+| **SAP HANA shared volume** | One shared volume must be created for the first HANA host of a multiple-host setup, or for a single-host HANA installation. | NFSv4.1 (neither LDAP nor Kerberos is supported) | `<SID>-shared` where `<SID>` is the SAP system ID | No initial data protection recommendation |
+| **SAP HANA data backup volume** | An optional volume created only for the first HANA node. | NFSv4.1 (neither LDAP nor Kerberos is supported) | `<SID>-data-backup` where `<SID>` is the SAP system ID | No initial data protection recommendation |
+| **SAP HANA log backup volume** | An optional volume created only for the first HANA node. | NFSv4.1 (neither LDAP nor Kerberos is supported) | `<SID>-log-backup` where `<SID>` is the SAP system ID | No initial data protection recommendation |
+
+## Prepare your environment
+
+1. **Networking:** You need to decide on the networking architecture. To use Azure NetApp Files, a VNet needs to be created and within the vNet a delegated subnet where the ANF storage endpoints (IPs) will be placed. To ensure that the size of this subnet is large enough, see [Considerations about delegating a subnet to Azure NetApp Files](azure-netapp-files-delegate-subnet.md#considerations).
+ 1. Create a VNet.
+ 2. Create a virtual machine (VM) subnet and delegated subnet for ANF.
+1. **Storage Account and Capacity Pool:** A storage account is the entry point to consume Azure NetApp Files. At least one storage account needs to be created. Within a storage account, a capacity pool is the logical unit to create volumes. Application volume groups require a capacity pool with a manual QoS. It should be created with a size and service level that meets your HANA requirements.
+ >[!NOTE]
+ > A capacity pool can be resized at any time. For more information about changing a capacity pool, refer to [Manage a manual QoS capacity pool](manage-manual-qos-capacity-pool.md).
+ 1. Create a NetApp storage account.
+ 2. Create a manual QoS capacity pool.
+1. **Create AvSet and proximity placement group (PPG):** For production landscapes, you should create an AvSet that is manually pinned to a data center where Azure NetApp Files resources are available in proximity. The AvSet pinning ensures that VMs will not be moved on restart. The proximity placement group (PPG) needs to be assigned to the AvSet. With the help of application volume groups, the PPG can find the closest Azure NetApp Files hardware. For more information, see [Best practices about proximity placement groups](application-volume-group-considerations.md#best-practices-about-proximity-placement-groups).
+ 1. Create AvSet.
+ 2. Create PPG.
+ 3. Assign PPG to AvSet.
+1. **Manual steps - Request AvSet pinning**: AvSet pinning is required for long-term SAP HANA systems. The Microsoft capacity planning team ensures that the required VMs for SAP HANA and the Azure NetApp Files resources are available in proximity to each other, and that pinned VMs won't move on restart.
+ * Request pinning using [this form](https://aka.ms/HANAPINNING).
+1. **Create and start HANA DB VM:** Before you can create volumes using application volume groups, the PPG must be anchored. At least one VM must be created using the pinned AvSet. Once this VM is started, the PPG can be used to detect where the VM is running.
+ 1. Create and start the VM using the AvSet. (A consolidated Azure CLI sketch of these preparation steps follows this list.)
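+
+The following Azure CLI sketch consolidates the preparation steps above. It's a minimal outline rather than a definitive implementation: every resource name, address range, and size shown is a hypothetical placeholder that you should replace with values matching your environment.
+
+```bash
+# Hypothetical names, address ranges, and sizes - adjust to your environment.
+RG=SH9-rg
+LOC=westus
+
+# 1. Networking: VNet plus a subnet delegated to Azure NetApp Files
+az network vnet create --resource-group $RG --name SH9-vnet --location $LOC --address-prefixes 10.0.0.0/16
+az network vnet subnet create --resource-group $RG --vnet-name SH9-vnet --name SH9-anf-subnet \
+    --address-prefixes 10.0.1.0/26 --delegations "Microsoft.NetApp/volumes"
+
+# 2. NetApp storage account and manual QoS capacity pool
+az netappfiles account create --resource-group $RG --name SH9-account --location $LOC
+az netappfiles pool create --resource-group $RG --account-name SH9-account --name SH9-pool \
+    --location $LOC --size 10 --service-level Premium --qos-type Manual
+
+# 3. Proximity placement group (PPG) and an availability set that uses it
+az ppg create --resource-group $RG --name SH9-ppg --location $LOC
+az vm availability-set create --resource-group $RG --name SH9-avset --location $LOC --ppg SH9-ppg
+```
+
+Requesting the AvSet pinning and creating the first HANA DB VM remain manual steps, as described above.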
+
+## Understand application volume group REST API parameters
+
+The following tables describe the generic application volume group creation using the REST API, detailing selected parameters and properties required for SAP HANA application volume group creation. Constraints and typical values for SAP HANA AVG creation are also specified where applicable.
+
+### Application volume group create
+
+In a create request, use the following URI format:
+```rest
+/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.NetApp/netAppAccounts/<accountName>/volumeGroups/<volumeGroupName>?api-version=<apiVersion>
+```
+
+| URI parameter | Description | Restrictions for SAP HANA |
+| - | -- | -- |
+| `subscriptionId` | Subscription ID | None |
+| `resourceGroupName` | Resource group name | None |
+| `accountName` | NetApp account name | None |
+| `volumeGroupName` | Volume group name | None. The recommended format is `<SID>-<Name>-<ID>`: <ol><li> `SID`: HANA System ID </li><li>`Name`: A string of your choosing</li><li>`ID`: Five-digit HANA host ID</li></ol> Example: `SH9-Testing-00003` (see the sample URI after this table) |
+| `apiVersion` | API version | Must be `2022-03-01` or later |
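+
+For example, with the sample subscription, resource group, account, and volume group names used in the placeholder list later in this article (illustrative values only), a populated create-request URI looks like this:
+
+```rest
+/subscriptions/11111111-2222-3333-4444-555555555555/resourceGroups/TestResourceGroup/providers/Microsoft.NetApp/netAppAccounts/TestAccount/volumeGroups/SH9-Testing-00003?api-version=2022-03-01
+```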
+
+### Request body
+
+The request body consists of the _outer_ parameters, the group properties, and an array of volumes to be created, each with their individual outer parameters and volume properties.
+
+The following table describes the request body parameters and group level properties required to create a SAP HANA application volume group.
+
+| Request body parameter | Description | Restrictions for SAP HANA |
+| - | -- | -- |
+| `Location` | Region in which to create the application volume group | None |
+| **GROUP PROPERTIES** | | |
+| `groupDescription` | Description for the group | Free-form string |
+| `applicationType` | Application type | Must be "SAP-HANA" |
+| `applicationIdentifier` | Application-specific identifier string, following application naming rules | The SAP System ID, which should follow the SAP naming rules, for example `SH9` |
+| `deploymentSpecId` | Deployment specification identifier defining the rules to deploy the specific application volume group type | Must be "20542149-bfca-5618-1879-9863dc6767f1" |
+| `volumes` | Array of volumes to be created (see the next table for volume-granular details) | Volume count depends upon the host configuration: <ul><li>Single-host (three to five volumes). **Required**: _data_, _log_, and _shared_. **Optional**: _data-backup_, _log-backup_.</li><li>Multiple-host (two volumes). **Required**: _data_ and _log_.</li></ul> |
+
+This table describes the request body parameters and volume properties for creating a volume in a SAP HANA application volume group.
+
+| Volume-level request parameter | Description | Restrictions for SAP HANA |
+| - | -- | -- |
+| `name` | Volume name | None. Examples of recommended volume names: <ul><li> `SH9-data-mnt00001`: data volume for a single-host setup</li><li> `SH9-log-backup`: log-backup volume for a single-host setup</li><li> `HSR-SH9-shared`: shared volume for an HSR secondary</li><li> `DR-SH9-data-backup`: data-backup volume for a CRR destination</li><li> `DR2-SH9-data-backup`: data-backup volume for a CRR destination of an HSR secondary</li></ul> |
+| `tags` | Volume tags | None, however, it may be helpful to add a tag to the HSR partner volume to identify the corresponding HSR partner volume. The Azure portal suggests the following tag for the HSR Secondary volumes: <ul><li> **Name**: `HSRPartnerStorageResourceId` </li><li> **Value:** `<Partner volume Id>` </li></ul> |
+| **Volume properties** | **Description** | **SAP HANA Value Restrictions** |
+| `creationToken` | Export path name, typically same as name above. | None. Example: `SH9-data-mnt00001` |
+| `throughputMibps` | QoS throughput | This must be between 1 MiB/s and 4500 MiB/s. You should set throughput based on the volume type. |
+| `usageThreshold` | Size of the volume in bytes. This must be in the 100 GiB to 100 TiB range. For instance, 100 GiB = 107374182400 bytes (see the conversion example after this table). | None. You should set volume size depending on the volume type. |
+| `exportPolicyRule` | Volume export policy rule | At least one export policy rule must be specified for SAP HANA. Only the following rule values can be modified for SAP HANA; the rest _must_ have their default values: <ul><li>`unixReadOnly`: should be false</li><li>`unixReadWrite`: should be true</li><li>`allowedClients`: specify allowed clients. Use `0.0.0.0/0` for no restrictions.</li><li>`hasRootAccess`: must be true to install SAP.</li><li>`chownMode`: Specify `chown` mode.</li><li>`nfsv41`: true for data, log, and shared volumes, optionally true for data-backup and log-backup volumes</li><li>`nfsv3`: optionally true for data-backup and log-backup volumes</li></ul> All other rule values _must_ be left at their defaults. |
+| `volumeSpecName` | Specifies the type of volume for the application volume group being created | SAP HANA volumes must have a value that is one of the following: <ul><li>"data"</li><li>"log"</li><li>"shared"</li><li>"data-backup"</li><li>"log-backup"</li></ul> |
+| `proximityPlacementGroup` | Resource ID of the proximity placement group (PPG) for proper placement of the volume. | <ul><li>The "data", "log", and "shared" volumes must each have a PPG specified, preferably a common PPG.</li><li>A PPG must be specified for the "data-backup" and "log-backup" volumes, but it will be ignored during placement.</li></ul> |
+| `subnetId` | Delegated subnet ID for Azure NetApp Files. | In a normal case where there are sufficient resources available, the number of IP addresses required in the subnet depends on the order of the application volume group created in the subscription: <ol><li> First application volume group created: the creation usually requires three to four IP addresses but can require up to five.</li><li> Second application volume group created: normally requires two IP addresses.</li><li>Third and subsequent application volume groups created: normally, no more IP addresses are required.</li></ol> |
+| `capacityPoolResourceId` | ID of the capacity pool | The capacity pool must be of type manual QoS. Generally, all SAP volumes are placed in a common capacity pool; however, this is not a requirement. |
+| `protocolTypes` | Protocol to use | This should be either NFSv3 or NFSv4.1 and should match the protocol specified in the Export Policy Rule described earlier in this table. |
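+
+Because `usageThreshold` is expressed in bytes, it can be convenient to compute the value from a size in GiB. A minimal shell example (the 512 GiB figure is only an illustration):
+
+```bash
+# Convert a volume size in GiB to the byte value expected by usageThreshold
+sizeGiB=512
+echo $(( sizeGiB * 1024 * 1024 * 1024 ))   # prints 549755813888
+```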
+
+## Example API request content: application volume group creation
+
+The examples in this section illustrate the values passed in the volume group creation request for various SAP HANA configurations. The examples demonstrate best practices for naming, sizing, and values as described in the tables.
+
+In the examples below, selected placeholders are specified and should be replaced with your own values. These include:
+1. `<SubscriptionId>`: Subscription ID. Example: `11111111-2222-3333-4444-555555555555`
+2. `<ResourceGroup>`: Resource group. Example: `TestResourceGroup`
+3. `<NtapAccount>`: NetApp account, for example: `TestAccount`
+4. `<VolumeGroupName>`: Volume group name, for example: `SH9-Test-00001`
+5. `<SubnetId>`: Subnet resource ID, for example: `/subscriptions/11111111-2222-3333-4444-555555555555/resourceGroups/myRP/providers/Microsoft.Network/virtualNetworks/testvnet3/subnets/SH9_Subnet`
+6. `<CapacityPoolResourceId>`: Capacity pool resource ID, for example: `/subscriptions/11111111-2222-3333-4444-555555555555/resourceGroups/myRG/providers/Microsoft.NetApp/netAppAccounts/account1/capacityPools/SH9_Pool`
+7. `<ProximityPlacementGroupResourceId>`: Proximity placement group, for example: `/subscriptions/11111111-2222-3333-4444-555555555555/resourceGroups/test/providers/Microsoft.Compute/proximityPlacementGroups/SH9_PPG`
+8. `<PartnerVolumeId>`: Partner volume ID (for HSR volumes).
+9. `<ExampleJson>`: JSON Request from one of the examples in the API request tables below.
++
+>[!NOTE]
+> The following samples use jq, a tool that helps format the JSON output in a user-friendly way. If you don't have or use jq, you should omit the `| jq xxx` snippets.
+
+## Creating SAP HANA volume groups using curl
+
+SAP HANA volume groups for the following examples can be created using a sample shell script that calls the API using curl:
+
+1. Extract the subscription ID. This example selects the subscription named `Pay-As-You-Go`; adjust the filter to match your subscription name:
+ ```bash
+ subId=$(az account list | jq ".[] | select (.name == \"Pay-As-You-Go\") | .id" -r)
+ echo "Subscription ID: $subId"
+ ```
+1. Create the access token:
+ ```bash
+ response=$(az account get-access-token)
+ token=$(echo $response | jq ".accessToken" -r)
+ echo "Token: $token"
+ ```
+1. Call the REST API using curl:
+ ```bash
+ echo ""
+ curl -X PUT -H "Authorization: Bearer $token" -H "Content-Type:application/json" -H "Accept:application/json" -d @<ExampleJson> https://management.azure.com/subscriptions/$subId/resourceGroups/<ResourceGroup>/providers/Microsoft.NetApp/netAppAccounts/<NtapAccount>/volumeGroups/<VolumeGroupName>?api-version=2022-03-01 | jq .
+ ```
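+1. Optionally, check the provisioning state of the volume group after the request returns. This minimal sketch issues a GET against the same resource URI and uses the same placeholders as the previous step:
+    ```bash
+    curl -X GET -H "Authorization: Bearer $token" https://management.azure.com/subscriptions/$subId/resourceGroups/<ResourceGroup>/providers/Microsoft.NetApp/netAppAccounts/<NtapAccount>/volumeGroups/<VolumeGroupName>?api-version=2022-03-01 | jq .properties.provisioningState
+    ```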
+
+### Example 1: Deploy volumes for the first HANA host for a single-host or multi-host configuration
+To create the five volumes (data, log, shared, data-backup, and log-backup) for a single-host SAP HANA system with SID `SH9`, use the following API request as shown in the JSON example.
+
+>[!NOTE]
+>You need to replace the placeholders and adapt the parameters to meet your requirements.
+
+#### Example single-host SAP HANA application volume group creation Request
+
+This example covers the data, log, shared, data-backup, and log-backup volumes and demonstrates best practices for naming, sizing, and throughputs. If you're configuring an HSR pair, this system serves as the primary.
+
+1. Save the JSON template as `sh9.json`:
+ ```json
+ {
+ "location": "westus",
+ "properties": {
+ "groupMetaData": {
+ "groupDescription": "Test group for SH9",
+ "applicationType": "SAP-HANA",
+ "applicationIdentifier": "SH9",
+ "deploymentSpecId": "20542149-bfca-5618-1879-9863dc6767f1"
+ },
+ "volumes": [
+ {
+ "name": "SH9-data-mnt00001",
+ "properties": {
+ "creationToken": "SH9-data-mnt00001",
+ "serviceLevel": "premium",
+ "throughputMibps": 400,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": <SubnetId>,
+ "usageThreshold": 107374182400,
+ "volumeSpecName": "data",
+ "capacityPoolResourceId": <CapacityPoolResourceId>,
+ "proximityPlacementGroup": <ProximityPlacementGroupResourceId>
+ }
+ },
+ {
+ "name": "SH9-log-mnt00001",
+ "properties": {
+ "creationToken": "SH9-log-mnt00001",
+ "serviceLevel": "premium",
+ "throughputMibps": 250,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": <SubnetId>,
+ "usageThreshold": 107374182400,
+ "volumeSpecName": "log",
+ "capacityPoolResourceId": <CapacityPoolResourceId>,
+ "proximityPlacementGroup": <ProximityPlacementGroupResourceId>
+ }
+ },
+ {
+ "name": "SH9-shared",
+ "properties": {
+ "creationToken": "SH9-shared",
+ "serviceLevel": "premium",
+ "throughputMibps": 64,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": <SubnetId>,
+ "usageThreshold": 1099511627776,
+ "volumeSpecName": "shared",
+ "capacityPoolResourceId": <CapacityPoolResourceId>,
+ "proximityPlacementGroup": <ProximityPlacementGroupResourceId>
+ }
+ },
+ {
+ "name": "SH9-data-backup",
+ "properties": {
+ "creationToken": "SH9-data-backup",
+ "serviceLevel": "premium",
+ "throughputMibps": 128,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": <SubnetId>,
+ "usageThreshold": 214748364800,
+ "volumeSpecName": "data-backup",
+ "capacityPoolResourceId": <CapacityPoolResourceId>,
+ "proximityPlacementGroup": <ProximityPlacementGroupResourceId>
+ }
+ },
+ {
+ "name": "SH9-log-backup",
+ "properties": {
+ "creationToken": "SH9-log-backup",
+ "serviceLevel": "premium",
+ "throughputMibps": 250,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": <SubnetId>,
+ "usageThreshold": 549755813888,
+ "volumeSpecName": "log-backup",
+ "capacityPoolResourceId": <CapacityPoolResourceId>,
+ "proximityPlacementGroup": <ProximityPlacementGroupResourceId>
+ }
+ }
+ ]
+ }
+ }
+ ```
+1. Extract the subscription ID:
+ ```bash
+ subId=$(az account list | jq ".[] | select (.name == \"Pay-As-You-Go\") | .id" -r)
+ echo "Subscription ID: $subId"
+ ```
+1. Create the access token:
+ ```bash
+ response=$(az account get-access-token)
+ token=$(echo $response | jq ".accessToken" -r)
+ echo "Token: $token"
+ ```
+1. Call the REST API using curl:
+ ```bash
+ echo ""
+ curl -X PUT -H "Authorization: Bearer $token" -H "Content-Type:application/json" -H "Accept:application/json" -d @sh9.json https://management.azure.com/subscriptions/$subId/resourceGroups/rg-westus/providers/Microsoft.NetApp/netAppAccounts/ANF-WestUS-test/volumeGroups/SAP-HANA-SH9-00001?api-version=2022-03-01 | jq .
+ ```
+1. Sample result:
+```json
+{
+ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-westus/providers/Microsoft.NetApp/netAppAccounts/ANF-WestUS-test/volumeGroups/SAP-HANA-SH9-00001",
+ "name": "ANF-WestUS-test/SAP-HANA-SH9-00001",
+ "type": "Microsoft.NetApp/netAppAccounts/volumeGroups",
+ "location": "westus",
+ "properties": {
+ "provisioningState": "Creating",
+ "groupMetaData": {
+ "groupDescription": "Test group for SH9",
+ "applicationType": "SAP-HANA",
+ "applicationIdentifier": "SH9",
+ "deploymentSpecId": "20542149-bfca-5618-1879-9863dc6767f1",
+ "volumesCount": 0
+ },
+ "volumes": [
+ {
+ "name": "SH9-data-mnt00001",
+ "properties": {
+ "serviceLevel": "premium",
+ "creationToken": "SH9-data-mnt00001",
+ "usageThreshold": 107374182400,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-westus/providers/Microsoft.Network/virtualNetworks/rg-westus-vnet/subnets/default",
+ "throughputMibps": 1,
+ "capacityPoolResourceId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-westus/providers/Microsoft.NetApp/netAppAccounts/ANF-WestUS-test/capacityPools/avg",
+ "proximityPlacementGroup": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-westus/providers/Microsoft.Compute/proximityPlacementGroups/ppg-westus-test",
+ "volumeSpecName": "data",
+ "maximumNumberOfFiles": 100000000
+ }
+ },
+ {
+ "name": "SH9-log-mnt00001",
+ "properties": {
+ "serviceLevel": "premium",
+ "creationToken": "SH9-log-mnt00001",
+ "usageThreshold": 107374182400,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-westus/providers/Microsoft.Network/virtualNetworks/rg-westus-vnet/subnets/default",
+ "throughputMibps": 1,
+ "capacityPoolResourceId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-westus/providers/Microsoft.NetApp/netAppAccounts/ANF-WestUS-test/capacityPools/avg",
+ "proximityPlacementGroup": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-westus/providers/Microsoft.Compute/proximityPlacementGroups/ppg-westus-test",
+ "volumeSpecName": "log",
+ "maximumNumberOfFiles": 100000000
+ }
+ },
+ {
+ "name": "SH9-shared",
+ "properties": {
+ "serviceLevel": "premium",
+ "creationToken": "SH9-shared",
+ "usageThreshold": 107374182400,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-westus/providers/Microsoft.Network/virtualNetworks/rg-westus-vnet/subnets/default",
+ "throughputMibps": 1,
+ "capacityPoolResourceId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-westus/providers/Microsoft.NetApp/netAppAccounts/ANF-WestUS-test/capacityPools/avg",
+ "proximityPlacementGroup": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-westus/providers/Microsoft.Compute/proximityPlacementGroups/ppg-westus-test",
+ "volumeSpecName": "shared",
+ "maximumNumberOfFiles": 100000000
+ }
+ },
+ {
+ "name": "SH9-data-backup",
+ "properties": {
+ "serviceLevel": "premium",
+ "creationToken": "SH9-data-backup",
+ "usageThreshold": 107374182400,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-westus/providers/Microsoft.Network/virtualNetworks/rg-westus-vnet/subnets/default",
+ "throughputMibps": 1,
+ "capacityPoolResourceId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-westus/providers/Microsoft.NetApp/netAppAccounts/ANF-WestUS-test/capacityPools/avg",
+ "proximityPlacementGroup": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-westus/providers/Microsoft.Compute/proximityPlacementGroups/ppg-westus-test",
+ "volumeSpecName": "data-backup",
+ "maximumNumberOfFiles": 100000000
+ }
+ },
+ {
+ "name": "SH9-log-backup",
+ "properties": {
+ "serviceLevel": "premium",
+ "creationToken": "SH9-log-backup",
+ "usageThreshold": 107374182400,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-westus/providers/Microsoft.Network/virtualNetworks/rg-westus-vnet/subnets/default",
+ "throughputMibps": 1,
+ "capacityPoolResourceId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-westus/providers/Microsoft.NetApp/netAppAccounts/ANF-WestUS-test/capacityPools/avg",
+ "proximityPlacementGroup": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-westus/providers/Microsoft.Compute/proximityPlacementGroups/ppg-westus-test",
+ "volumeSpecName": "log-backup",
+ "maximumNumberOfFiles": 100000000
+ }
+ }
+ ]
+ }
+}
+```
+
+### Example 2: Deploy volumes for an additional HANA Host for a multiple-host HANA configuration
+
+To create a multiple-host HANA system, you need to add additional hosts to the previously deployed HANA hosts. Additional hosts only require a data and a log volume for each host you add. In this example, a volume group is added for host number `00002`.
+
+This example is similar to the single-host system request in the earlier example, except it only contains the data and log volumes.
+
+```json
+{
+ "location": "westus",
+ "properties": {
+ "groupMetaData": {
+ "groupDescription": "Test group for SH9, host #2",
+ "applicationType": "SAP-HANA",
+ "applicationIdentifier": "SH9",
+ "deploymentSpecId": "20542149-bfca-5618-1879-9863dc6767f1"
+ },
+ "volumes": [
+ {
+ "name": "SH9-data-mnt00002",
+ "properties": {
+ "creationToken": "SH9-data-mnt00002",
+ "serviceLevel": "premium",
+ "throughputMibps": 400,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": <SubnetId>,
+ "usageThreshold": 107374182400,
+ "volumeSpecName": "data",
+ "capacityPoolResourceId": <CapacityPoolResourceId>,
+ "proximityPlacementGroup": <ProximityPlacementGroupResourceId>
+ }
+ },
+ {
+ "name": "SH9-log-mnt00002",
+ "properties": {
+ "creationToken": "SH9-log-mnt00002",
+ "serviceLevel": "premium",
+ "throughputMibps": 250,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": <SubnetId>,
+ "usageThreshold": 107374182400,
+ "volumeSpecName": "log",
+ "capacityPoolResourceId": <CapacityPoolResourceId>,
+ "proximityPlacementGroup": <ProximityPlacementGroupResourceId>
+ }
+ }
+ ]
+ }
+}
+```
+
+### Example 3: Deploy volumes for a secondary HANA system using HANA system replication
+
+HANA System Replication (HSR) is used to set up two HANA databases that use the same SAP System Identifier (SID) but have their own individual volumes. Typically, HSR setups are in different zones and therefore require different proximity placement groups.
+
+Volumes for a secondary database need to have different volume names. In this example, volumes are created for the secondary HANA system of an HSR pair in which the single-host HANA system from example one acts as the primary.
+
+It's recommended that you:
+1. Use the same volume names as the primary volumes using the prefix `HSR-`.
+1. Add Azure tags to the volumes to identify the corresponding primary volumes (a CLI sketch for tagging an existing volume follows this list):
+ * Name: `HSRPartnerStorageResourceId`
+ * Value: `<Partner Volume ID>`
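+
+If you created the secondary volumes without these tags, you can add the recommended tag afterward. The following minimal sketch uses the generic `az tag update` command; `<SecondaryVolumeId>` is a hypothetical placeholder for the resource ID of the HSR secondary volume being tagged:
+
+```bash
+# Merge the recommended HSR tag onto an existing secondary volume.
+# <SecondaryVolumeId> and <PartnerVolumeId> are placeholders - replace them with your volume resource IDs.
+az tag update --resource-id <SecondaryVolumeId> --operation Merge \
+    --tags HSRPartnerStorageResourceId=<PartnerVolumeId>
+```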
+
+This example encompasses the creation of data, log, shared, data-backup, and log-backup volumes, demonstrating best practices for naming, sizing, and throughputs.
+
+```json
+{
+ "location": "westus",
+ "properties": {
+ "groupMetaData": {
+ "groupDescription": "HSR Secondary: Test group for SH9",
+ "applicationType": "SAP-HANA",
+ "applicationIdentifier": "SH9",
+ "deploymentSpecId": "20542149-bfca-5618-1879-9863dc6767f1"
+ },
+ "volumes": [
+ {
+ "name": "HSR-SH9-data-mnt00001",
+ "tags": {"HSRPartnerStorageResourceId": "<PartnerVolumeId>"},
+ "properties": {
+ "creationToken": "HSR-SH9-data-mnt00001",
+ "serviceLevel": "premium",
+ "throughputMibps": 400,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": <SubnetId>,
+ "usageThreshold": 107374182400,
+ "volumeSpecName": "data",
+ "capacityPoolResourceId": <CapacityPoolResourceId2>,
+ "proximityPlacementGroup": <ProximityPlacementGroupResourceId2>
+ }
+ },
+ {
+ "name": "HSR-SH9-log-mnt00001",
+ "tags": {"HSRPartnerStorageResourceId": "<PartnerVolumeId>"},
+ "properties": {
+ "creationToken": "HSR-SH9-log-mnt00001",
+ "serviceLevel": "premium",
+ "throughputMibps": 250,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": <SubnetId>,
+ "usageThreshold": 107374182400,
+ "volumeSpecName": "log",
+ "capacityPoolResourceId": <CapacityPoolResourceId2>,
+ "proximityPlacementGroup": <ProximityPlacementGroupResourceId2>
+ }
+ },
+ {
+ "name": "HSR-SH9-shared",
+ "tags": {"HSRPartnerStorageResourceId": "<PartnerVolumeId>"},
+ "properties": {
+ "creationToken": "HSR-SH9-shared",
+ "serviceLevel": "premium",
+ "throughputMibps": 64,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": <SubnetId>,
+ "usageThreshold": 1099511627776,
+ "volumeSpecName": "shared",
+ "capacityPoolResourceId": <CapacityPoolResourceId2>,
+ "proximityPlacementGroup": <ProximityPlacementGroupResourceId2>
+ }
+ },
+ {
+ "name": "HSR-SH9-data-backup",
+ "tags": {"HSRPartnerStorageResourceId": "<PartnerVolumeId>"},
+ "properties": {
+ "creationToken": "HSR-SH9-data-backup",
+ "serviceLevel": "premium",
+ "throughputMibps": 128,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": <SubnetId>,
+ "usageThreshold": 214748364800,
+ "volumeSpecName": "data-backup",
+ "capacityPoolResourceId": <CapacityPoolResourceId2>,
+ "proximityPlacementGroup": <ProximityPlacementGroupResourceId2>
+ }
+ },
+ {
+ "name": "HSR-SH9-log-backup",
+ "tags": {"HSRPartnerStorageResourceId": "<PartnerVolumeId>"},
+ "properties": {
+ "creationToken": "HSR-SH9-log-backup",
+ "serviceLevel": "premium",
+ "throughputMibps": 250,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": <SubnetId>,
+ "usageThreshold": 549755813888,
+ "volumeSpecName": "log-backup",
+ "capacityPoolResourceId": <CapacityPoolResourceId2>,
+ "proximityPlacementGroup": <ProximityPlacementGroupResourceId2>
+ }
+ }
+ ]
+ }
+}
+```
+
+### Example 4: Deploy volumes for a disaster recovery (DR) system using cross-region replication
+
+Cross-region replication is one way to set up a disaster recovery configuration for HANA. The volumes of the HANA database in the DR region are replicated on the storage side using cross-region replication, in contrast to HSR, which replicates at the application level and requires the HANA VMs to be deployed and running. Refer to [Add volumes for an SAP HANA system as a DR system using cross-region replication](application-volume-group-disaster-recovery.md) to understand which volumes require a cross-region replication relationship (data, shared, log-backup), which don't allow one (log), and which are optional (data-backup).
+
+In this example, the following placeholders are specified and should be replaced by values specific to your configuration:
+1. `<CapacityPoolResourceId3>`: DR capacity pool resource ID, for example:
+`/subscriptions/11111111-2222-3333-4444-555555555555/resourceGroups/myRG/providers/Microsoft.NetApp/netAppAccounts/account1/capacityPools/DR_SH9_HSR_Pool`
+2. `<ProximityPlacementGroupResourceId3>`: DR proximity placement group, for example:`/subscriptions/11111111-2222-3333-4444-555555555555/resourceGroups/test/providers/Microsoft.Compute/proximityPlacementGroups/DR_SH9_PPG`
+3. `<SrcVolumeId_data>`, `<SrcVolumeId_shared>`, `<SrcVolumeId_data-backup>`, `<SrcVolumeId_log-backup>`: cross-region replication source volume IDs for the data, shared, data-backup, and log-backup destination volumes. See the sketch after this list for one way to retrieve these IDs.
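+
+One way to look up the source volume IDs for these placeholders is to query the primary system's volumes with the Azure CLI. This minimal sketch reuses the sample resource group, account, and pool names from the earlier sample result; replace them with your own:
+
+```bash
+# Retrieve the resource ID of a source data volume for use as <SrcVolumeId_data>
+az netappfiles volume show \
+    --resource-group rg-westus \
+    --account-name ANF-WestUS-test \
+    --pool-name avg \
+    --name SH9-data-mnt00001 \
+    --query id --output tsv
+```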
+
+```json
+{
+ "location": "eastus",
+ "properties": {
+ "groupMetaData": {
+ "groupDescription": "Data Protection: Test group for SH9",
+ "applicationType": "SAP-HANA",
+ "applicationIdentifier": "SH9",
+ "deploymentSpecId": "20542149-bfca-5618-1879-9863dc6767f1"
+ },
+ "volumes": [
+ {
+ "name": "DR-SH9-data-mnt00001",
+ "properties": {
+ "creationToken": "DR-SH9-data-mnt00001",
+ "serviceLevel": "premium",
+ "throughputMibps": 400,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": <SubnetId>,
+ "usageThreshold": 107374182400,
+ "volumeSpecName": "data",
+ "capacityPoolResourceId": <CapacityPoolResourceId3>,
+ "proximityPlacementGroup": <ProximityPlacementGroupResourceId3>,
+ "volumeType": "DataProtection",
+ "dataProtection": {
+ "replication": {
+ "endpointType": "dst",
+ "remoteVolumeResourceId": <SrcVolumeId_data>,
+ "replicationSchedule": "hourly"
+ }
+ }
+ }
+ },
+ {
+ "name": "DR-SH9-log-mnt00001",
+ "properties": {
+ "creationToken": "DR-SH9-log-mnt00001",
+ "serviceLevel": "premium",
+ "throughputMibps": 250,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": <SubnetId>,
+ "usageThreshold": 107374182400,
+ "volumeSpecName": "log",
+ "capacityPoolResourceId": <CapacityPoolResourceId3>,
+ "proximityPlacementGroup": <ProximityPlacementGroupResourceId3>
+ }
+ },
+ {
+ "name": "DR-SH9-shared",
+ "properties": {
+ "creationToken": "DR-SH9-shared",
+ "serviceLevel": "premium",
+ "throughputMibps": 64,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": <SubnetId>,
+ "usageThreshold": 1099511627776,
+ "volumeSpecName": "shared",
+ "capacityPoolResourceId": <CapacityPoolResourceId3>,
+ "proximityPlacementGroup": <ProximityPlacementGroupResourceId3>,
+ "volumeType": "DataProtection",
+ "dataProtection": {
+ "replication": {
+ "endpointType": "dst",
+ "remoteVolumeResourceId": <SrcVolumeId_shared>,
+ "replicationSchedule": "hourly"
+ }
+ }
+ }
+ },
+ {
+ "name": "DR-SH9-data-backup",
+ "properties": {
+ "creationToken": "DR-SH9-data-backup",
+ "serviceLevel": "premium",
+ "throughputMibps": 128,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": <SubnetId>,
+ "usageThreshold": 214748364800,
+ "volumeSpecName": "data-backup",
+ "capacityPoolResourceId": <CapacityPoolResourceId3>,
+ "proximityPlacementGroup": <ProximityPlacementGroupResourceId3>,
+ "volumeType": "DataProtection",
+ "dataProtection": {
+ "replication": {
+ "endpointType": "dst",
+ "remoteVolumeResourceId": <SrcVolumeId_data-backup>,
+ "replicationSchedule": "daily"
+ }
+ }
+ }
+ },
+ {
+ "name": "DR-SH9-log-backup",
+ "properties": {
+ "creationToken": "DR-SH9-log-backup",
+ "serviceLevel": "premium",
+ "throughputMibps": 250,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": <SubnetId>,
+ "usageThreshold": 549755813888,
+ "volumeSpecName": "log-backup",
+ "capacityPoolResourceId": <CapacityPoolResourceId3>,
+ "proximityPlacementGroup": <ProximityPlacementGroupResourceId3>,
+ "volumeType": "DataProtection",
+ "dataProtection": {
+ "replication": {
+ "endpointType": "dst",
+ "remoteVolumeResourceId": <SrcVolumeId_log-backup>,
+ "replicationSchedule": "_10minutely"
+ }
+ }
+ }
+ }
+ ]
+ }
+}
+```
+
+## Next steps
+
+* [Deploy the first SAP HANA host using application volume group for SAP HANA](application-volume-group-deploy-first-host.md)
+* [Add hosts to a multiple-host SAP HANA system using application volume group for SAP HANA](application-volume-group-add-hosts.md)
+* [Add volumes for an SAP HANA system as a secondary database in HSR](application-volume-group-add-volume-secondary.md)
+* [Add volumes for an SAP HANA system as a DR system using cross-region replication](application-volume-group-disaster-recovery.md).
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-active-directory-connections.md
Several features of Azure NetApp Files require that you have an Active Directory
`KerberosEncryptionType` is a multivalued parameter that supports AES-128 and AES-256 values.
+ For more information, refer to the [Set-ADUser documentation](/powershell/module/activedirectory/set-aduser).
+ * If you have a requirement to enable and disable certain Kerberos encryption types for Active Directory computer accounts for domain-joined Windows hosts used with Azure NetApp Files, you must use the Group Policy `Network Security: Configure Encryption types allowed for Kerberos`. Do not set the registry key `HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Policies\System\Kerberos\Parameters\SupportedEncryptionTypes`. Doing this will break Kerberos authentication with Azure NetApp Files for the Windows host where this registry key was manually set.
Several features of Azure NetApp Files require that you have an Active Directory
For more information, refer to [Network security: Configure encryption types allowed for Kerberos](/windows/security/threat-protection/security-policy-settings/network-security-configure-encryption-types-allowed-for-kerberos) or [Windows Configurations for Kerberos Supported Encryption Types](/archive/blogs/openspecification/windows-configurations-for-kerberos-supported-encryption-type)
-* For more information, refer to the [Set-ADUser documentation](/powershell/module/activedirectory/set-aduser).
- ## Create an Active Directory connection 1. From your NetApp account, select **Active Directory connections**, then select **Join**.
azure-netapp-files Cross Region Replication Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-introduction.md
na Previously updated : 09/14/2022 Last updated : 09/28/2022
Azure NetApp Files volume replication is supported between various [Azure region
| US Government | US Gov Arizona | US Gov Virginia | >[!NOTE]
->There may be a discrepancy in the size of snapshots between source and destination. This discrepancy is expected. To learn more about snapshots, refer to [How Azure NetApp Files snapshots work](snapshots-introduction.md).
+>There may be a discrepancy in the size and number of snapshots between source and destination. This discrepancy is expected. Snapshot policies and replication schedules will influence the number of snapshots. Snapshot policies and replication schedules, combined with the amount of data changed between snapshots, will influence the size of snapshots. To learn more about snapshots, refer to [How Azure NetApp Files snapshots work](snapshots-introduction.md).
## Service-level objectives
azure-netapp-files Performance Linux Nfs Read Ahead https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-linux-nfs-read-ahead.md
na Previously updated : 02/02/2022 Last updated : 09/29/2022 # Linux NFS read-ahead best practices for Azure NetApp Files
To persistently set read-ahead for NFS mounts, `udev` rules can be written as fo
1. Create and test `/etc/udev/rules.d/99-nfs.rules`:
- `SUBSYSTEM=="bdi", ACTION=="add", PROGRAM="/bin/awk -v bdi=$kernel 'BEGIN{ret=1} {if ($4 == bdi) {ret=0}} END{exit ret}' /proc/fs/nfsfs/volumes", ATTR{read_ahead_kb}="15380"`
+ `SUBSYSTEM=="bdi", ACTION=="add", PROGRAM="<absolute_path>/awk -v bdi=$kernel 'BEGIN{ret=1} {if ($4 == bdi) {ret=0}} END{exit ret}' /proc/fs/nfsfs/volumes", ATTR{read_ahead_kb}="15380"`
2. Apply the `udev` rule:
azure-resource-manager Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/visual-studio-code.md
Title: Create Bicep files by using Visual Studio Code description: Describes how to create Bicep files by using Visual Studio Code Previously updated : 09/14/2022 Last updated : 09/29/2022 # Create Bicep files by using Visual Studio Code
This command creates a parameter file in the same folder as the Bicep file. The
The `insert resource` command adds a resource declaration in the Bicep file by providing the resource ID of an existing resource. After you select **Insert Resource**, enter the resource ID in the command palette. It takes a few moments to insert the resource.
-You can find the resource ID from the Azure portal, or by using:
+You can find the resource ID by using one of these methods:
-# [CLI](#tab/CLI)
+- Use [Azure Resource extension for VSCode](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azureresourcegroups).
-```azurecli
-az resource list
-```
+ :::image type="content" source="./media/visual-studio-code/visual-studio-code-azure-resources-extension.png" alt-text="Screenshot of Visual Studio Code Azure Resources extension.":::
-# [PowerShell](#tab/PowerShell)
+- Use the [Azure portal](https://portal.azure.com).
+- Use Azure CLI or Azure PowerShell:
-```azurepowershell
-Get-AzResource
-```
+ # [CLI](#tab/CLI)
-
+ ```azurecli
+ az resource list
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ Get-AzResource
+ ```
+
+
Similar to exporting templates, the process tries to create a usable resource. However, most of the inserted resources require some modification before they can be used to deploy Azure resources.
azure-resource-manager Publish Service Catalog App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/publish-service-catalog-app.md
Previously updated : 08/22/2022 Last updated : 09/29/2022 # Quickstart: Create and publish an Azure Managed Application definition
az storage account create \
--location eastus \ --sku Standard_LRS \ --kind StorageV2
+```
+
+After you create the storage account, add the role assignment _Storage Blob Data Contributor_ to the storage account scope. Assign access to your Azure Active Directory user account. Depending on your access level in Azure, you might need other permissions assigned by your administrator. For more information, see [Assign an Azure role for access to blob data](../../storage/blobs/assign-azure-role-data-access.md).
+
+After you add the role to the storage account, it takes a few minutes to become active in Azure. You can then use the parameter `--auth-mode login` in the commands to create the container and upload the file.
+```azurecli-interactive
az storage container create \ --account-name demostorageaccount \ --name appcontainer \
+ --auth-mode login \
--public-access blob az storage blob upload \ --account-name demostorageaccount \ --container-name appcontainer \
+ --auth-mode login \
--name "app.zip" \ --file "./app.zip"- ```
-When you run the Azure CLI command to create the container, you might see a warning message about credentials, but the command will be successful. The reason is because although you own the storage account you assign roles like _Storage Blob Data Contributor_ to the storage account scope. For more information, see [Assign an Azure role for access to blob data](../../storage/blobs/assign-azure-role-data-access.md). After you add a role, it takes a few minutes to become active in Azure. You can then append the command with `--auth-mode login` and resolve the warning message.
+For more information about storage authentication, see [Choose how to authorize access to blob data with Azure CLI](../../storage/blobs/authorize-data-operations-cli.md).
In this section you'll get identity information from Azure Active Directory, cre
### Create an Azure Active Directory user group or application
-The next step is to select a user group, user, or application for managing the resources for the customer. This identity has permissions on the managed resource group according to the role that is assigned. The role can be any Azure built-in role like Owner or Contributor. To create a new Active Directory user group, see [Create a group and add members in Azure Active Directory](../../active-directory/fundamentals/active-directory-groups-create-azure-portal.md).
+The next step is to select a user group, user, or application for managing the resources for the customer. This identity has permissions on the managed resource group according to the role that's assigned. The role can be any Azure built-in role like Owner or Contributor. To create a new Active Directory user group, see [Create a group and add members in Azure Active Directory](../../active-directory/fundamentals/active-directory-groups-create-azure-portal.md).
-You need the object ID of the user group to use for managing the resources.
+This example uses a user group, so you need the object ID of the user group to use for managing the resources. Replace the placeholder `mygroup` with your group's name.
# [PowerShell](#tab/azure-powershell)
az group create --name appDefinitionGroup --location westcentralus
Create the managed application definition resource. In the `Name` parameter, replace the placeholder `demostorageaccount` with your unique storage account name.
+The `blob` command that's run from Azure PowerShell or Azure CLI creates a variable that's used to get the URL for the package _.zip_ file. That variable is used in the command that creates the managed application definition.
+ # [PowerShell](#tab/azure-powershell) ```azurepowershell-interactive
New-AzManagedApplicationDefinition `
blob=$(az storage blob url \ --account-name demostorageaccount \ --container-name appcontainer \
+ --auth-mode login \
--name app.zip --output tsv) az managedapp definition create \
azure-video-indexer Monitor Video Indexer Data Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/monitor-video-indexer-data-reference.md
See [Monitoring Azure Video Indexer](monitor-video-indexer.md) for details on co
## Metrics
-Azure Video Indexer currently does not support any monitoring on metrics.
+Azure Video Indexer currently does not support any metrics monitoring.
+ <!-- REQUIRED if you support Metrics. If you don't, keep the section but call that out. Some services are only onboarded to logs. <!-- Please keep headings in this order -->
For more information, see a list of [all platform metrics supported in Azure Mon
## Metric dimensions
-Azure Video Indexer currently does not support any monitoring on metrics.
+Azure Video Indexer currently does not support any metrics monitoring.
<!-- REQUIRED. Please keep headings in this order --> <!-- If you have metrics with dimensions, outline it here. If you have no dimensions, say so. Questions email azmondocs@microsoft.com -->
Resource Provider and Type: [Microsoft.VideoIndexer/accounts](/azure/azure-monit
| Category | Display Name | Additional information | |:|:-|| | VIAudit | Azure Video Indexer Audit Logs | Logs are produced from both the Video Indexer portal and the REST API. |
-| IndexingLogs | Indexing Logs | Azure Video Indexer indexing logs to monitor all files uploads, indexing jobs and Re-indexing when needed. |
+| IndexingLogs | Indexing Logs | Azure Video Indexer indexing logs to monitor all files uploads, indexing and reindexing jobs. |
<!-- --**END Examples** - -->
NOTE: YOU WILL NOW HAVE TO MANUALLY MAINTAIN THIS SECTION to make sure it stays
| Table | Description | Additional information | |:|:-||
-| [VIAudit](/azure/azure-monitor/reference/tables/tables-resourcetype#azure-video-indexer)<!-- (S/azure/azure-monitor/reference/tables/viaudit)--> | <!-- description copied from previous link --> Events produced using Azure Video Indexer [portal](https://aka.ms/VIportal) or [REST API](https://aka.ms/vi-dev-portal). | |
-|VIIndexing| Events produced using Azure Video Indexer [upload](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) and [Re-indexing](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video) APIs. |
+| [VIAudit](/azure/azure-monitor/reference/tables/tables-resourcetype#azure-video-indexer)<!-- (S/azure/azure-monitor/reference/tables/viaudit)--> | <!-- description copied from previous link --> Events produced using the Azure Video Indexer [website](https://aka.ms/VIportal) or the [REST API portal](https://aka.ms/vi-dev-portal). | |
+|VIIndexing| Events produced using the Azure Video Indexer [upload](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) and [re-index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video) APIs. |
<!--| [AzureMetrics](/azure/azure-monitor/reference/tables/azuremetrics) | <!-- description copied from previous link --> <!--Metric data emitted by Azure services that measure their health and performance. | *TODO other important information about this type | | etc. | | |
backup Backup Azure Arm Vms Prepare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-arm-vms-prepare.md
Title: Back up Azure VMs in a Recovery Services vault description: Describes how to back up Azure VMs in a Recovery Services vault using the Azure Backup Previously updated : 06/01/2021 Last updated : 09/29/2022+++ # Back up Azure VMs in a Recovery Services vault
Modify the storage replication type as follows:
To apply a backup policy to your Azure VMs, follow these steps:
-1. Navigate to Backup center and click **+Backup** from the **Overview** tab.
+1. Go to the Backup center and click **+Backup** from the **Overview** tab.
![Backup button](./media/backup-azure-arm-vms-prepare/backup-button.png)
If you selected to create a new backup policy, fill in the policy settings.
2. In **Backup schedule**, specify when backups should be taken. You can take daily or weekly backups for Azure VMs. 3. In **Instant Restore**, specify how long you want to retain snapshots locally for instant restore. * When you restore, backed up VM disks are copied from storage, across the network to the recovery storage location. With instant restore, you can leverage locally stored snapshots taken during a backup job, without waiting for backup data to be transferred to the vault.
- * You can retain snapshots for instant restore for between one to five days. Two days is the default setting.
+ * You can retain snapshots for instant restore for one to five days. The default setting is *two days*.
4. In **Retention range**, specify how long you want to keep your daily or weekly backup points. 5. In **Retention of monthly backup point** and **Retention of yearly backup point**, specify whether you want to keep a monthly or yearly backup of your daily or weekly backups. 6. Select **OK** to save the policy.
If you selected to create a new backup policy, fill in the policy settings.
![New backup policy](./media/backup-azure-arm-vms-prepare/new-policy.png) > [!NOTE]
- > Azure Backup doesn't support automatic clock adjustment for daylight-saving changes for Azure VM backups. As time changes occur, modify backup policies manually as required.
+>- Azure Backup doesn't support automatic clock adjustment for daylight-saving changes for Azure VM backups. As time changes occur, modify backup policies manually as required.
+>- If you want hourly backups, then you can configure *Enhanced backup policy*. For more information, see [Back up an Azure VM using Enhanced policy](backup-azure-vms-enhanced-policy.md#create-an-enhanced-policy-and-configure-vm-backup).
## Trigger the initial backup The initial backup will run in accordance with the schedule, but you can run it immediately as follows:
-1. Navigate to Backup center and select the **Backup Instances** menu item.
+1. Go to the Backup center and select the **Backup Instances** menu item.
1. Select **Azure Virtual machines** as the **Datasource type**. Then search for the VM that you have configured for backup. 1. Right-click the relevant row or select the more icon (…), and then click **Backup Now**. 1. In **Backup Now**, use the calendar control to select the last day that the recovery point should be retained. Then select **OK**.
center-sap-solutions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/overview.md
You can use ACSS to deploy the following types of SAP systems:
For existing SAP systems that run on Azure, there's a simple registration experience. You can register the following types of existing SAP systems that run on Azure: - An SAP system that runs on SAP NetWeaver or ABAP stack-- SAP systems that run on SUSE and RHEL Linux operating systems
+- SAP systems that run on Windows, SUSE, and RHEL Linux operating systems
- SAP systems that run on HANA, DB2, SQL Server, Oracle, Max DB, or SAP ASE databases ACSS brings services, tools and frameworks together to provide an end-to-end unified experience for deployment and management of SAP workloads on Azure, creating the foundation for you to build innovative solutions for your unique requirements.
After you create a VIS, you can:
- Start and stop the SAP application tier. - Get quality checks and insights about your SAP system. - Monitor your Azure infrastructure metrics for your SAP system resources. For example, the CPU percentage used for ASCS and Application Server VMs, or disk input/output operations per second (IOPS).
+- Analyze the cost of running your SAP system on Azure (VMs, disks, and load balancers).
## Next steps
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md
na Previously updated : 9/19/2022 Last updated : 9/29/2022
The following tables show the Microsoft Security Response Center (MSRC) updates
## September 2022 Guest OS
->[!NOTE]
-
->The September Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the September Guest OS. This list is subject to change.
-
-| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
-| | | | | |
-| Rel 22-09 | [5017315] | Latest Cumulative Update(LCU) | 6.48 | Sep 13, 2022 |
-| Rel 22-09 | [5016618] | IE Cumulative Updates | 2.128, 3.115, 4.108 | Aug 9, 2022 |
-| Rel 22-09 | [5017316] | Latest Cumulative Update(LCU) | 7.16 | Sep 13, 2022 |
-| Rel 22-09 | [5017305] | Latest Cumulative Update(LCU) | 5.72 | Sep 13, 2022 |
-| Rel 22-09 | [5013641] | .NET Framework 3.5 and 4.7.2 Cumulative Update | 6.48 | May 10, 2022 |
-| Rel 22-09 | [5017397] | Servicing Stack Update | 2.128 | Sep 13, 2022 |
-| Rel 22-09 | [5017361] | September '22 Rollup | 2.128 | Sep 13, 2022 |
-| Rel 22-09 | [5013637] | .NET Framework 3.5 Security and Quality Rollup LKG | 2.128 | Sep 13, 2022 |
-| Rel 22-09 | [5013644] | .NET Framework 4.6.2 Security and Quality Rollup LKG | 2.128 | May 10, 2022 |
-| Rel 22-09 | [5016263] | Servicing Stack Update | 3.115 | July 12, 2022 |
-| Rel 22-09 | [5017370] | September '22 Rollup | 3.115 | Sep 13, 2022 |
-| Rel 22-09 | [5013635] | .NET Framework 3.5 Security and Quality Rollup LKG | 3.115 | Sep 13, 2022 |
-| Rel 22-09 | [5013642] | .NET Framework 4.6.2 Security and Quality Rollup LKG | 3.115 | May 10, 2022 |
-| Rel 22-09 | [5017398] | Servicing Stack Update | 4.108 | Sep 13, 2022 |
-| Rel 22-09 | [5017367] | Monthly Rollup | 4.108 | Sep 13, 2022 |
-| Rel 22-09 | [5013638] | .NET Framework 3.5 Security and Quality Rollup LKG | 4.108 | Jun 14, 2022 |
-| Rel 22-09 | [5013643] | .NET Framework 4.6.2 Security and Quality Rollup LKG | 4.108 | May 10, 2022 |
-| Rel 22-09 | [4578013] | OOB Standalone Security Update | 4.108 | Aug 19, 2020 |
-| Rel 22-09 | [5017396] | Servicing Stack Update | 5.72 | Sep 13, 2022 |
-| Rel 22-09 | [4494175] | Microcode | 5.72 | Sep 1, 2020 |
-| Rel 22-09 | 5015896 | Servicing Stack Update | 6.48 | Sep 1, 2020 |
-| Rel 22-09 | [5013626] | .NET Framework 4.8 Security and Quality Rollup LKG | 6.48 | May 10, 2022 |
+
+| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
+| | | | | |
+| Rel 22-09 | [5017315] | Latest Cumulative Update(LCU) | [6.48] | Sep 13, 2022 |
+| Rel 22-09 | [5016618] | IE Cumulative Updates | [2.128], [3.115], [4.108] | Aug 9, 2022 |
+| Rel 22-09 | [5017316] | Latest Cumulative Update(LCU) | [7.16] | Sep 13, 2022 |
+| Rel 22-09 | [5017305] | Latest Cumulative Update(LCU) | [5.72] | Sep 13, 2022 |
+| Rel 22-09 | [5013641] | .NET Framework 3.5 and 4.7.2 Cumulative Update | [6.48] | May 10, 2022 |
+| Rel 22-09 | [5017397] | Servicing Stack Update | [2.128] | Sep 13, 2022 |
+| Rel 22-09 | [5017361] | September '22 Rollup | [2.128] | Sep 13, 2022 |
+| Rel 22-09 | [5013637] | .NET Framework 3.5 Security and Quality Rollup LKG | [2.128] | Sep 13, 2022 |
+| Rel 22-09 | [5013644] | .NET Framework 4.6.2 Security and Quality Rollup LKG | [2.128] | May 10, 2022 |
+| Rel 22-09 | [5016263] | Servicing Stack Update | [3.115] | July 12, 2022 |
+| Rel 22-09 | [5017370] | September '22 Rollup | [3.115] | Sep 13, 2022 |
+| Rel 22-09 | [5013635] | .NET Framework 3.5 Security and Quality Rollup LKG | [3.115] | Sep 13, 2022 |
+| Rel 22-09 | [5013642] | .NET Framework 4.6.2 Security and Quality Rollup LKG | [3.115] | May 10, 2022 |
+| Rel 22-09 | [5017398] | Servicing Stack Update | [4.108] | Sep 13, 2022 |
+| Rel 22-09 | [5017367] | Monthly Rollup | [4.108] | Sep 13, 2022 |
+| Rel 22-09 | [5013638] | .NET Framework 3.5 Security and Quality Rollup LKG | [4.108] | Jun 14, 2022 |
+| Rel 22-09 | [5013643] | .NET Framework 4.6.2 Security and Quality Rollup LKG | [4.108] | May 10, 2022 |
+| Rel 22-09 | [4578013] | OOB Standalone Security Update | [4.108] | Aug 19, 2020 |
+| Rel 22-09 | [5017396] | Servicing Stack Update | [5.72] | Sep 13, 2022 |
+| Rel 22-09 | [4494175] | Microcode | [5.72] | Sep 1, 2020 |
+| Rel 22-09 | 5015896 | Servicing Stack Update | [6.48] | Sep 1, 2020 |
+| Rel 22-09 | [5013626] | .NET Framework 4.8 Security and Quality Rollup LKG | [6.48] | May 10, 2022 |
[5017315]: https://support.microsoft.com/kb/5017315 [5016618]: https://support.microsoft.com/kb/5016618
The following tables show the Microsoft Security Response Center (MSRC) updates
[4494175]: https://support.microsoft.com/kb/4494175 [5015896]: https://support.microsoft.com/kb/5015896 [5013626]: https://support.microsoft.com/kb/5013626
+[2.128]: ./cloud-services-guestos-update-matrix.md#family-2-releases
+[3.115]: ./cloud-services-guestos-update-matrix.md#family-3-releases
+[4.108]: ./cloud-services-guestos-update-matrix.md#family-4-releases
+[5.72]: ./cloud-services-guestos-update-matrix.md#family-5-releases
+[6.48]: ./cloud-services-guestos-update-matrix.md#family-6-releases
+[7.16]: ./cloud-services-guestos-update-matrix.md#family-7-releases
## August 2022 Guest OS
cloud-services Cloud Services Guestos Update Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-update-matrix.md
na Previously updated : 9/02/2022 Last updated : 9/29/2022 # Azure Guest OS releases and SDK compatibility matrix
Unsure about how to update your Guest OS? Check [this][cloud updates] out.
## News updates +
+###### **September 29, 2022**
+The September Guest OS has released.
+ ###### **September 2, 2022** The August Guest OS has released.
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-7.16_202209-01 | September 29, 2022 | Post 7.18 |
| WA-GUEST-OS-7.15_202208-01 | September 2, 2022 | Post 7.17 |
-| WA-GUEST-OS-7.14_202207-01 | August 3, 2022 | Post 7.16 |
-|~~WA-GUEST-OS-7.13_202206-01~| July 11, 2022 | September 2, 2022 |
+|~~WA-GUEST-OS-7.14_202207-01~~| August 3, 2022 | September 29, 2022 |
+|~~WA-GUEST-OS-7.13_202206-01~~| July 11, 2022 | September 2, 2022 |
|~~WA-GUEST-OS-7.12_202205-01~~| May 26, 2022 | August 3, 2022 | |~~WA-GUEST-OS-7.11_202204-01~~| April 30, 2022 | July 11, 2022 | |~~WA-GUEST-OS-7.10_202203-01~~| March 19, 2022 | May 26, 2022 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-6.48_202209-01 | September 29, 2022 | Post 6.50 |
| WA-GUEST-OS-6.47_202208-01 | September 2, 2022 | Post 6.49 |
-| WA-GUEST-OS-6.46_202207-01 | August 3, 2022 | Post 6.48 |
-|~~WA-GUEST-OS-6.45_202206-01~| July 11, 2022 | September 2, 2022 |
+|~~WA-GUEST-OS-6.46_202207-01~~| August 3, 2022 | September 29, 2022 |
+|~~WA-GUEST-OS-6.45_202206-01~~| July 11, 2022 | September 2, 2022 |
|~~WA-GUEST-OS-6.44_202205-01~~| May 26, 2022 | August 3, 2022 | |~~WA-GUEST-OS-6.43_202204-01~~| April 30, 2022 | July 11, 2022 | |~~WA-GUEST-OS-6.42_202203-01~~| March 19, 2022 | May 26, 2022 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-5.72_202209-01 | September 29, 2022 | Post 5.74 |
| WA-GUEST-OS-5.71_202208-01 | September 2, 2022 | Post 5.73 |
-| WA-GUEST-OS-5.70_202207-01 | August 3, 2022 | Post 5.72 |
+|~~WA-GUEST-OS-5.70_202207-01~~| August 3, 2022 | September 29, 2022 |
|~~WA-GUEST-OS-5.69_202206-01~~| July 11, 2022 | September 2, 2022 | |~~WA-GUEST-OS-5.68_202205-01~~| May 26, 2022 | August 3, 2022 | |~~WA-GUEST-OS-5.67_202204-01~~| April 30, 2022 | July 11, 2022 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-4.108_202209-01 | September 29, 2022 | Post 4.110 |
| WA-GUEST-OS-4.107_202208-01 | September 2, 2022 | Post 4.109 |
-| WA-GUEST-OS-4.106_202207-02 | August 3, 2022 | Post 4.108 |
+|~~WA-GUEST-OS-4.106_202207-02~~| August 3, 2022 | September 29, 2022 |
|~~WA-GUEST-OS-4.105_202206-02~~| July 11, 2022 | September 2, 2022 | |~~WA-GUEST-OS-4.103_202205-01~~| May 26, 2022 | August 2, 2022 | |~~WA-GUEST-OS-4.102_202204-01~~| April 30, 2022 | July 11, 2022 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-3.115_202209-01 | September 29, 2022 | Post 3.117 |
| WA-GUEST-OS-3.114_202208-01 | September 2, 2022 | Post 3.116 |
-| WA-GUEST-OS-3.113_202207-02 | August 3, 2022 | Post 3.115 |
+|~~WA-GUEST-OS-3.113_202207-02~~| August 3, 2022 | September 29, 2022 |
|~~WA-GUEST-OS-3.112_202206-02~~| July 11, 2022 | September 2, 2022 | |~~WA-GUEST-OS-3.110_202205-01~~| May 26, 2022 | August 3, 2022 | |~~WA-GUEST-OS-3.109_202204-01~~| April 30, 2022 | July 11, 2022 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-2.128_202209-01 | September 29, 2022 | Post 2.130 |
| WA-GUEST-OS-2.127_202208-01 | September 2, 2022 | Post 2.129 |
-| WA-GUEST-OS-2.126_202207-02 | August 3, 2022 | Post 2.128 |
+|~~WA-GUEST-OS-2.126_202207-02~~| August 3, 2022 | September 29, 2022 |
|~~WA-GUEST-OS-2.125_202206-02~~| July 11, 2022 | September 2, 2022 | |~~WA-GUEST-OS-2.123_202205-01~~| May 26, 2022 | August 3, 2022 | |~~WA-GUEST-OS-2.122_202204-01~~| April 30, 2022 | July 11, 2022 |
cognitive-services Get Started Build Detector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/get-started-build-detector.md
In your web browser, navigate to the [Custom Vision web page](https://customvisi
![The new project dialog box has fields for name, description, and domains.](./media/get-started-build-detector/new-project.png)
-1. Enter a name and a description for the project. Then select a Resource Group. If your signed-in account is associated with an Azure account, the Resource Group dropdown will display all of your Azure Resource Groups that include a Custom Vision Service Resource.
+1. Enter a name and a description for the project. Then select your Custom Vision Training Resource. If your signed-in account is associated with an Azure account, the Resource dropdown will display all of your compatible Azure resources.
> [!NOTE]
- > If no resource group is available, please confirm that you have logged into [customvision.ai](https://customvision.ai) with the same account as you used to log into the [Azure portal](https://portal.azure.com/). Also, please confirm you have selected the same "Directory" in the Custom Vision website as the directory in the Azure portal where your Custom Vision resources are located. In both sites, you may select your directory from the drop down account menu at the top right corner of the screen.
+ > If no resource is available, please confirm that you have logged into [customvision.ai](https://customvision.ai) with the same account as you used to log into the [Azure portal](https://portal.azure.com/). Also, please confirm you have selected the same "Directory" in the Custom Vision website as the directory in the Azure portal where your Custom Vision resources are located. In both sites, you may select your directory from the drop down account menu at the top right corner of the screen.
+1. Under
1. Select __Object Detection__ under __Project Types__. 1. Next, select one of the available domains. Each domain optimizes the detector for specific types of images, as described in the following table. You can change the domain later if you want to.
cognitive-services Getting Started Build A Classifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/getting-started-build-a-classifier.md
In your web browser, navigate to the [Custom Vision web page](https://customvisi
![The new project dialog box has fields for name, description, and domains.](./media/getting-started-build-a-classifier/new-project.png)
-1. Enter a name and a description for the project. Then select a Resource Group. If your signed-in account is associated with an Azure account, the Resource Group dropdown will display all of your Azure Resource Groups that include a Custom Vision Service Resource.
+1. Enter a name and a description for the project. Then select your Custom Vision Training Resource. If your signed-in account is associated with an Azure account, the Resource dropdown will display all of your compatible Azure resources.
> [!NOTE]
- > If no resource group is available, please confirm that you have logged into [customvision.ai](https://customvision.ai) with the same account as you used to log into the [Azure portal](https://portal.azure.com/). Also, please confirm you have selected the same "Directory" in the Custom Vision web portal as the directory in the Azure portal where your Custom Vision resources are located. In both sites, you may select your directory from the drop down account menu at the top right corner of the screen.
+ > If no resource is available, please confirm that you have logged into [customvision.ai](https://customvision.ai) with the same account as you used to log into the [Azure portal](https://portal.azure.com/). Also, please confirm you have selected the same "Directory" in the Custom Vision website as the directory in the Azure portal where your Custom Vision resources are located. In both sites, you may select your directory from the drop down account menu at the top right corner of the screen.
1. Select __Classification__ under __Project Types__. Then, under __Classification Types__, choose either **Multilabel** or **Multiclass**, depending on your use case. Multilabel classification applies any number of your tags to an image (zero or more), while multiclass classification sorts images into single categories (every image you submit will be sorted into the most likely tag). You'll be able to change the classification type later if you want to.
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md
Each prebuilt neural voice supports a specific language and dialect, identified
> [!IMPORTANT] > Pricing varies for Prebuilt Neural Voice (see *Neural* on the pricing page) and Custom Neural Voice (see *Custom Neural* on the pricing page). For more information, see the [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) page.
-Prebuilt neural voices are created from samples that use a 24-khz sample rate. All voices can upsample or downsample to other sample rates when synthesizing.
+Each prebuilt neural voice model is available at 24kHz and high-fidelity 48kHz. Other sample rates can be obtained through upsampling or downsampling when synthesizing.
Please note that the following neural voices are retired.
cognitive-services Long Audio Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/long-audio-api.md
We support flexible audio output formats. You can generate audio outputs per par
> [!NOTE] > The default audio format is riff-16khz-16bit-mono-pcm.
+>
+> The sample rate for long audio voices is 24kHz, not 48kHz. Other sample rates can be obtained through upsampling or downsampling when synthesizing.
* riff-8khz-16bit-mono-pcm * riff-16khz-16bit-mono-pcm
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/releasenotes.md
See below for information about changes to Speech services and resources.
* Speech SDK 1.23.0 and Speech CLI 1.23.0 were released in July 2022. See details below. * Custom speech-to-text container v3.1.0 released in March 2022, with support to get display models.
-* TTS Service July 2022, new voices in Public Preview and new viseme feature blend shapes were released. See details below.
+* TTS Service: In August 2022, five new voices were released in public preview.
+* TTS Service: In September 2022, all prebuilt neural voices were upgraded to high-fidelity voices with a 48kHz sample rate.
## Release notes
cognitive-services Rest Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/rest-text-to-speech.md
If the HTTP status is `200 OK`, the body of the response contains an audio file
## Audio outputs
-The supported streaming and non-streaming audio formats are sent in each request as the `X-Microsoft-OutputFormat` header. Each format incorporates a bit rate and encoding type. The Speech service supports 48-kHz, 24-kHz, 16-kHz, and 8-kHz audio outputs. Prebuilt neural voices are created from samples that use a 24-khz sample rate. All voices can upsample or downsample to other sample rates when synthesizing.
+The supported streaming and non-streaming audio formats are sent in each request as the `X-Microsoft-OutputFormat` header. Each format incorporates a bit rate and encoding type. The Speech service supports 48-kHz, 24-kHz, 16-kHz, and 8-kHz audio outputs. Each prebuilt neural voice model is available at 24kHz and high-fidelity 48kHz.
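To illustrate how the output format is selected, here's a minimal Python sketch that calls the text-to-speech REST endpoint and requests the 48-kHz output format; the region, key, and voice name are placeholder values, and you'd substitute your own Speech resource details.

```python
import requests

# Placeholder values: substitute your own Speech resource region and key.
region = "eastus"
subscription_key = "<your-speech-key>"

url = f"https://{region}.tts.speech.microsoft.com/cognitiveservices/v1"
headers = {
    "Ocp-Apim-Subscription-Key": subscription_key,
    "Content-Type": "application/ssml+xml",
    # Request the high-fidelity 48-kHz output format described above.
    "X-Microsoft-OutputFormat": "riff-48khz-16bit-mono-pcm",
    "User-Agent": "tts-sample",
}
ssml = (
    "<speak version='1.0' xml:lang='en-US'>"
    "<voice name='en-US-JennyNeural'>Hello, world.</voice>"
    "</speak>"
)

response = requests.post(url, headers=headers, data=ssml.encode("utf-8"))
response.raise_for_status()

# The response body is the synthesized audio in the requested format.
with open("output-48khz.wav", "wb") as audio_file:
    audio_file.write(response.content)
```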
#### [Streaming](#tab/streaming)
riff-48khz-16bit-mono-pcm
*** > [!NOTE]
-> en-US-AriaNeural, en-US-JennyNeural and zh-CN-XiaoxiaoNeural are available in public preview in 48Khz output. Other voices support 24khz upsampled to 48khz output.
-
-> [!NOTE]
+> If you select the 48kHz output format, the high-fidelity 48kHz voice model is invoked. Sample rates other than 24kHz and 48kHz can be obtained through upsampling or downsampling when synthesizing; for example, 44.1kHz is downsampled from 48kHz.
+>
> If your selected voice and output format have different bit rates, the audio is resampled as necessary. You can decode the `ogg-24khz-16bit-mono-opus` format by using the [Opus codec](https://opus-codec.org/downloads/). ## Next steps
cognitive-services Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/text-to-speech.md
Here's more information about neural text-to-speech features in the Speech servi
* **Asynchronous synthesis of long audio**: Use the [Long Audio API](long-audio-api.md) to asynchronously synthesize text-to-speech files longer than 10 minutes (for example, audio books or lectures). Unlike synthesis performed via the Speech SDK or speech-to-text REST API, responses aren't returned in real time. The expectation is that requests are sent asynchronously, responses are polled for, and synthesized audio is downloaded when the service makes it available.
-* **Prebuilt neural voices**: Microsoft neural text-to-speech capability uses deep neural networks to overcome the limits of traditional speech synthesis with regard to stress and intonation in spoken language. Prosody prediction and voice synthesis happen simultaneously, which results in more fluid and natural-sounding outputs. You can use neural voices to:
+* **Prebuilt neural voices**: Microsoft neural text-to-speech capability uses deep neural networks to overcome the limits of traditional speech synthesis with regard to stress and intonation in spoken language. Prosody prediction and voice synthesis happen simultaneously, which results in more fluid and natural-sounding outputs. Each prebuilt neural voice model is available at 24kHz and high-fidelity 48kHz. You can use neural voices to:
- Make interactions with chatbots and voice assistants more natural and engaging. - Convert digital texts such as e-books into audiobooks.
cognitive-services Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/migrate.md
Previously updated : 07/27/2022 Last updated : 09/29/2022
Consider using one of the available quickstart articles to see the latest inform
## How do I migrate to the language service if I am using LUIS?
-If you're using Language Understanding (LUIS), you can [import your LUIS JSON file](../conversational-language-understanding/concepts/backwards-compatibility.md) to the new Conversational language understanding feature.
+If you're using Language Understanding (LUIS), you can [import your LUIS JSON file](../conversational-language-understanding/how-to/migrate-from-luis.md) to the new Conversational language understanding feature.
## How do I migrate to the language service if I am using QnA Maker?
cognitive-services Backwards Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/concepts/backwards-compatibility.md
- Title: Conversational Language Understanding backwards compatibility-
-description: Learn about backwards compatibility between LUIS and Conversational Language Understanding
------ Previously updated : 05/13/2022----
-# Backwards compatibility with LUIS applications
-
-You can reuse some of the content of your existing [LUIS](../../../LUIS/what-is-luis.md) applications in [conversational language understanding](../overview.md). When working with conversational language understanding projects, you can:
-* Create conversational language understanding conversation projects from LUIS application JSON files.
-* Create LUIS applications that can be connected to [orchestration workflow](../../orchestration-workflow/overview.md) projects.
-
-> [!NOTE]
-> This guide assumes you have created a Language resource. If you're getting started with the service, see the [quickstart article](../quickstart.md).
-
-## Import a LUIS application JSON file into Conversational Language Understanding
-
-### [Language Studio](#tab/studio)
-
-To import a LUIS application JSON file, click on the icon next to **Create a new project** and select **Import**. Then select the LUIS file. When you import a new project into Conversational Language Understanding, you can select an exported LUIS application JSON file, and the service will automatically create a project with the currently available features.
--
-### [REST API](#tab/rest-api)
----
-## Supported features
-When you import the LUIS JSON application into conversational language understanding, it will create a **Conversations** project with the following features will be selected:
-
-|**Feature**|**Notes**|
-|: - |: - |
-|Intents|All of your intents will be transferred as conversational language understanding intents with the same names.|
-|ML entities|All of your ML entities will be transferred as conversational language understanding entities with the same names. The labels will be persisted and used to train the Learned component of the entity. Structured ML entities will transfer over the lowest level subentities of the structure as different entities and apply their labels accordingly.|
-|Utterances|All of your LUIS utterances will be transferred as conversational language understanding utterances with their intent and entity labels. Structured ML entity labels will only consider the lowest level subentity labels, and all the top level entity labels will be ignored.|
-|Culture|The primary language of the Conversation project will be the LUIS app culture. If the culture is not supported, the importing will fail. |
-|List entities|All of your list entities will be transferred as conversational language understanding entities with the same names. The normalized values and synonyms of each list will be transferred as keys and synonyms in the list component for the conversational language understanding entity.|
-|Prebuilt entities|All of your prebuilt entities will be transferred as conversational language understanding entities with the same names. The conversational language understanding entity will have the relevant [prebuilt entities](entity-components.md#prebuilt-component) enabled if they are supported. |
-|Required entity features in ML entities|If you had a prebuilt entity or a list entity as a required feature to another ML entity, then the ML entity will be transferred as a conversational language understanding entity with the same name and its labels will apply. The conversational language understanding entity will include the required feature entity as a component. The [overlap method](entity-components.md#entity-options) will be set as "Exact Overlap" for the conversational language understanding entity.|
-|Non-required entity features in ML entities|If you had a prebuilt entity or a list entity as a non-required feature to another ML entity, then the ML entity will be transferred as a conversational language understanding entity with the same name and its ML labels will apply. If an ML entity was used as a feature to another ML entity, it will not be transferred over.|
-|Roles|All of your roles will be transferred as conversational language understanding entities with the same names. Each role will be its own conversational language understanding entity. The role's entity type will determine which component is populated for the role. Roles on prebuilt entities will transfer as conversational language understanding entities with the prebuilt entity component enabled and the role labels transferred over to train the Learned component. Roles on list entities will transfer as conversational language understanding entities with the list entity component populated and the role labels transferred over to train the Learned component. Roles on ML entities will be transferred as conversational language understanding entities with their labels applied to train the Learned component of the entity. |
-
-## Unsupported features
-
-When you import the LUIS JSON application into conversational language understanding, certain features will be ignored, but they will not block you from importing the application. The following features will be ignored:
-
-|**Feature**|**Notes**|
-|: - |: - |
-|Application Settings|The settings such as Normalize Punctuation, Normalize Diacritics, and Use All Training Data were meant to improve predictions for intents and entities. The new models in conversational language understanding are not sensitive to small changes such as punctuation and are therefore not available as settings.|
-|Features|Phrase list features and features to intents will all be ignored. Features were meant to introduce semantic understanding for LUIS that conversational language understanding can provide out of the box with its new models.|
-|Patterns|Patterns were used to cover for lack of quality in intent classification. The new models in conversational language understanding are expected to perform better without needing patterns.|
-|`Pattern.Any` Entities|`Pattern.Any` entities were used to cover for lack of quality in ML entity extraction. The new models in conversational language understanding are expected to perform better without needing `Pattern.Any` entities.|
-|Regex Entities| Not currently supported |
-|Structured ML Entities| Not currently supported |
-
-## Use a published LUIS application in orchestration workflow projects
-
-You can only connect to published LUIS applications that are owned by the same Language resource that you use for Conversational Language Understanding. You can change the authoring resource to a Language **S** resource in **West Europe** applications. See the [LUIS documentation](../../../luis/luis-how-to-azure-subscription.md#assign-luis-resources) for steps on assigning a different resource to your LUIS application. You can also export then import the LUIS applications into your Language resource. You must train and publish LUIS applications for them to appear in Conversational Language Understanding when you want to connect them to orchestration projects.
--
-## Next steps
-
-[Conversational Language Understanding overview](../overview.md)
cognitive-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/faq.md
Previously updated : 07/07/2022 Last updated : 09/29/2022
Unlike LUIS, you cannot label the same text as 2 different entities. Learned com
## Can I import a LUIS JSON file into conversational language understanding?
-Yes, you can [import any LUIS application](./concepts/backwards-compatibility.md) JSON file from the latest version in the service.
+Yes, you can [import any LUIS application](./how-to/migrate-from-luis.md) JSON file from the latest version in the service.
## Can I import a LUIS `.LU` file into conversational language understanding?
cognitive-services Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/create-project.md
Previously updated : 06/03/2022 Last updated : 09/29/2022
You can export a Conversational Language Understanding project as a JSON file at
That project can be reimported as a new project. If you import a project with the exact same name, it replaces the project's data with the newly imported project's data.
-If you have an existing LUIS application, you can _import_ the LUIS application JSON to Conversational Language Understanding directly, and it will create a Conversation project with all the pieces that are currently available: Intents, ML entities, and utterances. See [backwards compatibility with LUIS](../concepts/backwards-compatibility.md) for more information.
+If you have an existing LUIS application, you can _import_ the LUIS application JSON to Conversational Language Understanding directly, and it will create a Conversation project with all the pieces that are currently available: Intents, ML entities, and utterances. See [the LUIS migration article](../how-to/migrate-from-luis.md) for more information.
To import a project, click on the arrow button next to **Create a new project** and select **Import**, then select the LUIS or Conversational Language Understanding JSON file.
cognitive-services Migrate From Luis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/migrate-from-luis.md
+
+ Title: Conversational Language Understanding backwards compatibility
+
+description: Learn about backwards compatibility between LUIS and Conversational Language Understanding
++++++ Last updated : 09/08/2022++++
+# Migrate from Language Understanding (LUIS) to conversational language understanding (CLU)
+
+[Conversational language understanding (CLU)](../overview.md) is a cloud-based AI offering in Azure Cognitive Services for Language. It's the newest generation of [Language Understanding (LUIS)](../../../luis/what-is-luis.md) and offers backwards compatibility with previously created LUIS applications. CLU employs state-of-the-art machine learning intelligence to allow users to build a custom natural language understanding model for predicting intents and entities in conversational utterances.
+
+CLU offers the following advantages over LUIS:
+
+- Improved accuracy with state-of-the-art machine learning models for better intent classification and entity extraction.
+- Multilingual support for model learning and training.
+- Ease of integration with different CLU and [custom question answering](../../question-answering/overview.md) projects using [orchestration workflow](../../orchestration-workflow/overview.md).
+- The ability to add testing data within the experience using Language Studio and APIs for model performance evaluation prior to deployment.
+
+To get started, you can [create a new project](../quickstart.md?pivots=language-studio#create-a-conversational-language-understanding-project) or [migrate your LUIS application](#migrate-your-luis-applications).
+
+## Comparison between LUIS and CLU
+
+The following table presents a side-by-side comparison between the features of LUIS and CLU. It also highlights the changes to your LUIS application after migrating to CLU. Click on the linked concept to learn more about the changes.
+
+|LUIS features | CLU features | Post migration |
+|:-:|:-:|:--:|
+|Machine-learned and Structured ML entities| Learned [entity components](#how-are-entities-different-in-clu) |Machine-learned entities without subentities will be transferred as CLU entities. Structured ML entities will only transfer leaf nodes (lowest level subentities without their own subentities) as entities in CLU. The name of the entity in CLU will be the name of the subentity concatenated with the parent. For example, _Order.Size_|
+|List and prebuilt entities| List and prebuilt [entity components](#how-are-entities-different-in-clu) | List and prebuilt entities will be transferred as entities in CLU with a populated entity component based on the entity type.|
+|Regex and `Pattern.Any` entities| Not currently available | `Pattern.Any` entities will be removed. Regex entities will be removed.|
+|Single culture for each application|[Multilingual models](#how-is-conversational-language-understanding-multilingual) enable multiple languages for each project. |The primary language of your project will be set as your LUIS application culture. Your project can be trained to extend to different languages.|
+|Entity roles |[Roles](#how-are-entity-roles-transferred-to-clu) are no longer needed. | Entity roles will be transferred as entities.|
+|Settings for: normalize punctuation, normalize diacritics, normalize word form, use all training data |[Settings](#how-is-the-accuracy-of-clu-better-than-luis) are no longer needed. |Settings will not be transferred. |
+|Patterns and phrase list features|[Patterns and Phrase list features](#how-is-the-accuracy-of-clu-better-than-luis) are no longer needed. |Patterns and phrase list features will not be transferred. |
+|Entity features| Entity components| List or prebuilt entities added as features to an entity will be transferred as added components to that entity. [Entity features](#how-do-entity-features-get-transferred-in-clu) will not be transferred for intents. |
+|Intents and utterances| Intents and utterances |All intents and utterances will be transferred. Utterances will be labeled with their transferred entities. |
+|Application GUIDs |Project names| A project will be created for each migrating application with the application name. Any special characters in the application names will be removed in CLU.|
+|Versioning| Can only be stored [locally](#how-do-i-manage-versions-in-clu). | A project will be created for the selected application version. |
+|Evaluation using batch testing |Evaluation using testing sets | [Uploading your testing dataset](../how-to/tag-utterances.md#how-to-label-your-utterances) will be required.|
+|Role-Based Access Control (RBAC) for LUIS resources |Role-Based Access Control (RBAC) available for Language resources |Language resource RBAC must be [manually added after migration](../../concepts/role-based-access-control.md). |
+|Single training mode| Standard and advanced [training modes](#how-are-the-training-times-different-in-clu) | Training will be required after application migration. |
+|Two publishing slots and version publishing |Ten deployment slots with custom naming | Deployment will be required after the application's migration and training. |
+|LUIS authoring APIs and SDK support in .NET, Python, Java, and Node.js |[CLU Authoring REST APIs](/rest/api/language/conversational-analysis-authoring). | See the [quickstart article](../quickstart.md?pivots=rest-api) for more information on the CLU authoring APIs. [Refactoring](#do-i-have-to-refactor-my-code-if-i-migrate-my-applications-from-luis-to-clu) will be necessary to use the CLU authoring APIs. |
+|LUIS Runtime APIs and SDK support in .NET, Python, Java, and Node.js |[CLU Runtime APIs](/rest/api/language/conversation-analysis-runtime). CLU Runtime SDK support for [.NET](/dotnet/api/overview/azure/ai.language.conversations-readme-pre?view=azure-dotnet-preview&preserve-view=true) and [Python](/python/api/overview/azure/ai-language-conversations-readme?view=azure-python-preview&preserve-view=true). | See [how to call the API](../how-to/call-api.md#use-the-client-libraries-azure-sdk) for more information. [Refactoring](#do-i-have-to-refactor-my-code-if-i-migrate-my-applications-from-luis-to-clu) will be necessary to use the CLU runtime API response. |
+
+## Migrate your LUIS applications
+
+Use the following steps to migrate your LUIS application using either the LUIS portal or REST API.
+
+# [LUIS portal](#tab/luis-portal)
+
+## Migrate your LUIS applications using the LUIS portal
+
+Follow these steps to begin migration using the [LUIS Portal](https://www.luis.ai/):
+
+1. After logging into the LUIS portal, click the button on the banner at the top of the screen to launch the migration wizard. The migration will only copy your selected LUIS applications to CLU.
+
+ :::image type="content" source="../media/backwards-compatibility/banner.svg" alt-text="A screenshot showing the migration banner in the LUIS portal." lightbox="../media/backwards-compatibility/banner.svg":::
++
+ The migration overview tab provides a brief explanation of conversational language understanding and its benefits. Press Next to proceed.
+
+ :::image type="content" source="../media/backwards-compatibility/migration-overview.svg" alt-text="A screenshot showing the migration overview window." lightbox="../media/backwards-compatibility/migration-overview.svg":::
+
+1. Determine the Language resource that you wish to migrate your LUIS application to. If you have already created your Language resource, select your Azure subscription followed by your Language resource, and then click **Next**. If you don't have a Language resource, click the link to create a new Language resource. Afterwards, select the resource and click **Next**.
+
+ :::image type="content" source="../media/backwards-compatibility/select-resource.svg" alt-text="A screenshot showing the resource selection window." lightbox="../media/backwards-compatibility/select-resource.svg":::
+
+1. Select all your LUIS applications that you want to migrate, and specify each of their versions. Click **Next**. After selecting your application and version, you will be prompted with a message informing you of any features that won't be carried over from your LUIS application.
+
+ > [!NOTE]
+ > Special characters are not supported by conversational language understanding. Any special characters in your selected LUIS application names will be removed in your new migrated applications.
+
+ :::image type="content" source="../media/backwards-compatibility/select-applications.svg" alt-text="A screenshot showing the application selection window." lightbox="../media/backwards-compatibility/select-applications.svg":::
+
+1. Review your Language resource and LUIS applications selections. Click **Finish** to migrate your applications.
+
+1. A popup window will let you track the migration status of your applications. Applications that have not started migrating will have a status of **Not started**. Applications that have begun migrating will have a status of **In progress**, and once they have finished migrating their status will be **Succeeded**. A **Failed** application means that you must repeat the migration process. Once the migration has completed for all applications, select **Done**.
+
+ :::image type="content" source="../media/backwards-compatibility/migration-progress.svg" alt-text="A screenshot showing the application migration progress window." lightbox="../media/backwards-compatibility/migration-progress.svg":::
+
+1. After your applications have migrated, you can perform the following steps:
+
+ * [Train your model](../how-to/train-model.md?tabs=language-studio)
+ * [Deploy your model](../how-to/deploy-model.md?tabs=language-studio)
+ * [Call your deployed model](../how-to/call-api.md?tabs=language-studio)
+
+# [REST API](#tab/rest-api)
+
+## Migrate your LUIS applications using REST APIs
+
+Follow these steps to begin migration programmatically using the CLU Authoring REST APIs:
+
+1. Export your LUIS application in JSON format. You can use the [LUIS Portal](https://www.luis.ai/) to export your applications, or the [LUIS programmatic APIs](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c40).
+
+1. Submit a POST request using the following URL, headers, and JSON body to import your LUIS application into your CLU project. CLU doesn't support project names with special characters, so remove any special characters from the project name.
+
+ ### Request URL
+ ```rest
+ {ENDPOINT}/language/authoring/analyze-conversations/projects/{PROJECT-NAME}/:import?api-version={API-VERSION}&format=luis
+ ```
+
+ |Placeholder |Value | Example |
+ ||||
+ |`{ENDPOINT}` | The endpoint for authenticating your API request. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
+ |`{PROJECT-NAME}` | The name for your project. This value is case sensitive. | `myProject` |
+ |`{API-VERSION}` | The version of the API you're calling. The value referenced here is for the latest released [model version](../../concepts/model-lifecycle.md#choose-the-model-version-used-on-your-data). | `2022-05-01` |
+
+ ### Headers
+
+ Use the following header to authenticate your request.
+
+ |Key|Value|
+ |--|--|
+ |`Ocp-Apim-Subscription-Key`| The key to your resource. Used for authenticating your API requests.|
+
+ ### JSON body
+
+ Use the exported LUIS JSON data as your body.
+
+1. After your applications have migrated, you can perform the following steps:
+
+ * [Train your model](../how-to/train-model.md?tabs=language-studio)
+ * [Deploy your model](../how-to/deploy-model.md?tabs=language-studio)
+ * [Call your deployed model](../how-to/call-api.md?tabs=language-studio)
+++
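For reference, here's a hedged Python sketch of the import request described above using the `requests` library; the endpoint, key, project name, and the exported LUIS JSON file path are placeholders, and the assumption that the job status URL is returned in the `operation-location` header follows the usual pattern for these asynchronous authoring operations.

```python
import json
import requests

# Placeholder values: substitute your Language resource endpoint and key,
# a CLU project name without special characters, and your exported LUIS JSON.
endpoint = "https://<your-custom-subdomain>.cognitiveservices.azure.com"
api_key = "<your-language-resource-key>"
project_name = "myProject"
api_version = "2022-05-01"

url = (
    f"{endpoint}/language/authoring/analyze-conversations/projects/"
    f"{project_name}/:import?api-version={api_version}&format=luis"
)
headers = {
    "Ocp-Apim-Subscription-Key": api_key,
    "Content-Type": "application/json",
}

# The body is the LUIS application JSON exported earlier.
with open("luis-app.json", encoding="utf-8") as f:
    luis_app = json.load(f)

response = requests.post(url, headers=headers, json=luis_app)
response.raise_for_status()

# Import is an asynchronous operation; poll the URL returned in the
# operation-location header to track the job status.
print(response.status_code, response.headers.get("operation-location"))
```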
+## Frequently asked questions
+
+### Which LUIS JSON version is supported by CLU?
+
+CLU supports the model JSON version 7.0.0. If the JSON format is older, import it into LUIS first, and then export it from LUIS with the most recent version.
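If you're not sure which schema version an exported file uses, you can check it before importing. The sketch below assumes the exported file is saved locally and that it contains the `luis_schema_version` field that LUIS writes into exported application JSON.

```python
import json

# Inspect an exported LUIS application before importing it into CLU.
with open("luis-app.json", encoding="utf-8") as f:
    app = json.load(f)

version = app.get("luis_schema_version", "unknown")
if version != "7.0.0":
    print(f"Schema version {version}: re-export the app from LUIS to get 7.0.0.")
else:
    print("Schema version 7.0.0: ready to import into CLU.")
```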
+
+### How are entities different in CLU?
+
+In CLU, a single entity can have multiple entity components, which are different methods for extraction. Those components are then combined using rules that you can define. The available components are: learned (equivalent to ML entities in LUIS), list, and prebuilt.
+
+After migrating, your structured machine-learned leaf nodes and bottom-level subentities will be transferred to the new CLU model while all the parent entities and higher-level entities will be ignored. The name of the entity will be the bottom-level entity's name concatenated with its parent entity.
+
+#### Example:
+
+LUIS entity:
+
+* Pizza Order
+ * Topping
+ * Size
+
+Migrated LUIS entity in CLU:
+
+* Pizza Order.Topping
+* Pizza Order.Size
+
+For more information on entity components, see [Entity components](../concepts/entity-components.md).
+
+### How are entity roles transferred to CLU?
+
+Your roles will be transferred as distinct entities along with their labeled utterances. Each role's entity type will determine which entity component will be populated. For example, a list entity role will be transferred as an entity with the same name as the role, with a populated list component.
+
+### How is conversational language understanding multilingual?
+
+Conversational language understanding projects accept utterances in different languages. Furthermore, you can train your model in one language and extend it to predict in other languages.
+
+#### Example:
+
+Training utterance (English): *How are you?*
+
+Labeled intent: Greeting
+
+Runtime utterance (French): *Comment ça va?*
+
+Predicted intent: Greeting
+
+### How are entity confidence scores different in CLU?
+
+Any extracted entity has a 100% confidence score and therefore entity confidence scores should not be used to make decisions between entities.
+
+### How is the accuracy of CLU better than LUIS?
+
+CLU uses state-of-the-art models to enhance machine learning performance of different models of intent classification and entity extraction.
+
+These models are insensitive to minor variations, removing the need for the following settings: _Normalize punctuation_, _normalize diacritics_, _normalize word form_, and _use all training data_.
+
+Additionally, the new models do not support phrase list features as they no longer require supplementary information from the user to provide semantically similar words for better accuracy. Patterns were also used to provide improved intent classification using rule-based matching techniques that are not necessary in the new model paradigm.
+
+### How do I manage versions in CLU?
+
+Although CLU does not offer versioning, you can export your CLU projects using [Language Studio](https://language.cognitive.azure.com/home) or [programmatically](../how-to/fail-over.md#export-your-primary-project-assets) and store different versions of the assets locally.
+
+### Why is CLU classification different from LUIS? How does None classification work?
+
+CLU presents a different approach to training models by using multi-classification as opposed to binary classification. As a result, the interpretation of scores is different and also differs across training options. While you are likely to achieve better results, you have to observe the difference in scores and determine a new threshold for accepting intent predictions. You can easily add a confidence score threshold for the [None intent](../concepts/none-intent.md) in your project settings. This will return *None* as the top intent if the top intent did not exceed the confidence score threshold provided.
+
+### Do I need more data for CLU models than LUIS?
+
+The new CLU models have better semantic understanding of language than in LUIS, and in turn help make models generalize with a significant reduction of data. While you shouldn't aim to reduce the amount of data that you have, you should expect better performance and resilience to variations and synonyms in CLU compared to LUIS.
+
+### If I don't migrate my LUIS apps, will they be deleted?
+
+Your existing LUIS applications will be available until October 1, 2025. After that time you will no longer be able to use those applications, the service endpoints will no longer function, and the applications will be permanently deleted.
+
+### Are .LU files supported on CLU?
+
+Only JSON format is supported by CLU. You can import your .LU files to LUIS and export them in JSON format, or you can follow the migration steps above for your application.
+
+### What are the service limits of CLU?
+
+See the [service limits](../service-limits.md) article for more information.
+
+### Do I have to refactor my code if I migrate my applications from LUIS to CLU?
+
+The API objects of CLU applications are different from LUIS and therefore code refactoring will be necessary.
+
+If you are using the LUIS [programmatic](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c40) and [runtime](https://westus.dev.cognitive.microsoft.com/docs/services/luis-endpoint-api-v3-0/operations/5cb0a9459a1fe8fa44c28dd8) APIs, you can replace them with their equivalent APIs.
+
+[CLU authoring APIs](/rest/api/language/conversational-analysis-authoring): Instead of LUIS's specific CRUD APIs for individual actions such as _add utterance_, _delete entity_, and _rename intent_, CLU offers an [import API](/rest/api/language/conversational-analysis-authoring/import) that replaces the full content of a project using the same name. If your service used LUIS programmatic APIs to provide a platform for other customers, you must consider this new design paradigm. All other APIs such as: _listing projects_, _training_, _deploying_, and _deleting_ are available. APIs for actions such as _importing_ and _deploying_ are asynchronous operations instead of synchronous as they were in LUIS.
+
+[CLU runtime APIs](/rest/api/language/conversation-analysis-runtime/analyze-conversation): The new API request and response includes many of the same parameters such as: _query_, _prediction_, _top intent_, _intents_, _entities_, and their values. The CLU response object offers a more straightforward approach. Entity predictions are provided as they are within the utterance text, and any additional information such as resolution or list keys are provided in extra parameters called `extraInformation` and `resolution`. See the [reference documentation](/rest/api/language/conversation-analysis-runtime/analyze-conversation) for more information on the API response structure.
+
+You can use the [.NET](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.Language.Conversations_1.0.0-beta.3/sdk/cognitivelanguage/Azure.AI.Language.Conversations/samples/) or [Python](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-language-conversations_1.1.0b1/sdk/cognitivelanguage/azure-ai-language-conversations/samples/README.md) CLU runtime SDK to replace the LUIS runtime SDK. There is currently no authoring SDK available for CLU.
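As a rough illustration of the runtime response shape described above, here's a minimal sketch that uses the Python `azure-ai-language-conversations` package; the endpoint, key, project, and deployment names are placeholders, and the exact response fields may vary between SDK versions.

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.language.conversations import ConversationAnalysisClient

# Placeholder values for a deployed CLU project.
endpoint = "https://<your-custom-subdomain>.cognitiveservices.azure.com"
key = "<your-language-resource-key>"

client = ConversationAnalysisClient(endpoint, AzureKeyCredential(key))

result = client.analyze_conversation(
    task={
        "kind": "Conversation",
        "analysisInput": {
            "conversationItem": {
                "id": "1",
                "participantId": "user",
                "text": "Order a large pepperoni pizza",
            }
        },
        "parameters": {
            "projectName": "myProject",
            "deploymentName": "production",
        },
    }
)

prediction = result["result"]["prediction"]
print("Top intent:", prediction["topIntent"])
for entity in prediction["entities"]:
    # Extra details such as list keys appear under extraInformation/resolution.
    print(entity["category"], entity["text"], entity.get("extraInformation"))
```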
+
+### How are the training times different in CLU?
+
+CLU offers standard training, which trains and learns in English and is comparable to the training time of LUIS. It also offers advanced training, which takes a considerably longer duration as it extends the training to all other [supported languages](../language-support.md).
+
+### How can I link subentities to parent entities from my LUIS application in CLU?
+
+One way to implement the concept of subentities in CLU is to combine the subentities into different entity components within the same entity.
+
+#### Example:
+
+LUIS Implementation:
+
+* Pizza Order (entity)
+ * Size (subentity)
+ * Quantity (subentity)
+
+CLU Implementation:
+
+* Pizza Order (entity)
+ * Size (list entity component: small, medium, large)
+ * Quantity (prebuilt entity component: number)
+
+In CLU, you would label the entire span for _Pizza Order_ inclusive of the size and quantity, which would return the pizza order with a list key for size, and a number value for quantity in the same entity object.
+
+For more complex problems where entities contain several levels of depth, you can create a project for every two levels of depth in the entity structure. This gives you the option to:
+1. Pass the utterance to each project.
+1. Combine the analyses of each project in the stage that follows CLU.
+
+For a detailed example on this concept, check out the pizza bot sample available on [GitHub](https://github.com/Azure-Samples/cognitive-service-language-samples/tree/main/CoreBotWithCLU).
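To make those two steps concrete, here's a hedged sketch that sends the same utterance to two hypothetical projects and merges their entity lists; the project names are illustrative only, and the request shape follows the same pattern as the runtime example earlier in this article.

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.language.conversations import ConversationAnalysisClient


def analyze_with_projects(client, text, project_names, deployment="production"):
    """Send one utterance to several CLU projects and merge the extracted entities."""
    merged = []
    for project_name in project_names:
        result = client.analyze_conversation(
            task={
                "kind": "Conversation",
                "analysisInput": {
                    "conversationItem": {"id": "1", "participantId": "user", "text": text}
                },
                "parameters": {"projectName": project_name, "deploymentName": deployment},
            }
        )
        # Tag each entity with the project it came from before combining.
        for entity in result["result"]["prediction"]["entities"]:
            merged.append({"project": project_name, **entity})
    return merged


client = ConversationAnalysisClient(
    "https://<your-custom-subdomain>.cognitiveservices.azure.com",
    AzureKeyCredential("<your-language-resource-key>"),
)
# Illustrative project names, one per pair of levels in the entity structure.
entities = analyze_with_projects(
    client, "Order two large pepperoni pizzas", ["PizzaOrderTopLevel", "PizzaOrderDetails"]
)
```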
+
+### How do entity features get transferred in CLU?
+
+Entities used as features for intents will not be transferred. Entities used as features for other entities will populate the relevant component of the entity. For example, if a list entity named _SizeList_ was used as a feature to a machine-learned entity named _Size_, then the _Size_ entity will be transferred to CLU with the list values from _SizeList_ added to its list component.
+
+### How will my LUIS applications be named in CLU after migration?
+
+Any special characters in the LUIS application name will be removed. If the cleared name length is greater than 50 characters, the extra characters will be removed. If the name after removing special characters is empty (for example, if the LUIS application name was `@@`), the new name will be _untitled_. If there is already a conversational language understanding project with the same name, the migrated LUIS application will be appended with `_1` for the first duplicate and increase by 1 for each additional duplicate. If the new name is already 50 characters long and needs a duplicate suffix, the last 1 or 2 characters will be removed so the number can be appended while staying within the 50-character limit.
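The renaming rules above can be summarized in a short sketch. This is only an illustration of the described behavior under assumed rules for what counts as a special character, not the service's actual implementation.

```python
import re


def clu_project_name(luis_app_name, existing_names):
    """Illustrates the naming rules described above (not the service's own code)."""
    # Remove special characters; assume letters, digits, hyphens, and underscores are kept.
    name = re.sub(r"[^A-Za-z0-9_-]", "", luis_app_name)
    # Trim to the 50-character limit, and fall back to "untitled" if nothing is left.
    name = name[:50] or "untitled"
    # Append _1, _2, ... for duplicates, trimming so the result stays within 50 characters.
    candidate, suffix = name, 1
    while candidate in existing_names:
        tail = f"_{suffix}"
        candidate = name[: 50 - len(tail)] + tail
        suffix += 1
    return candidate


print(clu_project_name("@@", set()))           # untitled
print(clu_project_name("My App!", {"MyApp"}))  # MyApp_1
```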
+
+## Migration from LUIS Q&A
+
+If you have any questions that were unanswered in this article, consider leaving your questions at our [Microsoft Q&A thread](https://aka.ms/luis-migration-qna-thread).
+
+## Next steps
+* [Quickstart: create a CLU project](../quickstart.md)
+* [CLU language support](../language-support.md)
+* [CLU FAQ](../faq.md)
cognitive-services Integrate Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/key-phrase-extraction/tutorials/integrate-power-bi.md
In this tutorial, you'll learn how to:
## Load customer data
-To get started, open Power BI Desktop and load the comma-separated value (CSV) file `FabrikamComments.csv` that you downloaded in [Prerequisites](#prerequisites). This file represents a day's worth of hypothetical activity in a fictional small company's support forum.
+To get started, open Power BI Desktop and load the comma-separated value (CSV) file that you downloaded as part of the [prerequisites](#prerequisites). This file represents a day's worth of hypothetical activity in a fictional small company's support forum.
> [!NOTE] > Power BI can use data from a wide variety of web-based sources, such as SQL databases. See the [Power Query documentation](/power-query/connectors/) for more information.
In the main Power BI Desktop window, select the **Home** ribbon. In the **Extern
![The Get Data button](../media/tutorials/power-bi/get-data-button.png)
-The Open dialog appears. Navigate to your Downloads folder, or to the folder where you downloaded the `FabrikamComments.csv` file. Click `FabrikamComments.csv`, then the **Open** button. The CSV import dialog appears.
+The Open dialog appears. Navigate to your Downloads folder, or to the folder where you downloaded the CSV file. Select the file, and then select **Open**. The CSV import dialog appears.
![The CSV Import dialog](../media/tutorials/power-bi/csv-import.png)
communication-services Emergency Calling Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/emergency-calling-concept.md
Emergency dialing is automatically enabled for all users of the Azure Communicat
The Emergency service is temporarily free to use for Azure Communication Services customers within reasonable use; however, billing for the service will be enabled in 2022. Calls to 911 are capped at 10 concurrent calls per Azure resource.
+## Emergency calling with Azure Communication Services direct routing
+
+An emergency call is a regular call from a direct routing perspective. If you want to implement emergency calling with Azure Communication Services direct routing, make sure there's a voice routing rule for your emergency number (911, 112, and so on), and make sure that your carrier processes emergency calls properly.
+You can also use a purchased number as the caller ID for direct routing calls. In that case, if there's no voice routing rule for the emergency number, the call falls back to the Microsoft network and is treated as a regular emergency call. Learn more about [voice routing fallback](./direct-routing-provisioning.md#voice-routing-considerations).
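
For illustration, the following sketch adds a voice route that matches common emergency numbers and sends them to your SBC, using the SIP routing client in the `azure-communication-phonenumbers` Python package. The connection string, SBC FQDN, and number pattern are placeholders; confirm the pattern and carrier behavior for your own deployment.

```python
from azure.communication.phonenumbers.siprouting import SipRoutingClient, SipTrunk, SipTrunkRoute

# Placeholder: the connection string of your Communication Services resource.
client = SipRoutingClient.from_connection_string("<acs-connection-string>")

# Register the SBC (trunk), then add a route whose pattern matches the
# emergency numbers you need (911, 112, and so on).
client.set_trunks([SipTrunk(fqdn="sbc.contoso.com", sip_signaling_port=5061)])
client.set_routes([
    SipTrunkRoute(
        name="EmergencyCalls",
        description="Send 911/112 to the carrier through the SBC",
        number_pattern=r"^\+?(911|112)$",
        trunks=["sbc.contoso.com"],
    )
])
```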
+ ## Next steps ### Quickstarts
communication-services Inbound Calling Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/inbound-calling-capabilities.md
Inbound PSTN calling is currently supported in GA for Dynamics Omnichannel. You
**Inbound calling with Dynamics 365 Omnichannel (OC)**
- Supported in General Availability, to set up inbound calling for Dynamics 365 OC with direct routing or Voice Calling (PSTN) follow [these instructions](/dynamics365/customer-service/voice-channel-inbound-calling)
+Supported in General Availability. To set up inbound calling for Dynamics 365 OC with direct routing or Voice Calling (PSTN), follow [these instructions](/dynamics365/customer-service/voice-channel-inbound-calling).
- **Inbound calling with Power Virtual Agents**
+**Inbound calling with Power Virtual Agents**
- *Coming soon*
+*Coming soon*
-**Inbound calling with ACS Client Calling SDK**
+**Inbound calling with ACS Call Automation SDK**
-*Coming soon*
+[Available in private preview](../voice-video-calling/call-automation.md)
**Inbound calling with Azure Bot Framework**
communication-services Known Limitations Acs Telephony https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/known-limitations-acs-telephony.md
+
+ Title: Azure direct routing known limitations - Azure Communication Services
+description: Known limitations of direct routing in Azure Communication Services.
+ Last updated: 09/29/2022
+# Known limitations in Azure telephony
+
+This article provides information about limitations and known issues related to telephony in Azure Communication Services.
+
+## Azure Communication Services direct routing known limitations
+
+- Anonymous calling isn't supported
+ - will be fixed in GA release
+- A different set of Media Processors (MPs) with different IP addresses is used. Currently, [any Azure IP address](./direct-routing-infrastructure.md#media-traffic-ip-and-port-ranges) can be used for the media connection between the Azure MP and the Session Border Controller (SBC).
+ - will be fixed in GA release
+- The Azure Communication Services SBC fully qualified domain name (FQDN) must be different from the Teams Direct Routing SBC FQDN
+- Wildcard SBC certificates require an extra workaround. Contact Azure support for details.
+ - will be fixed in GA release
+- Media bypass/optimization isn't supported
+- No indication of SBC connection status or details in the Azure portal
+ - will be fixed in GA release
+- Azure Communication Services direct routing isn't available in Government Clouds
+- Multi-tenant trunks aren't supported
+- Location-based routing isn't supported
+- No quality dashboard is available for customers
+- Enhanced 911 isn't supported
+- PSTN numbers are missing from Call Summary logs
+
+## Next steps
+
+### Conceptual documentation
+
+- [Phone number types in Azure Communication Services](./plan-solution.md)
+- [Plan for Azure direct routing](./direct-routing-infrastructure.md)
+- [Pair the Session Border Controller and configure voice routing](./direct-routing-provisioning.md)
+- [Pricing](../pricing.md)
+
+### Quickstarts
+
+- [Call to Phone](../../quickstarts/telephony/pstn-call.md)
connectors Connectors Create Api Azureblobstorage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-azureblobstorage.md
Title: Connect to Azure Blob Storage
-description: Create workflows that manage blobs in Azure storage accounts using Azure Logic Apps.
+ Title: Connect to Azure Blob Storage from workflows
+description: Connect to Azure Blob Storage from workflows using Azure Logic Apps.
ms.suite: integration Previously updated : 08/19/2022 Last updated : 09/14/2022 tags: connectors
-# Create and manage blobs in Azure Blob Storage by using Azure Logic Apps
+# Connect to Azure Blob Storage from workflows in Azure Logic Apps
-From your workflow in Azure Logic Apps, you can access and manage files stored as blobs in your Azure storage account by using the [Azure Blob Storage connector](/connectors/azureblobconnector/). This connector provides triggers and actions that your workflow can use for blob operations. You can then automate tasks to manage files in your storage account. For example, [connector actions](/connectors/azureblobconnector/#actions) include checking, deleting, reading, and uploading blobs. The [available trigger](/connectors/azureblobconnector/#triggers) fires when a blob is added or modified.
-You can connect to Blob Storage from both **Logic App (Consumption)** and **Logic App (Standard)** resource types. You can use the connector with logic app workflows in multi-tenant Azure Logic Apps, single-tenant Azure Logic Apps, and the integration service environment (ISE). With **Logic App (Standard)**, you can use either the *built-in* **Azure Blob** operations or the **Azure Blob Storage** managed connector operations.
+This article shows how to access your Azure Blob Storage account and container from a workflow in Azure Logic Apps using the Azure Blob Storage connector. This connector provides triggers and actions that your workflow can use for blob operations. You can then create automated workflows that run when triggered by events in your storage container or in other systems, and run actions to work with data in your storage container.
-## Prerequisites
+For example, you can access and manage files stored as blobs in your Azure storage account.
-- An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+You can connect to Azure Blob Storage from a workflow in **Logic App (Consumption)** and **Logic App (Standard)** resource types. You can use the connector with logic app workflows in multi-tenant Azure Logic Apps, single-tenant Azure Logic Apps, and the integration service environment (ISE). With **Logic App (Standard)**, you can use either the **Azure Blob** *built-in* connector operations or the **Azure Blob Storage** managed connector operations.
-- An [Azure storage account and storage container](../storage/blobs/storage-quickstart-blobs-portal.md)
+## Connector technical reference
-- A logic app workflow from which you want to access your Azure Storage account. If you want to start your workflow with a Blob trigger, you need a [blank logic app workflow](../logic-apps/quickstart-create-first-logic-app-workflow.md).
+The Azure Blob Storage connector has different versions, based on [logic app type and host environment](../logic-apps/logic-apps-overview.md#resource-environment-differences).
-## Limits
+| Logic app | Environment | Connector version |
+|--|-|-|
+| **Consumption** | Multi-tenant Azure Logic Apps | Managed connector (Standard class). For more information, review the following documentation: <br><br>- [Azure Blob Storage managed connector reference](/connectors/azureblobconnector) <br>- [Managed connectors in Azure Logic Apps](managed.md) |
+| **Consumption** | Integration service environment (ISE) | Managed connector (Standard class) and ISE version, which has different message limits than the Standard class. For more information, review the following documentation: <br><br>- [Azure Blob Storage managed connector reference](/connectors/azureblobconnector) <br>- [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) <br>- [Managed connectors in Azure Logic Apps](managed.md) |
+| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | Managed connector (Azure-hosted) and built-in connector, which is [service provider based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). The built-in version differs in the following ways: <br><br>- The built-in version connects directly to your Azure Storage account requiring only a connection string. <br><br>- The built-in version can directly access Azure virtual networks. <br><br>For more information, review the following documentation: <br><br>- [Azure Blob Storage managed connector reference](/connectors/azureblobconnector) <br>- [Azure Blob built-in connector reference](#built-in-connector-operations) section later in this article <br>- [Built-in connectors in Azure Logic Apps](built-in.md) |
+
+## Limitations
- For logic app workflows running in an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md), this connector's ISE-labeled version uses the [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) instead. -- By default, Blob actions can read or write files that are *50 MB or smaller*. To handle files larger than 50 MB but up to 1024 MB, Blob actions support [message chunking](../logic-apps/logic-apps-handle-large-messages.md). The [**Get blob content** action](/connectors/azureblobconnector/#get-blob-content) implicitly uses chunking.
+- By default, Azure Blob Storage managed connector actions can read or write files that are *50 MB or smaller*. To handle files larger than 50 MB but up to 1024 MB, Blob actions support [message chunking](../logic-apps/logic-apps-handle-large-messages.md). The [**Get blob content** action](/connectors/azureblobconnector/#get-blob-content) implicitly uses chunking.
+
+- Azure Blob Storage triggers don't support chunking. When a trigger requests file content, the trigger selects only files that are 50 MB or smaller. To get files larger than 50 MB, follow this pattern (sketched in code after these steps):
-- Blob triggers don't support chunking. When a trigger requests file content, the trigger selects only files that are 50 MB or smaller. To get files larger than 50 MB, follow this pattern:
+ 1. Use a Blob trigger that returns file properties, such as [**When a blob is added or modified (properties only)**](/connectors/azureblobconnector/#when-a-blob-is-added-or-modified-(properties-only)).
+
+ 1. Follow the trigger with the Azure Blob Storage managed connector action named [**Get blob content**](/connectors/azureblobconnector/#get-blob-content), which reads the complete file and implicitly uses chunking.
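
The same properties-first pattern looks like this outside Logic Apps, sketched with the `azure-storage-blob` Python SDK (the connection string and container name are placeholders). The SDK streams large downloads in chunks, which mirrors what the **Get blob content** action does implicitly.

```python
from azure.storage.blob import ContainerClient

# Placeholders: your storage account connection string and container name.
container = ContainerClient.from_connection_string("<connection-string>", "<container-name>")

for props in container.list_blobs():             # properties only, like the "(properties only)" trigger
    if props.size > 50 * 1024 * 1024:            # larger than the 50 MB non-chunked limit
        downloader = container.download_blob(props.name)  # content is streamed in chunks
        data = downloader.readall()
        print(f"{props.name}: downloaded {len(data)} bytes")
```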
+
+## Prerequisites
- - Use a Blob trigger that returns file properties, such as [**When a blob is added or modified (properties only)**](/connectors/azureblobconnector/#when-a-blob-is-added-or-modified-(properties-only)).
+- An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- - Follow the trigger with the Blob action named [**Get blob content**](/connectors/azureblobconnector/#get-blob-content), which reads the complete file and implicitly uses chunking.
+- An [Azure storage account and blob container](../storage/blobs/storage-quickstart-blobs-portal.md)
-## Connector reference
+- A logic app workflow from which you want to access your Azure Storage account. If you want to start your workflow with an Azure Blob Storage trigger, you need a [blank logic app workflow](../logic-apps/quickstart-create-first-logic-app-workflow.md).
-For more technical details about this connector, such as triggers, actions, and limits, review the [connector's reference page](/connectors/azureblobconnector/).
+- The logic app workflow where you connect to your Azure Storage account. To start your workflow with an Azure Blob trigger, you have to start with a blank workflow. To use an Azure Blob action in your workflow, start your workflow with any trigger.
<a name="add-trigger"></a> ## Add a Blob trigger
-In Azure Logic Apps, every workflow must start with a [trigger](../logic-apps/logic-apps-overview.md#logic-app-concepts), which fires when a specific event happens or when a specific condition is met.
+A Consumption logic app workflow can use only the Azure Blob Storage managed connector. However, a Standard logic app workflow can use the Azure Blob Storage managed connector and the Azure Blob built-in connector. Although both connector versions have only one Blob trigger, the trigger name differs as follows, based on whether you're working with a Consumption or Standard workflow:
-Only one Blob trigger exists and has either of the following names, based on whether you're working with a Consumption or Standard logic app workflow:
+| Logic app | Connector version | Trigger name | Description |
+|--|-|--|-|
+| Consumption | Managed connector only | **When a blob is added or modified (properties only)** | The trigger fires when a blob's properties are added or updated in your storage container's root folder. When you set up the managed trigger, the managed version ignores existing blobs in your storage container. |
+| Standard | - Built-in connector <br><br>- Managed connector | - Built-in: **When a blob is added or updated** <br><br>- Managed: **When a blob is added or modified (properties only)** | - Built-in: The trigger fires when a blob is added or updated in your storage container, and fires for any nested folders in your storage container, not just the root folder. When you set up the built-in trigger, the built-in version processes all existing blobs in your storage container. <br><br>- Managed: The trigger fires when a blob's properties are added or updated in your storage container's root folder. When you set up the managed trigger, the managed version ignores existing blobs in your storage container. |
-| Logic app type | Trigger name | Description |
-|-|--|-|
-| Consumption | Managed connector only: **When a blob is added or modified (properties only)** | The trigger fires when a blob's properties are added or updated in your storage container's root folder. |
-| Standard | - Built-in: **When a blob is Added or Modified in Azure Storage** <br><br>- Managed connector: **When a blob is added or modified (properties only)** | - Built-in: The trigger fires when a blob is added or updated in your storage container. The trigger also fires for any nested folders in your storage container, not just the root folder. <br><br>- Managed connector: The trigger fires when a blob's properties are added or updated in your storage container's root folder. |
-||||
+The following steps use the Azure portal, but with the appropriate Azure Logic Apps extension, you can also use the following tools to create logic app workflows:
-> [!IMPORTANT]
-> When you set up the Blob trigger, the built-in version processes all existing blobs in the container, while the managed version ignores existing blobs in the container.
+- Consumption logic app workflows: [Visual Studio](../logic-apps/quickstart-create-logic-apps-with-visual-studio.md) or [Visual Studio Code](../logic-apps/quickstart-create-logic-apps-visual-studio-code.md)
-When the trigger fires each time, Azure Logic Apps creates a logic app instance and starts running the workflow.
+- Standard logic app workflows: [Visual Studio Code](../logic-apps/create-single-tenant-workflows-visual-studio-code.md)
### [Consumption](#tab/consumption)
-To add a Blob trigger to a logic app workflow in multi-tenant Azure Logic Apps, follow these steps:
+1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
-1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
+1. On the designer, under the search box, select **Standard**. In the search box, enter **Azure blob**.
+
+1. From the triggers list, select the trigger that you want.
+
+ This example continues with the trigger named **When a blob is added or modified (properties only)**.
+
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/consumption-add-trigger.png" alt-text="Screenshot showing Azure portal, Consumption workflow designer, and Azure Blob Storage trigger selected.":::
+
+1. If prompted, provide the following information for your connection to your storage account. When you're done, select **Create**.
+
+ | Property | Required | Description |
+ |-|-|-|
+ | **Connection name** | Yes | A name for your connection |
+ | **Authentication Type** | Yes | The [authentication type](../storage/common/authorize-data-access.md) for your storage account. For more information, review [Authentication types for triggers and actions that support authentication - Secure access and data](../logic-apps/logic-apps-securing-a-logic-app.md#authentication-types-supported-triggers-actions). |
-1. Under the designer search box, make sure that **All** is selected. In the search box, enter **Azure blob**. From the **Triggers** list, select the trigger named **When a blob is added or modified (properties only)**.
+ For example, this connection uses access key authentication and provides the access key value for the storage account along with the following property values:
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/consumption-trigger-add.png" alt-text="Screenshot showing Azure portal and workflow designer with a Consumption logic app and the trigger named 'When a blob is added or modified (properties only)' selected.":::
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Azure Storage Account name** | Yes, <br>but only for access key authentication | <*storage-account-name*> | The name for the Azure storage account where your blob container exists. <br><br>**Note**: To find the storage account name, open your storage account resource in the Azure portal. In the resource menu, under **Security + networking**, select **Access keys**. Under **Storage account name**, copy and save the name. |
+ | **Azure Storage Account Access Key** | Yes, <br>but only for access key authentication | <*storage-account-access-key*> | The access key for your Azure storage account. <br><br>**Note**: To find the access key, open your storage account resource in the Azure portal. In the resource menu, under **Security + networking**, select **Access keys** > **key1** > **Show**. Copy and save the primary key value. |
-1. If you're prompted for connection details, [create a connection to your Azure Blob Storage account](#connect-blob-storage-account).
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/consumption-trigger-create-connection.png" alt-text="Screenshot showing Consumption workflow, Azure Blob Storage trigger, and example connection information.":::
-1. Provide the necessary information for the trigger.
+1. After the trigger information box appears, provide the necessary information.
- 1. For the **Container** property value, select the folder icon to browse for your blob storage container. Or, enter the path manually using the syntax **/<*container-name*>**, for example:
+ For the **Container** property value, select the folder icon to browse for your blob container. Or, enter the path manually using the syntax **/<*container-name*>**, for example:
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/consumption-trigger-configure.png" alt-text="Screenshot showing Azure Blob trigger with parameters configuration.":::
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/consumption-trigger-information.png" alt-text="Screenshot showing Consumption workflow with Azure Blob Storage trigger, and example trigger information.":::
- 1. Configure other trigger settings as needed.
+1. To add other properties available for this trigger, open the **Add new parameter** list, and select the properties that you want.
-1. Add one or more actions to your workflow.
+ For more information, review [Azure Blob Storage managed connector trigger properties](/connectors/azureblobconnector/#when-a-blob-is-added-or-modified-(properties-only)-(v2)).
-1. On the designer toolbar, select **Save** to save your changes.
+1. Add any other actions that your workflow requires.
+
+1. When you're done, save your workflow. On the designer toolbar, select **Save**.
### [Standard](#tab/standard)
-To add a Blob trigger to a logic app workflow in single-tenant Azure Logic Apps, follow these steps:
+The steps to add and use a Blob trigger differ based on whether you want to use the built-in connector or the managed, Azure-hosted connector.
-1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
+- [**Built-in trigger**](#built-in-connector-trigger): Describes the steps to add the built-in trigger.
+
+- [**Managed trigger**](#managed-connector-trigger): Describes the steps to add the managed trigger.
+
+<a name="built-in-connector-trigger"></a>
+
+#### Built-in connector trigger
-1. On the designer, select **Choose an operation**.
+1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
-1. In the **Add a trigger** pane that opens, under the **Choose an operation** search box, you can select either **Built-in** to find the **Azure Blob** *built-in* trigger, or select **Azure** to find the **Azure Blob Storage** *managed connector* trigger.
+1. On the designer, select **Choose an operation**. Under the **Choose an operation** search box, select **Built-in**.
- This example uses the built-in **Azure Blob** trigger.
+1. In the search box, enter **Azure blob**. From the triggers list, select the trigger that you want.
-1. Under the search box, select **Built-in**. In the search box, enter **Azure blob**.
+ This example continues with the trigger named **When a blob is added or updated**.
-1. From the **Triggers** list, select the built-in trigger named **When a blob is Added or Modified in Azure Storage**.
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-add-built-in-trigger.png" alt-text="Screenshot showing Azure portal, Standard workflow designer, and Azure Blob built-in trigger selected.":::
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-trigger-add.png" alt-text="Screenshot showing Azure portal, workflow designer, Standard logic app workflow and Azure Blob trigger selected.":::
+1. If prompted, provide the following information for your connection to your storage account. When you're done, select **Create**.
-1. If you're prompted for connection details, [create a connection to your Azure Storage account](#connect-blob-storage-account).
+ | Property | Required | Description |
+ |-|-|-|
+ | **Connection name** | Yes | A name for your connection |
+ | **Authentication type** | Yes | The [authentication type](../storage/common/authorize-data-access.md) for your storage account. For more information, review [Authentication types for triggers and actions that support authentication - Secure access and data](../logic-apps/logic-apps-securing-a-logic-app.md#authentication-types-supported-triggers-actions). |
+
+ For example, this connection uses connection string authentication and provides the connection string value for the storage account:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Storage account connection string** | Yes, <br>but only for connection string authentication | <*storage-account-connection-string*> | The connection string for your Azure storage account. <br><br>**Note**: To find the connection string, open your storage account resource in the Azure portal. In the resource menu, under **Security + networking**, select **Access keys** > **key1** > **Connection string** > **Show**. Copy and save the connection string for the primary key. |
-1. Provide the necessary information for the trigger. On the **Parameters** tab, in the **Blob Path** property, enter the name of the folder that you want to monitor.
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-built-in-trigger-create-connection.png" alt-text="Screenshot showing Standard workflow, Azure Blob built-in trigger, and example connection information.":::
+
+1. After the trigger information box appears, provide the necessary information.
+
+ For the **Blob path** property, enter the name of the folder that you want to monitor.
1. To find the folder name, open your storage account in the Azure portal.
To add a Blob trigger to a logic app workflow in single-tenant Azure Logic Apps,
1. Select your blob container. Find the name for the folder that you want to monitor.
- 1. Return to the workflow designer. In the trigger's **Blob Path** property, enter the path for the container, folder, or blob, based on whether you're checking for new blobs or changes to an existing blob. The syntax varies based on the check that you want to run and any filtering that you want to use:
+ 1. Return to the designer. In the **Blob path** property, enter the path for the container, folder, or blob, based on whether you're checking for new blobs or changes to an existing blob. The syntax varies based on the check that you want to run and any filtering that you want to use:
| Task | Path syntax | ||-|
To add a Blob trigger to a logic app workflow in single-tenant Azure Logic Apps,
| Check the root folder for changes to any blobs with names starting with a specific string, for example, **Sample-**. | **<*container-name*>/Sample-{name}** <br><br>**Important**: Make sure that you use **{name}** as a literal. | | Check a subfolder for a newly added blob. | **<*container-name*>/<*subfolder*>/{blobname}.{blobextension}** <br><br>**Important**: Make sure that you use **{blobname}.{blobextension}** as a literal. | | Check a subfolder for changes to a specific blob. | **<*container-name*>/<*subfolder*>/<*blob-name*>.<*blob-extension*>** |
- |||
For more syntax and filtering options, review [Azure Blob storage trigger for Azure Functions](../azure-functions/functions-bindings-storage-blob-trigger.md#blob-name-patterns). The following example shows a trigger setup that checks the root folder for a newly added blob:
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-trigger-root-folder.png" alt-text="Screenshot showing the workflow designer for a Standard logic app workflow with an Azure Blob trigger set up for the root folder.":::
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-built-in-trigger-root-folder.png" alt-text="Screenshot showing Standard workflow with Azure Blob built-in trigger set up for root folder.":::
The following example shows a trigger setup that checks a subfolder for changes to an existing blob:
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-trigger-sub-folder-existing-blob.png" alt-text="Screenshot showing the workflow designer for a Standard logic app workflow with an Azure Blob trigger set up for a subfolder and specific blob.":::
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-built-in-trigger-subfolder-existing-blob.png" alt-text="Screenshot showing Standard workflow with Azure Blob built-in trigger set up for a subfolder and specific blob.":::
+
+1. Add any other actions that your workflow requires.
-1. Continue creating your workflow by adding one or more actions.
+1. When you're done, save your workflow. On the designer toolbar, select **Save**.
-1. On the designer toolbar, select **Save** to save your changes.
+<a name="managed-connector-trigger"></a>
+
+#### Managed connector trigger
+
+1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
+
+1. On the designer, select **Choose an operation**. Under the search box, select **Azure**.
+
+1. In the search box, enter **Azure blob**.
+
+1. From the triggers list, select the trigger that you want.
+
+ This example continues with the trigger named **When a blob is added or modified (properties only)**.
+
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-add-managed-trigger.png" alt-text="Screenshot showing Azure portal, Standard logic app workflow designer, and Azure Blob Storage managed trigger selected.":::
+
+1. If prompted, provide the following information for your connection to your storage account. When you're done, select **Create**.
+
+ | Property | Required | Description |
+ |-|-|-|
+ | **Connection name** | Yes | A name for your connection |
+ | **Authentication Type** | Yes | The [authentication type](../storage/common/authorize-data-access.md) for your storage account. For more information, review [Authentication types for triggers and actions that support authentication - Secure access and data](../logic-apps/logic-apps-securing-a-logic-app.md#authentication-types-supported-triggers-actions). |
+
+ For example, this connection uses access key authentication and provides the access key value for the storage account along with the following property values:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Azure Storage Account name** | Yes, <br>but only for access key authentication | <*storage-account-name*> | The name for the Azure storage account where your blob container exists. <br><br>**Note**: To find the storage account name, open your storage account resource in the Azure portal. In the resource menu, under **Security + networking**, select **Access keys**. Under **Storage account name**, copy and save the name. |
+ | **Azure Storage Account Access Key** | Yes, <br>but only for access key authentication | <*storage-account-access-key*> | The access key for your Azure storage account. <br><br>**Note**: To find the access key, open your storage account resource in the Azure portal. In the resource menu, under **Security + networking**, select **Access keys** > **key1** > **Show**. Copy and save the primary key value. |
+
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-managed-trigger-create-connection.png" alt-text="Screenshot showing Standard workflow, Azure Blob managed trigger, and example connection information.":::
+
+1. After the trigger information box appears, provide the necessary information.
+
+ For the **Container** property value, select the folder icon to browse for your blob storage container. Or, enter the path manually using the syntax **/<*container-name*>**, for example:
+
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-managed-trigger.png" alt-text="Screenshot showing Azure Blob Storage managed trigger with parameters configuration.":::
+
+1. To add other properties available for this trigger, open the **Add new parameter** list and select those properties. For more information, review [Azure Blob Storage managed connector trigger properties](/connectors/azureblobconnector/#when-a-blob-is-added-or-modified-(properties-only)-(v2)).
+
+1. Add any other actions that your workflow requires.
+
+1. When you're done, save your workflow. On the designer toolbar, select **Save**.
To add a Blob trigger to a logic app workflow in single-tenant Azure Logic Apps,
## Add a Blob action
-In Azure Logic Apps, an [action](../logic-apps/logic-apps-overview.md#logic-app-concepts) is a step in your workflow that follows a trigger or another action.
+A Consumption logic app workflow can use only the Azure Blob Storage managed connector. However, a Standard logic app workflow can use the Azure Blob Storage managed connector and the Azure Blob built-in connector. Each version has multiple actions, but the action names differ. For example, both the managed and built-in connector versions have their own actions to get file metadata and get file content.
+
+- Managed connector actions: These actions run in a Consumption or Standard workflow.
+
+- Built-in connector actions: These actions run only in a Standard workflow.
+
+The following steps use the Azure portal, but with the appropriate Azure Logic Apps extension, you can also use the following tools to create and edit logic app workflows:
+
+- Consumption logic app workflows: [Visual Studio](../logic-apps/quickstart-create-logic-apps-with-visual-studio.md) or [Visual Studio Code](../logic-apps/quickstart-create-logic-apps-visual-studio-code.md)
+
+- Standard logic app workflows: [Visual Studio Code](../logic-apps/create-single-tenant-workflows-visual-studio-code.md)
### [Consumption](#tab/consumption)
-To add a Blob action to a logic app workflow in multi-tenant Azure Logic Apps, follow these steps:
+1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
-1. In the [Azure portal](https://portal.azure.com), open your workflow in the designer.
+1. If your workflow is blank, add the trigger that your workflow requires.
-1. If your workflow is blank, add any trigger that you want.
+ This example uses the [**Recurrence** trigger](connectors-native-recurrence.md).
- This example starts with the [**Recurrence** trigger](connectors-native-recurrence.md).
+1. Under the trigger or action where you want to add the Blob action, select **New step**.
+
+ Or, to add an action between steps, move your pointer over the connecting arrow. Select the plus sign (**+**) that appears, and then select **Add an action**.
+
+1. Under the **Choose an operation** search box, select **Standard**. In the search box, enter **Azure blob**.
+
+1. From the actions list, select the action that you want.
+
+ This example continues with the action named **Get blob content**.
+
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/consumption-add-action.png" alt-text="Screenshot showing Azure portal, Consumption workflow designer, and Azure Blob Storage action selected.":::
-1. Under the trigger or action where you want to add the Blob action, select **New step** or **Add an action**, if between steps. This example uses the built-in Azure Blob action.
+1. If prompted, provide the following information for your connection. When you're done, select **Create**.
-1. Under the designer search box, make sure that **All** is selected. In the search box, enter **Azure blob**. Select the Blob action that you want to use.
+ | Property | Required | Description |
+ |-|-|-|
+ | **Connection name** | Yes | A name for your connection |
+ | **Authentication Type** | Yes | The [authentication type](../storage/common/authorize-data-access.md) for your storage account. For more information, review [Authentication types for triggers and actions that support authentication - Secure access and data](../logic-apps/logic-apps-securing-a-logic-app.md#authentication-types-supported-triggers-actions). |
- This example uses the action named **Get blob content**.
+ For example, this connection uses access key authentication and provides the access key value for the storage account along with the following property values:
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/consumption-action-add.png" alt-text="Screenshot showing Consumption logic app in designer with available Blob actions.":::
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Azure Storage Account name** | Yes, <br>but only for access key authentication | <*storage-account-name*> | The name for the Azure storage account where your blob container exists. <br><br>**Note**: To find the storage account name, open your storage account resource in the Azure portal. In the resource menu, under **Security + networking**, select **Access keys**. Under **Storage account name**, copy and save the name. |
+ | **Azure Storage Account Access Key** | Yes, <br>but only for access key authentication | <*storage-account-access-key*> | The access key for your Azure storage account. <br><br>**Note**: To find the access key, open your storage account resource in the Azure portal. In the resource menu, under **Security + networking**, select **Access keys** > **key1** > **Show**. Copy and save the primary key value. |
-1. If you're prompted for connection details, [create a connection to your Azure Storage account](#connect-blob-storage-account).
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/consumption-action-create-connection.png" alt-text="Screenshot showing Consumption workflow, Azure Blob action, and example connection information.":::
-1. Provide the necessary information for the action.
+1. After the action information box appears, provide the necessary action information.
For example, in the **Get blob content** action, provide your storage account name. For the **Blob** property value, select the folder icon to browse for your storage container or folder. Or, enter the path manually.
To add a Blob action to a logic app workflow in multi-tenant Azure Logic Apps, f
The following example shows the action setup that gets the content from a blob in the root folder:
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/consumption-action-root-folder.png" alt-text="Screenshot showing Consumption logic app in designer with Blob action setup for root folder.":::
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/consumption-action-root-folder.png" alt-text="Screenshot showing Consumption workflow with Blob action setup for root folder.":::
The following example shows the action setup that gets the content from a blob in the subfolder:
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/consumption-action-sub-folder.png" alt-text="Screenshot showing Consumption logic app in designer with Blob action setup for subfolder.":::
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/consumption-action-sub-folder.png" alt-text="Screenshot showing Consumption workflow with Blob action setup for subfolder.":::
-1. Set up other action settings as needed.
+1. Add any other actions that your workflow requires.
+
+1. When you're done, save your workflow. On the designer toolbar, select **Save**.
### [Standard](#tab/standard)
-To add an Azure Blob action to a logic app workflow in single-tenant Azure Logic Apps, follow these steps:
+The steps to add and use an Azure Blob action differ based on whether you want to use the built-in connector or the managed, Azure-hosted connector.
-1. In the [Azure portal](https://portal.azure.com), open your workflow in the designer.
+- [**Built-in action**](#built-in-connector-action): Describes the steps to add a built-in action.
-1. If your workflow is blank, add any trigger that you want.
+- [**Managed action**](#managed-connector-action): Describes the steps to add a managed action.
- This example starts with the [**Recurrence** trigger](connectors-native-recurrence.md).
+<a name="built-in-connector-action"></a>
+
+#### Built-in connector action
-1. Under the trigger or action where you want to add the Blob action, select **Insert a new step** (**+**) > **Add an action**.
+1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
+
+1. If your workflow is blank, add the trigger that your workflow requires.
+
+ This example uses the [**Recurrence** trigger](connectors-native-recurrence.md).
+
+1. Under the trigger or action where you want to add the Blob action, select the plus sign (**+**), and then select **Add an action**.
+
+ Or, to add an action between steps, select the plus sign (**+**) on the connecting arrow, and then select **Add an action**.
+
+1. On the **Add an action** pane, under the search box, select **Built-in**. In the search box, enter **Azure blob**.
+
+1. From the actions list, select the action that you want.
-1. On the designer, make sure that **Add an operation** is selected. In the **Add an action** pane that opens, under the **Choose an operation** search box, select either **Built-in** to find the **Azure Blob** *built-in* actions, or select **Azure** to find the **Azure Blob Storage** *managed connector* actions.
+ This example continues with the action named **Read blob content**, which only reads the blob content. To later view the content, add a different action that creates a file with the blob content using another connector. For example, you can add a OneDrive action that creates a file based on the blob content.
-1. In the search box, enter **Azure blob**. Select the Azure Blob action that you want to use.
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-add-built-in-action.png" alt-text="Screenshot showing Azure portal, Standard workflow designer, and Azure Blob built-in action selected.":::
- This example uses the action named **Reads Blob Content from Azure Storage**, which only reads the blob content. To later view the content, add a different action that creates a file with the blob content using another connector. For example, you can add a OneDrive action that creates a file based on the blob content.
+1. If prompted, provide the following information for your connection to your storage account. When you're done, select **Create**.
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-action-add.png" alt-text="Screenshot showing the Azure portal and workflow designer with a Standard logic app workflow and the available Azure Blob Storage actions.":::
+ | Property | Required | Description |
+ |-|-|-|
+ | **Connection name** | Yes | A name for your connection |
+ | **Authentication type** | Yes | The [authentication type](../storage/common/authorize-data-access.md) for your storage account. For more information, review [Authentication types for triggers and actions that support authentication - Secure access and data](../logic-apps/logic-apps-securing-a-logic-app.md#authentication-types-supported-triggers-actions). |
+
+ For example, this connection uses connection string authentication and provides the connection string value for the storage account:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Storage account connection string** | Yes, <br>but only for connection string authentication | <*storage-account-connection-string*> | The connection string for your Azure storage account. <br><br>**Note**: To find the connection string, open your storage account resource in the Azure portal. In the resource menu, under **Security + networking**, select **Access keys** > **key1** > **Connection string** > **Show**. Copy and save the connection string for the primary key. |
-1. If you're prompted for connection details, [create a connection to your Azure Storage account](#connect-blob-storage-account).
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-built-in-trigger-create-connection.png" alt-text="Screenshot showing Standard workflow, Azure Blob built-in trigger, and example connection information.":::
-1. For the action, provide the necessary information, which includes the following values for the **Read Blob Content from Azure Storage** action:
+1. In the action information box, provide the necessary information.
+
+ For example, the **Read blob content** action requires the following property values:
| Property | Required | Description | |-|-|-|
- | **Container Name** | Yes | The name for the storage container that you want to use |
+ | **Container name** | Yes | The name for the storage container that you want to use |
| **Blob name** | Yes | The name or path for the blob that you want to use |
- ||||
The following example shows the information for a specific blob in the root folder:
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-action-root-folder.png" alt-text="Screenshot showing Standard logic app in designer with Blob action setup for root folder.":::
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-built-in-action-root-folder.png" alt-text="Screenshot showing Standard workflow with Blob built-in action setup for root folder.":::
The following example shows the information for a specific blob in a subfolder:
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-action-subfolder.png" alt-text="Screenshot showing Standard logic app in designer with Blob action setup for subfolder.":::
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-built-in-action-subfolder.png" alt-text="Screenshot showing Standard workflow with Blob built-in action setup for subfolder.":::
-1. Configure any other action settings as needed.
+1. Add any other actions that your workflow requires.
-1. On the designer toolbar, select **Save**.
+1. When you're done, save your workflow. On the designer toolbar, select **Save**.
-1. Test your logic app to make sure your selected container contains a blob.
+<a name="managed-connector-action"></a>
-
+#### Managed connector action
-<a name="connect-blob-storage-account"></a>
+1. In the [Azure portal](https://portal.azure.com), open your workflow in the designer.
-## Connect to Azure Storage account
+1. If your workflow is blank, add any trigger that you want.
+ This example starts with the [**Recurrence** trigger](connectors-native-recurrence.md).
-### [Consumption](#tab/consumption)
+1. Under the trigger or action where you want to add the Blob action, select **New step**.
-Before you can configure your [Azure Blob Storage trigger](#add-trigger) or [Azure Blob Storage action](#add-action), you need to connect to your Azure Storage account.
+ Or, to add an action between steps, move your pointer over the connecting arrow. Select the plus sign (**+**) that appears, and then select **Add an action**.
-Based on the [authentication type that your storage account requires](../storage/common/authorize-data-access.md), you have to provide a connection name and select the authentication type at a minimum.
+1. Under the **Choose an operation** search box, select **Azure**. In the search box, enter **Azure blob**.
-For example, if your storage account requires *access key* authorization, you have to provide the following information:
+1. From the actions list, select the Blob action that you want.
-| Property | Required | Value | Description |
-|-|-|-|-|
-| **Connection name** | Yes | <*connection-name*> | The name to use for your connection. |
-| **Authentication type** | Yes | - **Access Key** <br><br>- **Azure AD Integrated** <br><br>- **Logic Apps Managed Identity** | The authentication type to use for your connection. For more information, review [Authentication types for triggers and actions that support authentication - Secure access and data](../logic-apps/logic-apps-securing-a-logic-app.md#authentication-types-supported-triggers-actions). |
-| **Azure Storage Account name** | Yes, <br>but only for access key authentication | <*storage-account-name*> | The name for the Azure storage account where your blob container exists. <br><br><br><br>**Note**: To find the storage account name, open your storage account resource in the Azure portal. In the resource menu, under **Security + networking**, select **Access keys**. Under **Storage account name**, copy and save the name. |
-| **Azure Storage Account Access Key** | Yes, <br>but only for access key authentication | <*storage-account-access-key*> | The access key for your Azure storage account. <br><br><br><br>**Note**: To find the access key, open your storage account resource in the Azure portal. In the resource menu, under **Security + networking**, select **Access keys** > **Show keys**. Copy and save one of the key values. |
-|||||
+ This example continues with the action named **Get blob content**.
-The following example shows how a connection using access key authentication might appear:
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-add-managed-action.png" alt-text="Screenshot showing Azure portal, Standard workflow designer, and Azure Blob Storage managed action selected.":::
+1. If prompted, provide the following information for your connection to your storage account. When you're done, select **Create**.
-> [!NOTE]
-> After you create your connection, if you have a different existing Azure Blob storage connection
-> that you want to use instead, select **Change connection** in the trigger or action details editor.
+ | Property | Required | Description |
+ |-|-|-|
+ | **Connection name** | Yes | A name for your connection |
+ | **Authentication type** | Yes | The [authentication type](../storage/common/authorize-data-access.md) for your storage account. For more information, review [Authentication types for triggers and actions that support authentication - Secure access and data](../logic-apps/logic-apps-securing-a-logic-app.md#authentication-types-supported-triggers-actions). |
-If you have problems connecting to your storage account, review [how to access storage accounts behind firewalls](#access-storage-accounts-behind-firewalls).
+ For example, this connection uses access key authentication and provides the access key value for the storage account along with the following property values:
-### [Standard](#tab/standard)
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Azure Storage Account name** | Yes, <br>but only for access key authentication | <*storage-account-name*> | The name for the Azure storage account where your blob container exists. <br><br>**Note**: To find the storage account name, open your storage account resource in the Azure portal. In the resource menu, under **Security + networking**, select **Access keys**. Under **Storage account name**, copy and save the name. |
+ | **Azure Storage Account Access Key** | Yes, <br>but only for access key authentication | <*storage-account-access-key*> | The access key for your Azure storage account. <br><br>**Note**: To find the access key, open your storage account resource in the Azure portal. In the resource menu, under **Security + networking**, select **Access keys** > **key1** > **Show**. Copy and save the primary key value. |
-Before you can configure your [Azure Blob trigger](#add-trigger) or [Azure Blob action](#add-action), you need to connect to your Azure Storage account. A connection requires the following properties:
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-managed-action-create-connection.png" alt-text="Screenshot showing Standard workflow, Azure Blob Storage managed action, and example connection information.":::
-| Property | Required | Value | Description |
-|-|-|-|-|
-| **Connection name** | Yes | <*connection-name*> | The name to use for your connection. |
-| **Azure Blob Storage Connection String** | Yes | <*storage-account*> | Select your storage account from the list, or provide a string. <br><br><br><br>**Note**: To find the connection string, go to the storage account's page. In the navigation menu, under **Security + networking**, select **Access keys** > **Show keys**. Copy one of the available connection string values. |
-|||||
+1. After the action information box appears, provide the necessary information.
-To create an Azure Blob Storage connection from a logic app workflow in single-tenant Azure Logic Apps, follow these steps:
+ For example, in the **Get blob content** action, provide your storage account name. For the **Blob** property value, select the folder icon to browse for your storage container or folder. Or, enter the path manually.
-1. For **Connection name**, enter a name for your connection.
+ | Task | Blob path syntax |
+ |------|------------------|
+ | Get the content from a specific blob in the root folder. | **/<*container-name*>/<*blob-name*>** |
+ | Get the content from a specific blob in a subfolder. | **/<*container-name*>/<*subfolder*>/<*blob-name*>** |
-1. For **Azure Blob Storage Connection String**, enter the connection string for the storage account that you want to use.
+ The following example shows the action setup that gets the content from a blob in the root folder:
+
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-managed-action-root-folder.png" alt-text="Screenshot showing Consumption logic app workflow designer with Blob action setup for root folder.":::
-1. Select **Create** to finish creating your connection.
+ The following example shows the action setup that gets the content from a blob in the subfolder:
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-connection-create.png" alt-text="Screenshot that shows the workflow designer with a Standard logic app workflow and a prompt to add a new connection for the Azure Blob Storage step.":::
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-managed-action-sub-folder.png" alt-text="Screenshot showing Consumption logic app workflow designer with Blob action setup for subfolder.":::
-> [!NOTE]
-> After you create your connection, if you have a different existing Azure Blob storage connection
-> that you want to use instead, select **Change connection** in the trigger or action details editor.
+1. Add any other actions that your workflow requires.
-If you have problems connecting to your storage account, review [how to access storage accounts behind firewalls](#access-storage-accounts-behind-firewalls).
+1. When you're done, save your workflow. On the designer toolbar, select **Save**.
+<a name="built-in-connector-operations"></a>
+
+## Azure Blob built-in connector operations
+
+The Azure Blob built-in connector is available only for Standard logic app workflows and provides the following operations. A short code sketch of equivalent blob operations follows the tables.
+
+| Trigger | Description |
+||-|
+| When a blob is added or updated | Start a logic app workflow when a blob is added or updated in your storage container. |
+
+| Action | Description |
+|--|-|
+| Check whether blob exists | Check whether the specified blob exists in the specified Azure storage container. |
+| Delete blob | Delete the specified blob from the specified Azure storage container. |
+| Get blob metadata using path | Get the metadata for the specified blob from the specified Azure storage container. |
+| Get container metadata using path | Get the metadata for the specified Azure storage container. |
+| Get blob SAS URI using path | Get the Shared Access Signature (SAS) URI for the specified blob in the specified Azure storage container. |
+| List all blobs using path | List all the blobs in the specified Azure storage container. |
+| List all containers using path or root path | List all the Azure storage containers in your storage account. |
+| Read blob content | Read the content from the specified blob in the specified Azure storage container. |
+| Upload blob to storage container | Upload the specified blob to the specified Azure storage container. |
+ ## Access storage accounts behind firewalls You can add network security to an Azure storage account by [restricting access with a firewall and firewall rules](../storage/common/storage-network-security.md). However, this setup creates a challenge for Azure and other Microsoft services that need access to the storage account. Local communication in the data center abstracts the internal IP addresses, so just permitting traffic through IP addresses might not be enough to successfully allow communication across the firewall. Based on which Azure Blob Storage connector you use, the following options are available:
You can add network security to an Azure storage account by [restricting access
- To access storage accounts behind firewalls using the ISE-versioned Azure Blob Storage connector that's only available in an ISE-based logic app, review [Access storage accounts through trusted virtual network](#access-storage-accounts-through-trusted-virtual-network). -- To access storage accounts behind firewalls using the *built-in* Azure Blob Storage connector that's only available in Standard logic apps, review [Access storage accounts through VNet integration](#access-storage-accounts-through-vnet-integration).
+- To access storage accounts behind firewalls using the *built-in* Azure Blob Storage connector that's only available in Standard logic apps, review [Access storage accounts through virtual network integration](#access-storage-accounts-through-virtual-network-integration).
### Access storage accounts in other regions
To add your outbound IP addresses to the storage account firewall, follow these
- Your logic app and storage account exist in different regions.
- You don't have to create a private endpoint. You can just permit traffic through the ISE outbound IPs on the storage account.
+ You don't have to create a private endpoint. You can just permit traffic through the ISE outbound IPs on the storage account.
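As a quick reference, the following Azure CLI sketch shows one way to allow a specific outbound IP address through the storage account firewall. The resource group, storage account name, and IP address are placeholder values; repeat the command for each outbound IP address that your logic app uses.

```azurecli-interactive
# Allow one outbound IP address through the storage account firewall (placeholder values).
az storage account network-rule add \
    --resource-group myResourceGroup \
    --account-name mystorageaccount \
    --ip-address 203.0.113.10
```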
-### Access storage accounts through VNet integration
+### Access storage accounts through virtual network integration
- Your logic app and storage account exist in the same region.
- You can put the storage account in an Azure virtual network by creating a private endpoint, and then add that virtual network to the trusted virtual networks list. To give your logic app access to the storage account, you have to [Set up outbound traffic using VNet integration](../logic-apps/secure-single-tenant-workflow-virtual-network-private-endpoint.md#set-up-outbound) to enable connecting to resources in a virtual network. You can then add the VNet to the storage account's trusted virtual networks list.
+ You can put the storage account in an Azure virtual network by creating a private endpoint, and then add that virtual network to the trusted virtual networks list. To give your logic app access to the storage account, you have to [Set up outbound traffic using virtual network integration](../logic-apps/secure-single-tenant-workflow-virtual-network-private-endpoint.md#set-up-outbound) to enable connecting to resources in a virtual network. You can then add the virtual network to the storage account's trusted virtual networks list, as shown in the example after this list.
- Your logic app and storage account exist in different regions.
- You don't have to create a private endpoint. You can just permit traffic through the ISE outbound IPs on the storage account.
+ You don't have to create a private endpoint. You can just permit traffic through the ISE outbound IPs on the storage account.
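The following Azure CLI sketch illustrates one common way to add a subnet to the storage account's allowed virtual networks. The resource names are placeholders, and this approach assumes the subnet has the **Microsoft.Storage** service endpoint enabled.

```azurecli-interactive
# Enable the Microsoft.Storage service endpoint on the subnet (placeholder names).
az network vnet subnet update \
    --resource-group myResourceGroup \
    --vnet-name myVnet \
    --name mySubnet \
    --service-endpoints Microsoft.Storage

# Add the subnet to the storage account's list of allowed virtual networks.
az storage account network-rule add \
    --resource-group myResourceGroup \
    --account-name mystorageaccount \
    --vnet-name myVnet \
    --subnet mySubnet
```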
### Access Blob Storage in same region with system-managed identities
To use managed identities in your logic app to access Blob Storage, follow these
1. [Enable support for the managed identity in your logic app](#enable-managed-identity-support). > [!NOTE]
-> Limitations for this solution:
>
-> - To authenticate your storage account connection, you have to set up a system-assigned managed identity.
+> This solution has the following limitations:
+>
+> To authenticate your storage account connection, you have to set up a system-assigned managed identity.
> A user-assigned managed identity won't work.
->
#### Configure storage account access
Next, complete the following steps:
} ```
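As a hedged sketch of the access setup, the following Azure CLI commands grant the logic app's system-assigned identity data access to the storage account by assigning a built-in role such as **Storage Blob Data Contributor**. The resource names are placeholders, and your scenario might call for a different role or scope.

```azurecli-interactive
# Get the principal ID of the logic app's system-assigned identity (placeholder names).
principalId=$(az resource show \
    --resource-group myResourceGroup \
    --name myLogicApp \
    --resource-type "Microsoft.Web/sites" \
    --query identity.principalId --output tsv)

# Get the resource ID of the storage account to use as the role assignment scope.
storageId=$(az storage account show \
    --resource-group myResourceGroup \
    --name mystorageaccount \
    --query id --output tsv)

# Assign the Storage Blob Data Contributor role to the identity, scoped to the storage account.
az role assignment create \
    --assignee "$principalId" \
    --role "Storage Blob Data Contributor" \
    --scope "$storageId"
```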
+## Application Insights errors
+
+- **404** and **409** errors
+
+ If your Standard workflow uses an Azure Blob built-in action that adds a blob to your storage container, you might get **404** and **409** errors in Application Insights for failed requests. These errors are expected because the connector checks whether the blob file exists before adding the blob. The errors result when the file doesn't exist. Despite these errors, the built-in action successfully adds the blob.
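If you want to confirm that these failed requests are only the expected existence checks, one option is to query Application Insights from the command line. The following sketch assumes the Azure CLI **application-insights** extension is installed; the Application Insights resource name and resource group are placeholders.

```azurecli-interactive
# List failed requests with 404 or 409 result codes from the last day (placeholder names).
az monitor app-insights query \
    --app myAppInsights \
    --resource-group myResourceGroup \
    --offset 1d \
    --analytics-query "requests | where resultCode in ('404', '409') | project timestamp, name, resultCode"
```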
+ ## Next steps
-[Connectors overview for Azure Logic Apps](apis-list.md)
+- [Managed connectors in Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors)
+- [Built-in connectors in Azure Logic Apps](built-in.md)
connectors Connectors Create Api Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-cosmos-db.md
Previously updated : 05/02/2022 Last updated : 08/23/2022 tags: connectors # Process and create Azure Cosmos DB documents using Azure Logic Apps + From your workflow in Azure Logic Apps, you can connect to Azure Cosmos DB and work with documents by using the [Azure Cosmos DB connector](/connectors/documentdb/). This connector provides triggers and actions that your workflow can use for Azure Cosmos DB operations. For example, actions include creating or updating, reading, querying, and deleting documents. You can connect to Azure Cosmos DB from both **Logic App (Consumption)** and **Logic App (Standard)** resource types by using the [*managed connector*](managed.md) operations. For **Logic App (Standard)**, Azure Cosmos DB also provides [*built-in*](built-in.md) operations, which are currently in preview and offer different functionality, better performance, and higher throughput. For example, if you're working with the **Logic App (Standard)** resource type, you can use the built-in trigger to respond to changes in an Azure Cosmos DB container. You can combine Azure Cosmos DB operations with other actions and triggers in your logic app workflows to enable scenarios such as event sourcing and general data processing.
connectors Connectors Create Api Ftp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-ftp.md
tags: connectors
# Connect to an FTP server from workflows in Azure Logic Apps + This article shows how to access your File Transfer Protocol (FTP) server from a workflow in Azure Logic Apps with the FTP connector. You can then create automated workflows that run when triggered by events in your FTP server or in other systems and run actions to manage files on your FTP server. For example, your workflow can start with an FTP trigger that monitors and responds to events on your FTP server. The trigger makes the outputs available to subsequent actions in your workflow. Your workflow can run FTP actions that create, send, receive, and manage files through your FTP server account using the following specific tasks:
connectors Connectors Create Api Mq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-mq.md
tags: connectors
# Connect to an IBM MQ server from a workflow in Azure Logic Apps + The MQ connector helps you connect your logic app workflows to an IBM MQ server that's either on premises or in Azure. You can then have your workflows receive and send messages stored in your MQ server. This article provides a get started guide to using the MQ connector by showing how to connect to your MQ server and add an MQ action to your workflow. For example, you can start by browsing a single message in a queue and then try other actions. This connector includes a Microsoft MQ client that communicates with a remote MQ server across a TCP/IP network. You can connect to the following IBM WebSphere MQ versions:
connectors Connectors Native Recurrence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-recurrence.md
Last updated 09/02/2022
# Schedule and run recurring workflows with the Recurrence trigger in Azure Logic Apps + To start and run your workflow on a schedule, you can use the generic Recurrence trigger as the first step. You can set a date, time, and time zone for starting the workflow and a recurrence for repeating that workflow. The following list includes some patterns that this trigger supports along with more advanced recurrences and complex schedules: * Run at a specific date and time, then repeat every *n* number of seconds, minutes, hours, days, weeks, or months.
connectors Connectors Native Reqres https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-reqres.md
tags: connectors
# Handle incoming or inbound HTTPS requests sent to workflows in Azure Logic Apps + To run your logic app workflow after receiving an HTTPS request from another service, you can start your workflow with the Request built-in trigger. Your workflow can then respond to the HTTPS request by using Response built-in action. The following list describes some example tasks that your workflow can perform when you use the Request trigger and Response action:
container-instances Container Instances Encrypt Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-encrypt-data.md
The rest of the document covers the steps required to encrypt your ACI deploymen
[!INCLUDE [azure-cli-prepare-your-environment.md](../../includes/azure-cli-prepare-your-environment.md)]
-This article reviews two flows for encrypting data with a customer-managed key:
-* Encrypt data with a customer-managed key stored in a standard Azure Key Vault
-* Encrypt data with a customer-managed key stored in a network-protected Azure Key Vault with [Trusted Services](../key-vault/general/network-security.md) enabled.
-
-## Encrypt data with a customer-managed key stored in a standard Azure Key Vault
- ### Create Service Principal for ACI The first step is to ensure that your [Azure tenant](../active-directory/develop/quickstart-create-new-tenant.md) has a service principal assigned for granting permissions to the Azure Container Instances service.
az deployment group create --resource-group myResourceGroup --template-file depl
Within a few seconds, you should receive an initial response from Azure. Once the deployment completes, all data related to it persisted by the ACI service will be encrypted with the key you provided.
-## Encrypt data with a customer-managed key in a network protected Azure Key Vault with Trusted Services enabled
-
-### Create a Key Vault resource
-
-Create an Azure Key Vault using [Azure portal](../key-vault/general/quick-create-portal.md), [Azure CLI](../key-vault/general/quick-create-cli.md), or [Azure PowerShell](../key-vault/general/quick-create-powershell.md). To start, do not apply any network-limitations so we can add necessary keys to the vault. In subsequent steps, we will add network-limitations and enable trusted services.
-
-For the properties of your key vault, use the following guidelines:
-* Name: A unique name is required.
-* Subscription: Choose a subscription.
-* Under Resource Group, either choose an existing resource group, or create new and enter a resource group name.
-* In the Location pull-down menu, choose a location.
-* You can leave the other options to their defaults or pick based on additional requirements.
-
-> [!IMPORTANT]
-> When using customer-managed keys to encrypt an ACI deployment template, it is recommended that the following two properties be set on the key vault, Soft Delete and Do Not Purge. These properties are not enabled by default, but can be enabled using either PowerShell or Azure CLI on a new or existing key vault.
-
-### Generate a new key
-
-Once your key vault is created, navigate to the resource in Azure portal. On the left navigation menu of the resource blade, under Settings, click **Keys**. On the view for "Keys," click "Generate/Import" to generate a new key. Use any unique Name for this key, and any other preferences based on your requirements. Make sure to capture key name and version for subsequent steps.
-
-![Screenshot of key creation settings, PNG.](./media/container-instances-encrypt-data/generate-key.png)
-
-### Create a user-assigned managed identity for your container group
-Create an identity in your subscription using the [az identity create](/cli/azure/identity#az-identity-create) command. You can use the same resource group used to create the key vault, or use a different one.
-
-```azurecli-interactive
-az identity create \
- --resource-group myResourceGroup \
- --name myACIId
-```
-
-To use the identity in the following steps, use the [az identity show](/cli/azure/identity#az-identity-show) command to store the identity's service principal ID and resource ID in variables.
-
-```azurecli-interactive
-# Get service principal ID of the user-assigned identity
-spID=$(az identity show \
- --resource-group myResourceGroup \
- --name myACIId \
- --query principalId --output tsv)
-```
-
-### Set access policy
-
-Create a new access policy for allowing the user-assigned identity to access and unwrap your key for encryption purposes.
-
-```azurecli-interactive
-az keyvault set-policy \
- --name mykeyvault \
- --resource-group myResourceGroup \
- --object-id $spID \
- --key-permissions get unwrapKey
- ```
-
-### Modify Azure Key Vault's network permissions
-The following commands set up an Azure Firewall for your Azure Key Vault and allow Azure Trusted Services such as ACI access.
-
-```azurecli-interactive
-az keyvault update \
- --name mykeyvault \
- --resource-group myResourceGroup \
- --default-action Deny
- ```
-
-```azurecli-interactive
-az keyvault update \
- --name mykeyvault \
- --resource-group myResourceGroup \
- --bypass AzureServices
- ```
-
-### Modify your JSON deployment template
-
-> [!IMPORTANT]
-> Encrypting deployment data with a customer-managed key is available in the 2022-09-01 API version or newer. The 2022-09-01 API version is only available via ARM or REST. If you have any issues with this, please reach out to Azure Support.
-
-Once the key vault key and access policy are set up, add the following properties to your ACI deployment template. Learn more about deploying ACI resources with a template in the [Tutorial: Deploy a multi-container group using a Resource Manager template](./container-instances-multi-container-group.md).
-* Under `resources`, set `apiVersion` to `2022-09-01`.
-* Under the container group properties section of the deployment template, add an `encryptionProperties`, which contains the following values:
- * `vaultBaseUrl`: the DNS Name of your key vault. This can be found on the overview blade of the key vault resource in Portal
- * `keyName`: the name of the key generated earlier
- * `keyVersion`: the current version of the key. This can be found by clicking into the key itself (under "Keys" in the Settings section of your key vault resource)
- * `identity`: this is the resource URI of the Managed Identity instance created earlier
-* Under the container group properties, add a `sku` property with value `Standard`. The `sku` property is required in API version 2022-09-01.
-* Under resources, add the `identity` object required to use Managed Identity with ACI, which contains the following values:
- * `type`: the type of the identity being used (either user-assigned or system-assigned). This case will be set to "UserAssigned"
- * `userAssignedIdentities`: the resourceURI of the same user-assigned identity used above in the `encryptionProperties` object.
-
-The following template snippet shows these additional properties to encrypt deployment data:
-
-```json
-[...]
-"resources": [
- {
- "name": "[parameters('containerGroupName')]",
- "type": "Microsoft.ContainerInstance/containerGroups",
- "apiVersion": "2019-12-01",
- "location": "[resourceGroup().location]",
- "identity": {
- "type": "UserAssigned",
- "userAssignedIdentities": {
- "/subscriptions/XXXXXXXXXXXXXXXXXXXXXX/resourcegroups/XXXXXXXXXXXXXXXXXXXXXX/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myACIId": {}
- }
- },
- "properties": {
- "encryptionProperties": {
- "vaultBaseUrl": "https://example.vault.azure.net",
- "keyName": "acikey",
- "keyVersion": "xxxxxxxxxxxxxxxx",
- "identity": "/subscriptions/XXXXXXXXXXXXXXXXXXXXXX/resourcegroups/XXXXXXXXXXXXXXXXXXXXXX/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myACIId"
- },
- "sku": "Standard",
- "containers": {
- [...]
- }
- }
- }
-]
-```
-
-Following is a complete template, adapted from the template in [Tutorial: Deploy a multi-container group using a Resource Manager template](./container-instances-multi-container-group.md).
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "containerGroupName": {
- "type": "string",
- "defaultValue": "myContainerGroup",
- "metadata": {
- "description": "Container Group name."
- }
- }
- },
- "variables": {
- "container1name": "aci-tutorial-app",
- "container1image": "mcr.microsoft.com/azuredocs/aci-helloworld:latest",
- "container2name": "aci-tutorial-sidecar",
- "container2image": "mcr.microsoft.com/azuredocs/aci-tutorial-sidecar"
- },
- "resources": [
- {
- "name": "[parameters('containerGroupName')]",
- "type": "Microsoft.ContainerInstance/containerGroups",
- "apiVersion": "2022-09-01",
- "location": "[resourceGroup().location]",
- "identity": {
- "type": "UserAssigned",
- "userAssignedIdentities": {
- "/subscriptions/XXXXXXXXXXXXXXXXXXXXXX/resourcegroups/XXXXXXXXXXXXXXXXXXXXXX/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myACIId": {}
- }
- },
- "properties": {
- "encryptionProperties": {
- "vaultBaseUrl": "https://example.vault.azure.net",
- "keyName": "acikey",
- "keyVersion": "xxxxxxxxxxxxxxxx",
- "identity": "/subscriptions/XXXXXXXXXXXXXXXXXXXXXX/resourcegroups/XXXXXXXXXXXXXXXXXXXXXX/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myACIId"
- },
- "sku": "Standard",
- "containers": [
- {
- "name": "[variables('container1name')]",
- "properties": {
- "image": "[variables('container1image')]",
- "resources": {
- "requests": {
- "cpu": 1,
- "memoryInGb": 1.5
- }
- },
- "ports": [
- {
- "port": 80
- },
- {
- "port": 8080
- }
- ]
- }
- },
- {
- "name": "[variables('container2name')]",
- "properties": {
- "image": "[variables('container2image')]",
- "resources": {
- "requests": {
- "cpu": 1,
- "memoryInGb": 1.5
- }
- }
- }
- }
- ],
- "osType": "Linux",
- "ipAddress": {
- "type": "Public",
- "ports": [
- {
- "protocol": "tcp",
- "port": "80"
- },
- {
- "protocol": "tcp",
- "port": "8080"
- }
- ]
- }
- }
- }
- ],
- "outputs": {
- "containerIPv4Address": {
- "type": "string",
- "value": "[reference(resourceId('Microsoft.ContainerInstance/containerGroups/', parameters('containerGroupName'))).ipAddress.ip]"
- }
- }
-}
-```
-
-### Deploy your resources
-
-If you created and edited the template file on your desktop, you can upload it to your Cloud Shell directory by dragging the file into it.
-
-Create a resource group with the [az group create][az-group-create] command.
-
-```azurecli-interactive
-az group create --name myResourceGroup --location eastus
-```
-
-Deploy the template with the [az deployment group create][az-deployment-group-create] command.
-
-```azurecli-interactive
-az deployment group create --resource-group myResourceGroup --template-file deployment-template.json
-```
-
-Within a few seconds, you should receive an initial response from Azure. Once the deployment completes, all data related to it persisted by the ACI service will be encrypted with the key you provided.
<!-- LINKS - Internal --> [az-group-create]: /cli/azure/group#az_group_create [az-deployment-group-create]: /cli/azure/deployment/group/#az_deployment_group_create
container-instances Container Instances Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-managed-identity.md
Azure Container Instances supports both types of managed Azure identities: user-
To use a managed identity, the identity must be granted access to one or more Azure service resources (such as a web app, a key vault, or a storage account) in the subscription. Using a managed identity in a running container is similar to using an identity in an Azure VM. See the VM guidance for using a [token](../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md), [Azure PowerShell or Azure CLI](../active-directory/managed-identities-azure-resources/how-to-use-vm-sign-in.md), or the [Azure SDKs](../active-directory/managed-identities-azure-resources/how-to-use-vm-sdk.md).
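As an illustration only, the following sketch shows the common pattern for requesting a token from inside a running container by calling the Azure Instance Metadata Service endpoint, using Azure Key Vault as the target resource. Treat the endpoint details as an assumption to verify against the managed identity guidance linked above.

```bash
# Request an access token for Azure Key Vault from the instance metadata endpoint.
# The Metadata header is required; the resource value depends on the service you call.
curl "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net" \
    -H "Metadata: true"
```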
-### Limitations
-
-* Currently you can't use a managed identity in a container group deployed to a virtual network.
- [!INCLUDE [azure-cli-prepare-your-environment.md](../../includes/azure-cli-prepare-your-environment.md)] - This article requires version 2.0.49 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
container-instances Container Instances Region Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-region-availability.md
The following regions and maximum resources are available to container groups wi
| Australia East | 4 | 16 | 4 | 16 | 50 | N/A | Y | | Australia Southeast | 4 | 14 | N/A | N/A | 50 | N/A | N | | Brazil South | 4 | 16 | 2 | 8 | 50 | N/A | Y |
+| Australia Southeast | 4 | 14 | 16 | 50 | 50 | N/A | N |
+| Brazil South | 4 | 16 | 2 | 16 | 50 | N/A | Y |
| Canada Central | 4 | 16 | 4 | 16 | 50 | N/A | N | | Canada East | 4 | 16 | N/A | N/A | 50 | N/A | N |
+| Canada East | 4 | 16 | 16 | 50 | 50 | N/A | N |
| Central India | 4 | 16 | 4 | 4 | 50 | V100 | N | | Central US | 4 | 16 | 4 | 16 | 50 | N/A | Y | | East Asia | 4 | 16 | 4 | 16 | 50 | N/A | N |
The following regions and maximum resources are available to container groups wi
| East US 2 | 4 | 16 | 4 | 16 | 50 | N/A | Y | | France Central | 4 | 16 | 4 | 16 | 50 | N/A | Y| | Germany West Central | 4 | 16 | N/A | N/A | 50 | N/A | Y |
+| Germany West Central | 4 | 16 | 16 | 50 | 50 | N/A | Y |
| Japan East | 4 | 16 | 4 | 16 | 50 | N/A | Y | | Japan West | 4 | 16 | N/A | N/A | 50 | N/A | N | | Jio India West | 4 | 16 | N/A | N/A | 50 | N/A | N | | Korea Central | 4 | 16 | N/A | N/A | 50 | N/A | N | | North Central US | 2 | 3.5 | 4 | 16 | 50 | K80, P100, V100 | N |
+| Japan West | 4 | 16 | 16 | 50 | 50 | N/A | N |
+| Jio India West | 4 | 16 | 16 | 50 | 50 | N/A | N |
+| Korea Central | 4 | 16 | 16 | 50 | 50 | N/A | N |
+| North Central US | 4 | 16 | 4 | 16 | 50 | K80, P100, V100 | N |
| North Europe | 4 | 16 | 4 | 16 | 50 | K80 | Y | | Norway East | 4 | 16 | N/A | N/A | 50 | N/A | N | | Norway West | 4 | 16 | N/A | N/A | 50 | N/A | N | | South Africa North | 4 | 16 | N/A | N/A | 50 | N/A | N |
+| Norway East | 4 | 16 | 4 | 16 | 50 | N/A | N |
+| Norway West | 4 | 16 | 4 | 16 | 50 | N/A | N |
+| South Africa North | 4 | 16 | 4 | 16 | 50 | N/A | N |
| South Central US | 4 | 16 | 4 | 16 | 50 | V100 | Y | | Southeast Asia | 4 | 16 | 4 | 16 | 50 | P100, V100 | Y | | South India | 4 | 16 | N/A | N/A | 50 | K80 | N | | Sweden Central | 4 | 16 | N/A | N/A | 50 | N/A | N | | Sweden South | 4 | 16 | N/A | N/A | 50 | N/A | N | | Switzerland North | 4 | 16 | N/A | N/A | 50 | N/A | N |
+| South India | 4 | 16 | 4 | 16 | 50 | K80 | N |
+| Sweden Central | 4 | 16 | 4 | 16 | 50 | N/A | N |
+| Sweden South | 4 | 16 | 4 | 16 | 50 | N/A | N |
+| Switzerland North | 4 | 16 | 4 | 16 | 50 | N/A | N |
| Switzerland West | 4 | 16 | N/A | N/A | 50 | N/A | N | | UK South | 4 | 16 | 4 | 16 | 50 | N/A | Y| | UK West | 4 | 16 | N/A | N/A | 50 | N/A | N | | UAE North | 4 | 16 | N/A | N/A | 50 | N/A | N |
+| UK West | 4 | 16 | 4 | 16 | 50 | N/A | N |
+| UAE North | 4 | 16 | 4 | 16 | 50 | N/A | N |
| West Central US| 4 | 16 | 4 | 16 | 50 | N/A | N | | West Europe | 4 | 16 | 4 | 16 | 50 | K80, P100, V100 | Y | | West India | 4 | 16 | N/A | N/A | 50 | N/A | N | | West US | 4 | 16 | 4 | 16 | 50 | N/A | N | | West US 2 | 4 | 16 | 4 | 16 | 50 | K80, P100, V100 | Y | | West US 3 | 4 | 16 | N/A | N/A | 50 | N/A | N |
+| West US 3 | 4 | 16 | 4 | 16 | 50 | N/A | N |
The following maximum resources are available to a container group deployed with [GPU resources](container-instances-gpu.md) (preview).
container-instances Container Instances Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-troubleshooting.md
On initial creation, Windows containers may have no inbound or outbound connecti
### Cannot connect to underlying Docker API or run privileged containers
-Azure Container Instances does not expose direct access to the underlying infrastructure that hosts container groups. This includes access to the Docker API running on the container's host and running privileged containers. If you require Docker interaction, check the [REST reference documentation](/rest/api/container-instances/) to see what the ACI API supports. If there is something missing, submit a request on the [ACI feedback forums](https://aka.ms/aci/feedback).
+Azure Container Instances does not expose direct access to the underlying infrastructure that hosts container groups. This includes access to the container runtime and orchestration technology, as well as the ability to run privileged container operations. To see which operations ACI supports, check the [REST reference documentation](/rest/api/container-instances/). If something is missing, submit a request on the [ACI feedback forums](https://aka.ms/aci/feedback).
### Container group IP address may not be accessible due to mismatched ports
container-instances Container Instances Virtual Network Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-virtual-network-concepts.md
Container groups deployed into an Azure virtual network enable scenarios like:
* Currently, only Linux containers are supported in a container group deployed to a virtual network. * To deploy container groups to a subnet, the subnet can't contain other resource types. Remove all existing resources from an existing subnet prior to deploying container groups to it, or create a new subnet. * To deploy container groups to a subnet, the subnet and the container group must be on the same Azure subscription.
-* You can't use a [managed identity](container-instances-managed-identity.md) in a container group deployed to a virtual network.
* You can't enable a [liveness probe](container-instances-liveness-probe.md) or [readiness probe](container-instances-readiness-probe.md) in a container group deployed to a virtual network. * Due to the additional networking resources involved, deployments to a virtual network are typically slower than deploying a standard container instance. * Outbound connection to port 25 is not supported at this time.
In the following diagram, several container groups have been deployed to a subne
<!-- LINKS - Internal --> [az-container-create]: /cli/azure/container#az_container_create
-[az-network-profile-list]: /cli/azure/network/profile#az_network_profile_list
+[az-network-profile-list]: /cli/azure/network/profile#az_network_profile_list
container-instances Using Azure Container Registry Mi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/using-azure-container-registry-mi.md
**Azure CLI**: The command-line examples in this article use the [Azure CLI](/cli/azure/) and are formatted for the Bash shell. You can [install the Azure CLI](/cli/azure/install-azure-cli) locally, or use the [Azure Cloud Shell][cloud-shell-bash]. ## Limitations
-* Container groups running in Azure Virtual Networks don't support managed identity authentication image pulls with ACR.
- * Windows containers don't support managed identity-authenticated image pulls with ACR. * The Azure container registry must have [Public Access set to either 'Select networks' or 'None'](../container-registry/container-registry-access-selected-networks.md). To set the Azure container registry's Public Access to 'All networks', visit ACI's article on [how to authenticate with ACR with service principal based authentication](container-instances-using-azure-container-registry.md).
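For reference, the following Azure CLI sketch shows one way to restrict the registry's public network access so that it meets this requirement. The registry name is a placeholder, and these network settings generally require the Premium service tier.

```azurecli-interactive
# Option 1: Set Public Access to 'None' by disabling public network access (placeholder name).
az acr update --name myregistry --public-network-enabled false

# Option 2: Set Public Access to 'Select networks' by denying traffic by default,
# then add the specific networks or IP ranges that you want to allow.
az acr update --name myregistry --default-action Deny
```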
data-factory Better Understand Different Integration Runtime Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/better-understand-different-integration-runtime-charges.md
In this article, we'll illustrate the pricing model using different integration
The integration runtime, which is serverless in Azure and self-hosted in hybrid scenarios, provides the compute resources used to execute the activities in a pipeline. Integration runtime charges are prorated by the minute and rounded up. > [!NOTE]
-> The prices used in these examples below are hypothetical and are not intended to imply actual pricing.
+> The prices used in the example below are hypothetical and are not intended to imply actual pricing.
## Azure integration runtime
data-factory Connector Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-sql-database.md
Settings specific to Azure SQL Database are available in the **Source Options**
**Incremental date column**: When using the incremental extract feature, you must choose the date/time column that you wish to use as the watermark in your source table.
+**Enable native change data capture (Preview)**: Use this option to tell ADF to only process delta data captured by [SQL change data capture technology](https://learn.microsoft.com/sql/relational-databases/track-changes/about-change-data-capture-sql-server) since the last time that the pipeline executed. With this option, delta data, including row inserts, updates, and deletions, is loaded automatically without requiring an incremental date column. You need to [enable change data capture](https://learn.microsoft.com/sql/relational-databases/track-changes/enable-and-disable-change-data-capture-sql-server) on Azure SQL DB before using this option in ADF. For more information about this option in ADF, see [native change data capture](#native-change-data-capture).
+ **Start reading from beginning**: Setting this option with incremental extract will instruct ADF to read all rows on first execution of a pipeline with incremental extract turned on. ### Sink transformation
When you copy data from/to Azure SQL Database with [Always Encrypted](/sql/relat
>[!NOTE] > Currently, Azure SQL Database [**Always Encrypted**](/sql/relational-databases/security/encryption/always-encrypted-database-engine?view=sql-server-ver15&preserve-view=true) is only supported for source transformation in mapping data flows.
+## Native change data capture
+
+Azure Data Factory supports native change data capture capabilities for SQL Server, Azure SQL DB, and Azure SQL MI. Changed data in SQL stores, including row inserts, updates, and deletions, is automatically detected and extracted by an ADF mapping data flow. With the no-code experience in mapping data flows, you can easily set up a data replication scenario from SQL stores by adding a database as the destination store. You can also compose any data transformation logic in between to build an incremental ETL scenario from SQL stores.
+
+Keep the pipeline and activity names unchanged, so that ADF can record the checkpoint and automatically get the changed data since the last run. If you change the pipeline name or activity name, the checkpoint is reset, and the next run either starts from the beginning or picks up changes only from that point forward. If you want to rename the pipeline or activity and still keep the checkpoint, use your own checkpoint key in the data flow activity.
+
+This feature works the same way when you debug the pipeline. Be aware that the checkpoint is reset when you refresh your browser during a debug run. After you're satisfied with the results of the debug run, you can publish and trigger the pipeline. The first time you trigger the published pipeline, it automatically restarts from the beginning or picks up changes from that point forward.
+
+In the monitoring section, you can always rerun a pipeline. When you do, the changed data is always captured from the previous checkpoint of the selected pipeline run.
+
+### Example 1:
+
+When you directly chain a source transformation that references a SQL CDC-enabled dataset with a sink transformation that references a database in a mapping data flow, the changes that happen on the SQL source are automatically applied to the target database, giving you a simple data replication scenario between databases. You can use the update method in the sink transformation to choose whether to allow inserts, updates, or deletes on the target database. The following example shows the mapping data flow script:
+
+```json
+source(output(
+ id as integer,
+ name as string
+ ),
+ allowSchemaDrift: true,
+ validateSchema: false,
+ enableNativeCdc: true,
+ netChanges: true,
+ skipInitialLoad: false,
+ isolationLevel: 'READ_UNCOMMITTED',
+ format: 'table') ~> source1
+source1 sink(allowSchemaDrift: true,
+ validateSchema: false,
+ deletable:true,
+ insertable:true,
+ updateable:true,
+ upsertable:true,
+ keys:['id'],
+ format: 'table',
+ skipDuplicateMapInputs: true,
+ skipDuplicateMapOutputs: true,
+ errorHandlingOption: 'stopOnFirstError') ~> sink1
+```
+
+### Example 2:
+
+If you want an ETL scenario instead of data replication between databases via SQL CDC, you can use expressions in the mapping data flow, such as isInsert(1), isUpdate(1), and isDelete(1), to differentiate rows by operation type. The following example script derives a column whose value is 1 for inserted rows, 2 for updated rows, and 3 for deleted rows, so that downstream transformations can process the delta data:
+
+```json
+source(output(
+ id as integer,
+ name as string
+ ),
+ allowSchemaDrift: true,
+ validateSchema: false,
+ enableNativeCdc: true,
+ netChanges: true,
+ skipInitialLoad: false,
+ isolationLevel: 'READ_UNCOMMITTED',
+ format: 'table') ~> source1
+source1 derive(operationType = iif(isInsert(1), 1, iif(isUpdate(1), 2, 3))) ~> derivedColumn1
+derivedColumn1 sink(allowSchemaDrift: true,
+ validateSchema: false,
+ skipDuplicateMapInputs: true,
+ skipDuplicateMapOutputs: true) ~> sink1
+```
+
+### Known limitation:
+
+* Only **net changes** from SQL CDC will be loaded by ADF via [cdc.fn_cdc_get_net_changes_](https://learn.microsoft.com/sql/relational-databases/system-functions/cdc-fn-cdc-get-net-changes-capture-instance-transact-sql?source=recommendations).
++ ## Next steps For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Azure Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-sql-managed-instance.md
The below table lists the properties supported by Azure SQL Managed Instance sou
| Query | If you select Query as input, specify a SQL query to fetch data from source, which overrides any table you specify in dataset. Using queries is a great way to reduce rows for testing or lookups.<br><br>**Order By** clause is not supported, but you can set a full SELECT FROM statement. You can also use user-defined table functions. **select * from udfGetData()** is a UDF in SQL that returns a table that you can use in data flow.<br>Query example: `Select * from MyTable where customerId > 1000 and customerId < 2000`| No | String | query | | Batch size | Specify a batch size to chunk large data into reads. | No | Integer | batchSize | | Isolation Level | Choose one of the following isolation levels:<br>- Read Committed<br>- Read Uncommitted (default)<br>- Repeatable Read<br>- Serializable<br>- None (ignore isolation level) | No | <small>READ_COMMITTED<br/>READ_UNCOMMITTED<br/>REPEATABLE_READ<br/>SERIALIZABLE<br/>NONE</small> |isolationLevel |
+| Enable incremental extract | Use this option to tell ADF to only process rows that have changed since the last time that the pipeline executed. | No | - |- |
+| Incremental date column | When using the incremental extract feature, you must choose the date/time column that you wish to use as the watermark in your source table. | No | - |- |
+| Enable native change data capture (Preview) | Use this option to tell ADF to only process delta data captured by [SQL change data capture technology](https://learn.microsoft.com/sql/relational-databases/track-changes/about-change-data-capture-sql-server) since the last time that the pipeline executed. With this option, delta data, including row inserts, updates, and deletions, is loaded automatically without requiring an incremental date column. You need to [enable change data capture](https://learn.microsoft.com/sql/relational-databases/track-changes/enable-and-disable-change-data-capture-sql-server) on Azure SQL MI before using this option in ADF. For more information about this option in ADF, see [native change data capture](#native-change-data-capture). | No | - |- |
+| Start reading from beginning | Setting this option with incremental extract will instruct ADF to read all rows on first execution of a pipeline with incremental extract turned on. | No | - |- |
+ > [!TIP] > The [common table expression (CTE)](/sql/t-sql/queries/with-common-table-expression-transact-sql?view=sql-server-ver15&preserve-view=true) in SQL is not supported in the mapping data flow **Query** mode, because the prerequisite of using this mode is that queries can be used in the SQL query FROM clause but CTEs cannot do this.
When you copy data from/to SQL Managed Instance with [Always Encrypted](/sql/rel
>[!NOTE] >Currently, SQL Managed Instance [**Always Encrypted**](/sql/relational-databases/security/encryption/always-encrypted-database-engine?view=sql-server-ver15&preserve-view=true) is only supported for source transformation in mapping data flows. +
+## Native change data capture
+
+Azure Data Factory supports native change data capture capabilities for SQL Server, Azure SQL DB, and Azure SQL MI. Changed data in SQL stores, including row inserts, updates, and deletions, is automatically detected and extracted by an ADF mapping data flow. With the no-code experience in mapping data flows, you can easily set up a data replication scenario from SQL stores by adding a database as the destination store. You can also compose any data transformation logic in between to build an incremental ETL scenario from SQL stores.
+
+Keep the pipeline and activity names unchanged, so that ADF can record the checkpoint and automatically get the changed data since the last run. If you change the pipeline name or activity name, the checkpoint is reset, and the next run either starts from the beginning or picks up changes only from that point forward. If you want to rename the pipeline or activity and still keep the checkpoint, use your own checkpoint key in the data flow activity.
+
+This feature works the same way when you debug the pipeline. Be aware that the checkpoint is reset when you refresh your browser during a debug run. After you're satisfied with the results of the debug run, you can publish and trigger the pipeline. The first time you trigger the published pipeline, it automatically restarts from the beginning or picks up changes from that point forward.
+
+In the monitoring section, you can always rerun a pipeline. When you do, the changed data is always captured from the previous checkpoint of the selected pipeline run.
+
+### Example 1:
+
+When you directly chain a source transformation that references a SQL CDC-enabled dataset with a sink transformation that references a database in a mapping data flow, the changes that happen on the SQL source are automatically applied to the target database, giving you a simple data replication scenario between databases. You can use the update method in the sink transformation to choose whether to allow inserts, updates, or deletes on the target database. The following example shows the mapping data flow script:
+
+```json
+source(output(
+ id as integer,
+ name as string
+ ),
+ allowSchemaDrift: true,
+ validateSchema: false,
+ enableNativeCdc: true,
+ netChanges: true,
+ skipInitialLoad: false,
+ isolationLevel: 'READ_UNCOMMITTED',
+ format: 'table') ~> source1
+source1 sink(allowSchemaDrift: true,
+ validateSchema: false,
+ deletable:true,
+ insertable:true,
+ updateable:true,
+ upsertable:true,
+ keys:['id'],
+ format: 'table',
+ skipDuplicateMapInputs: true,
+ skipDuplicateMapOutputs: true,
+ errorHandlingOption: 'stopOnFirstError') ~> sink1
+```
+
+### Example 2:
+
+If you want an ETL scenario instead of data replication between databases via SQL CDC, you can use expressions in the mapping data flow, such as isInsert(1), isUpdate(1), and isDelete(1), to differentiate rows by operation type. The following example script derives a column whose value is 1 for inserted rows, 2 for updated rows, and 3 for deleted rows, so that downstream transformations can process the delta data:
+
+```json
+source(output(
+ id as integer,
+ name as string
+ ),
+ allowSchemaDrift: true,
+ validateSchema: false,
+ enableNativeCdc: true,
+ netChanges: true,
+ skipInitialLoad: false,
+ isolationLevel: 'READ_UNCOMMITTED',
+ format: 'table') ~> source1
+source1 derive(operationType = iif(isInsert(1), 1, iif(isUpdate(1), 2, 3))) ~> derivedColumn1
+derivedColumn1 sink(allowSchemaDrift: true,
+ validateSchema: false,
+ skipDuplicateMapInputs: true,
+ skipDuplicateMapOutputs: true) ~> sink1
+```
+
+### Known limitation:
+
+* Only **net changes** from SQL CDC will be loaded by ADF via [cdc.fn_cdc_get_net_changes_](https://learn.microsoft.com/sql/relational-databases/system-functions/cdc-fn-cdc-get-net-changes-capture-instance-transact-sql?source=recommendations).
++ ## Next steps For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sql-server.md
The below table lists the properties supported by SQL Server source. You can edi
| Query | If you select Query as input, specify a SQL query to fetch data from source, which overrides any table you specify in dataset. Using queries is a great way to reduce rows for testing or lookups.<br><br>**Order By** clause is not supported, but you can set a full SELECT FROM statement. You can also use user-defined table functions. **select * from udfGetData()** is a UDF in SQL that returns a table that you can use in data flow.<br>Query example: `Select * from MyTable where customerId > 1000 and customerId < 2000`| No | String | query | | Batch size | Specify a batch size to chunk large data into reads. | No | Integer | batchSize | | Isolation Level | Choose one of the following isolation levels:<br>- Read Committed<br>- Read Uncommitted (default)<br>- Repeatable Read<br>- Serializable<br>- None (ignore isolation level) | No | <small>READ_COMMITTED<br/>READ_UNCOMMITTED<br/>REPEATABLE_READ<br/>SERIALIZABLE<br/>NONE</small> |isolationLevel |
+| Enable incremental extract | Use this option to tell ADF to only process rows that have changed since the last time that the pipeline executed. | No | - |- |
+| Incremental date column | When using the incremental extract feature, you must choose the date/time column that you wish to use as the watermark in your source table. | No | - |- |
+| Enable native change data capture (Preview) | Use this option to tell ADF to only process delta data captured by [SQL change data capture technology](https://learn.microsoft.com/sql/relational-databases/track-changes/about-change-data-capture-sql-server) since the last time that the pipeline executed. With this option, delta data, including row inserts, updates, and deletions, is loaded automatically without requiring an incremental date column. You need to [enable change data capture](https://learn.microsoft.com/sql/relational-databases/track-changes/enable-and-disable-change-data-capture-sql-server) on SQL Server before using this option in ADF. For more information about this option in ADF, see [native change data capture](#native-change-data-capture). | No | - |- |
+| Start reading from beginning | Setting this option with incremental extract will instruct ADF to read all rows on first execution of a pipeline with incremental extract turned on. | No | - |- |
+++ > [!TIP] > The [common table expression (CTE)](/sql/t-sql/queries/with-common-table-expression-transact-sql?view=sql-server-ver15&preserve-view=true) in SQL is not supported in the mapping data flow **Query** mode, because the prerequisite of using this mode is that queries can be used in the SQL query FROM clause but CTEs cannot do this.
When you copy data from/to SQL Server with [Always Encrypted](/sql/relational-da
>[!NOTE] >Currently, SQL Server [**Always Encrypted**](/sql/relational-databases/security/encryption/always-encrypted-database-engine?view=sql-server-ver15&preserve-view=true) is only supported for source transformation in mapping data flows. +
+## Native change data capture
+
+Azure Data Factory supports native change data capture capabilities for SQL Server, Azure SQL DB, and Azure SQL MI. Changed data in SQL stores, including row inserts, updates, and deletions, is automatically detected and extracted by an ADF mapping data flow. With the no-code experience in mapping data flows, you can easily set up a data replication scenario from SQL stores by adding a database as the destination store. You can also compose any data transformation logic in between to build an incremental ETL scenario from SQL stores.
+
+Keep the pipeline and activity names unchanged, so that ADF can record the checkpoint and automatically get the changed data since the last run. If you change the pipeline name or activity name, the checkpoint is reset, and the next run either starts from the beginning or picks up changes only from that point forward. If you want to rename the pipeline or activity and still keep the checkpoint, use your own checkpoint key in the data flow activity.
+
+This feature works the same way when you debug the pipeline. Be aware that the checkpoint is reset when you refresh your browser during a debug run. After you're satisfied with the results of the debug run, you can publish and trigger the pipeline. The first time you trigger the published pipeline, it automatically restarts from the beginning or picks up changes from that point forward.
+
+In the monitoring section, you can always rerun a pipeline. When you do, the changed data is always captured from the previous checkpoint of the selected pipeline run.
+
+### Example 1:
+
+When you directly chain a source transformation that references a SQL CDC-enabled dataset with a sink transformation that references a database in a mapping data flow, the changes that happen on the SQL source are automatically applied to the target database, giving you a simple data replication scenario between databases. You can use the update method in the sink transformation to choose whether to allow inserts, updates, or deletes on the target database. The following example shows the mapping data flow script:
+
+```json
+source(output(
+ id as integer,
+ name as string
+ ),
+ allowSchemaDrift: true,
+ validateSchema: false,
+ enableNativeCdc: true,
+ netChanges: true,
+ skipInitialLoad: false,
+ isolationLevel: 'READ_UNCOMMITTED',
+ format: 'table') ~> source1
+source1 sink(allowSchemaDrift: true,
+ validateSchema: false,
+ deletable:true,
+ insertable:true,
+ updateable:true,
+ upsertable:true,
+ keys:['id'],
+ format: 'table',
+ skipDuplicateMapInputs: true,
+ skipDuplicateMapOutputs: true,
+ errorHandlingOption: 'stopOnFirstError') ~> sink1
+```
+
+### Example 2:
+
+If you want an ETL scenario instead of data replication between databases via SQL CDC, you can use expressions in the mapping data flow, such as isInsert(1), isUpdate(1), and isDelete(1), to differentiate rows by operation type. The following example script derives a column whose value is 1 for inserted rows, 2 for updated rows, and 3 for deleted rows, so that downstream transformations can process the delta data:
+
+```json
+source(output(
+ id as integer,
+ name as string
+ ),
+ allowSchemaDrift: true,
+ validateSchema: false,
+ enableNativeCdc: true,
+ netChanges: true,
+ skipInitialLoad: false,
+ isolationLevel: 'READ_UNCOMMITTED',
+ format: 'table') ~> source1
+source1 derive(operationType = iif(isInsert(1), 1, iif(isUpdate(1), 2, 3))) ~> derivedColumn1
+derivedColumn1 sink(allowSchemaDrift: true,
+ validateSchema: false,
+ skipDuplicateMapInputs: true,
+ skipDuplicateMapOutputs: true) ~> sink1
+```
+
+### Known limitation:
+
+* Only **net changes** from SQL CDC will be loaded by ADF via [cdc.fn_cdc_get_net_changes_](https://learn.microsoft.com/sql/relational-databases/system-functions/cdc-fn-cdc-get-net-changes-capture-instance-transact-sql?source=recommendations).
+ ## Troubleshoot connection issues 1. Configure your SQL Server instance to accept remote connections. Start **SQL Server Management Studio**, right-click **server**, and select **Properties**. Select **Connections** from the list, and select the **Allow remote connections to this server** check box.
data-factory Control Flow Expression Language Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-expression-language-functions.md
Following on Example 3, this example finds the value in the
`xpath(xml(body('Http')), 'string(/*[name()=\"file\"]/*[name()=\"location\"])')` And returns this result: `"Paris"`
+
+> [!NOTE]
+> You can add comments to data flow expressions, but not in pipeline expressions.
## Next steps For a list of system variables you can use in expressions, see [System variables](control-flow-system-variables.md).
data-factory Data Flow Parse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-parse.md
Use the expression builder to set the source for your parsing. This can be as si
* Expression: ```(level as string, registration as long)``` * Source Nested JSON data: ```{"car" : {"model" : "camaro", "year" : 1989}, "color" : "white", "transmission" : "v8"}```
-* Expression: ```(car as (model as string, year as integer), color as string, transmission as string)```
+ * Expression: ```(car as (model as string, year as integer), color as string, transmission as string)```
* Source XML data: ```<Customers><Customer>122</Customer><CompanyName>Great Lakes Food Market</CompanyName></Customers>``` * Expression: ```(Customers as (Customer as integer, CompanyName as string))``` * Source XML with Attribute data: ```<cars><car model="camaro"><year>1989</year></car></cars>```
-* Expression: ```(cars as (car as ({@model} as string, year as integer)))```
+ * Expression: ```(cars as (car as ({@model} as string, year as integer)))```
+ * Note: If you run into errors extracting attributes (for example, `@model`) from a complex type, a workaround is to convert the complex type to a string, remove the @ symbol (for example, `replace(toString(your_xml_string_parsed_column_name.cars.car),'@','')`), and then use the parse JSON transformation.
data-factory Plan Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/plan-manage-costs.md
Use the [ADF pricing calculator](https://azure.microsoft.com/pricing/calculator/
One of the commonly asked questions for the pricing calculator is what values should be used as inputs. During the proof-of-concept phase, you can conduct trial runs using sample datasets to understand the consumption for various ADF meters. Then based on the consumption for the sample dataset, you can project out the consumption for the full dataset and operationalization schedule. > [!NOTE]
-> The prices used in these examples below are hypothetical and are not intended to imply actual pricing.
+> The prices used in the example below are hypothetical and are not intended to imply actual pricing.
For example, letΓÇÖs say you need to move 1 TB of data daily from AWS S3 to Azure Data Lake Gen2. You can perform POC of moving 100 GB of data to measure the data ingestion throughput and understand the corresponding billing consumption.
Budgets can be created with filters for specific resources or services in Azure
## Export cost data
-You can also [export your cost data](../cost-management-billing/costs/tutorial-export-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to a storage account. This is helpful when you need or others to do additional data analysis for costs. For example, a finance teams can analyze the data using Excel or Power BI. You can export your costs on a daily, weekly, or monthly schedule and set a custom date range. Exporting cost data is the recommended way to retrieve cost datasets.
+You can also [export your cost data](../cost-management-billing/costs/tutorial-export-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to a storage account. This is helpful when you or others need to do additional data analysis for costs. For example, finance teams can analyze the data using Excel or Power BI. You can export your costs on a daily, weekly, or monthly schedule and set a custom date range. Exporting cost data is the recommended way to retrieve cost datasets.
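If you prefer scripting the export instead of using the portal, the following sketch uses the Azure CLI **costmanagement** extension. The export name, subscription ID, storage account, container, and recurrence period are placeholder values to replace with your own; verify the parameters against the linked export tutorial.

```azurecli-interactive
# Create a daily export of month-to-date actual costs to a storage account (placeholder values).
az costmanagement export create \
    --name myDailyCostExport \
    --scope "subscriptions/00000000-0000-0000-0000-000000000000" \
    --storage-account-id "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount" \
    --storage-container costexports \
    --storage-directory dailyexports \
    --timeframe MonthToDate \
    --type ActualCost \
    --recurrence Daily \
    --recurrence-period from="2022-10-01T00:00:00Z" to="2023-09-30T00:00:00Z"
```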
## Next steps
data-factory Pricing Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-concepts.md
Previously updated : 08/18/2022 Last updated : 09/22/2022 # Understanding Data Factory pricing through examples
Last updated 08/18/2022
This article explains and demonstrates the Azure Data Factory pricing model with detailed examples. You can also refer to the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/) for more specific scenarios and to estimate your future costs to use the service.
-> [!NOTE]
-> The prices used in these examples below are hypothetical and are not intended to imply actual pricing.
+For more details about pricing in Azure Data Factory, refer to the [Data Pipeline Pricing and FAQ](https://azure.microsoft.com/pricing/details/data-factory/data-pipeline/).
-## Copy data from AWS S3 to Azure Blob storage hourly
+## Pricing examples
+The prices used in the examples below are hypothetical and are not intended to imply actual pricing. Read/write and monitoring costs are not shown because they are typically negligible and don't significantly affect overall costs. Activity runs are also rounded to the nearest 1,000 in pricing calculator estimates.
-In this scenario, you want to copy data from AWS S3 to Azure Blob storage on an hourly schedule.
-
-To accomplish the scenario, you need to create a pipeline with the following items:
-
-1. A copy activity with an input dataset for the data to be copied from AWS S3.
-
-2. An output dataset for the data on Azure Storage.
-
-3. A schedule trigger to execute the pipeline every hour.
-
- :::image type="content" source="media/pricing-concepts/scenario1.png" alt-text="Diagram shows a pipeline with a schedule trigger. In the pipeline, copy activity flows to an input dataset, which flows to an A W S S3 linked service and copy activity also flows to an output dataset, which flows to an Azure Storage linked service.":::
-
-| **Operations** | **Types and Units** |
-| | |
-| Create Linked Service | 2 Read/Write entity |
-| Create Datasets | 4 Read/Write entities (2 for dataset creation, 2 for linked service references) |
-| Create Pipeline | 3 Read/Write entities (1 for pipeline creation, 2 for dataset references) |
-| Get Pipeline | 1 Read/Write entity |
-| Run Pipeline | 2 Activity runs (1 for trigger run, 1 for activity runs) |
-| Copy Data Assumption: execution time = 10 min | 10 \* 4 Azure Integration Runtime (default DIU setting = 4) For more information on data integration units and optimizing copy performance, see [this article](copy-activity-performance.md) |
-| Monitor Pipeline Assumption: Only 1 run occurred | 2 Monitoring run records retrieved (1 for pipeline run, 1 for activity run) |
-
-**Total Scenario pricing: $0.16811**
--- Data Factory Operations = **$0.0001**
- - Read/Write = 10\*0.00001 = $0.0001 [1 R/W = $0.50/50000 = 0.00001]
- - Monitoring = 2\*0.000005 = $0.00001 [1 Monitoring = $0.25/50000 = 0.000005]
-- Pipeline Orchestration &amp; Execution = **$0.168**
- - Activity Runs = 0.001\*2 = $0.002 [1 run = $1/1000 = 0.001]
- - Data Movement Activities = $0.166 (Prorated for 10 minutes of execution time. $0.25/hour on Azure Integration Runtime)
-
-## Copy data and transform with Azure Databricks hourly
-
-In this scenario, you want to copy data from AWS S3 to Azure Blob storage and transform the data with Azure Databricks on an hourly schedule.
-
-To accomplish the scenario, you need to create a pipeline with the following items:
-
-1. One copy activity with an input dataset for the data to be copied from AWS S3, and an output dataset for the data on Azure storage.
-2. One Azure Databricks activity for the data transformation.
-3. One schedule trigger to execute the pipeline every hour.
--
-| **Operations** | **Types and Units** |
-| | |
-| Create Linked Service | 3 Read/Write entity |
-| Create Datasets | 4 Read/Write entities (2 for dataset creation, 2 for linked service references) |
-| Create Pipeline | 3 Read/Write entities (1 for pipeline creation, 2 for dataset references) |
-| Get Pipeline | 1 Read/Write entity |
-| Run Pipeline | 3 Activity runs (1 for trigger run, 2 for activity runs) |
-| Copy Data Assumption: execution time = 10 min | 10 \* 4 Azure Integration Runtime (default DIU setting = 4) For more information on data integration units and optimizing copy performance, see [this article](copy-activity-performance.md) |
-| Monitor Pipeline Assumption: Only 1 run occurred | 3 Monitoring run records retrieved (1 for pipeline run, 2 for activity run) |
-| Execute Databricks activity Assumption: execution time = 10 min | 10 min External Pipeline Activity Execution |
-
-**Total Scenario pricing: $0.16916**
--- Data Factory Operations = **$0.00012**
- - Read/Write = 11\*0.00001 = $0.00011 [1 R/W = $0.50/50000 = 0.00001]
- - Monitoring = 3\*0.000005 = $0.00001 [1 Monitoring = $0.25/50000 = 0.000005]
-- Pipeline Orchestration &amp; Execution = **$0.16904**
- - Activity Runs = 0.001\*3 = $0.003 [1 run = $1/1000 = 0.001]
- - Data Movement Activities = $0.166 (Prorated for 10 minutes of execution time. $0.25/hour on Azure Integration Runtime)
- - External Pipeline Activity = $0.000041 (Prorated for 10 minutes of execution time. $0.00025/hour on Azure Integration Runtime)
-
-## Copy data and transform with dynamic parameters hourly
-
-In this scenario, you want to copy data from AWS S3 to Azure Blob storage and transform with Azure Databricks (with dynamic parameters in the script) on an hourly schedule.
-
-To accomplish the scenario, you need to create a pipeline with the following items:
-
-1. One copy activity with an input dataset for the data to be copied from AWS S3, an output dataset for the data on Azure storage.
-2. One Lookup activity for passing parameters dynamically to the transformation script.
-3. One Azure Databricks activity for the data transformation.
-4. One schedule trigger to execute the pipeline every hour.
--
-| **Operations** | **Types and Units** |
-| | |
-| Create Linked Service | 3 Read/Write entity |
-| Create Datasets | 4 Read/Write entities (2 for dataset creation, 2 for linked service references) |
-| Create Pipeline | 3 Read/Write entities (1 for pipeline creation, 2 for dataset references) |
-| Get Pipeline | 1 Read/Write entity |
-| Run Pipeline | 4 Activity runs (1 for trigger run, 3 for activity runs) |
-| Copy Data Assumption: execution time = 10 min | 10 \* 4 Azure Integration Runtime (default DIU setting = 4) For more information on data integration units and optimizing copy performance, see [this article](copy-activity-performance.md) |
-| Monitor Pipeline Assumption: Only 1 run occurred | 4 Monitoring run records retrieved (1 for pipeline run, 3 for activity run) |
-| Execute Lookup activity Assumption: execution time = 1 min | 1 min Pipeline Activity execution |
-| Execute Databricks activity Assumption: execution time = 10 min | 10 min External Pipeline Activity execution |
-
-**Total Scenario pricing: $0.17020**
--- Data Factory Operations = **$0.00013**
- - Read/Write = 11\*0.00001 = $0.00011 [1 R/W = $0.50/50000 = 0.00001]
- - Monitoring = 4\*0.000005 = $0.00002 [1 Monitoring = $0.25/50000 = 0.000005]
-- Pipeline Orchestration &amp; Execution = **$0.17007**
- - Activity Runs = 0.001\*4 = $0.004 [1 run = $1/1000 = 0.001]
- - Data Movement Activities = $0.166 (Prorated for 10 minutes of execution time. $0.25/hour on Azure Integration Runtime)
- - Pipeline Activity = $0.00003 (Prorated for 1 minute of execution time. $0.005/hour on Azure Integration Runtime)
- - External Pipeline Activity = $0.000041 (Prorated for 10 minutes of execution time. $0.00025/hour on Azure Integration Runtime)
-
-## Run SSIS packages on Azure-SSIS integration runtime
-
-Azure-SSIS integration runtime (IR) is a specialized cluster of Azure virtual machines (VMs) for SSIS package executions in Azure Data Factory (ADF). When you provision it, it will be dedicated to you, hence it will be charged just like any other dedicated Azure VMs as long as you keep it running, regardless whether you use it to execute SSIS packages or not. With respect to its running cost, youΓÇÖll see the hourly estimate on its setup pane in ADF portal, for example:
--
-In the above example, if you keep your Azure-SSIS IR running for 2 hours, you'll be charged: **2 (hours) x US$1.158/hour = US$2.316**.
-
-To manage your Azure-SSIS IR running cost, you can scale down your VM size, scale in your cluster size, bring your own SQL Server license via Azure Hybrid Benefit (AHB) option that offers significant savings, see [Azure-SSIS IR pricing](https://azure.microsoft.com/pricing/details/data-factory/ssis/), and or start & stop your Azure-SSIS IR whenever convenient/on demand/just in time to process your SSIS workloads, see [Reconfigure Azure-SSIS IR](manage-azure-ssis-integration-runtime.md#to-reconfigure-an-azure-ssis-ir) and [Schedule Azure-SSIS IR](how-to-schedule-azure-ssis-integration-runtime.md).
-
-## Using mapping data flow debug for a normal workday
-
-As a Data Engineer, Sam is responsible for designing, building, and testing mapping data flows every day. Sam logs into the ADF UI in the morning and enables the Debug mode for Data Flows. The default TTL for Debug sessions is 60 minutes. Sam works throughout the day for 8 hours, so the Debug session never expires. Therefore, Sam's charges for the day will be:
-
-**8 (hours) x 8 (compute-optimized cores) x $0.193 = $12.35**
-
-At the same time, Chris, another Data Engineer, also logs into the ADF browser UI for data profiling and ETL design work. Chris does not work in ADF all day like Sam. Chris only needs to use the data flow debugger for 1 hour during the same period and same day as Sam above. These are the charges Chris incurs for debug usage:
-
-**1 (hour) x 8 (general purpose cores) x $0.274 = $2.19**
-
-## Transform data in blob store with mapping data flows
-
-In this scenario, you want to transform data in Blob Store visually in ADF mapping data flows on an hourly schedule.
-
-To accomplish the scenario, you need to create a pipeline with the following items:
-
-1. A Data Flow activity with the transformation logic.
-
-2. An input dataset for the data on Azure Storage.
-
-3. An output dataset for the data on Azure Storage.
-
-4. A schedule trigger to execute the pipeline every hour.
-
-| **Operations** | **Types and Units** |
-| | |
-| Create Linked Service | 2 Read/Write entity |
-| Create Datasets | 4 Read/Write entities (2 for dataset creation, 2 for linked service references) |
-| Create Pipeline | 3 Read/Write entities (1 for pipeline creation, 2 for dataset references) |
-| Get Pipeline | 1 Read/Write entity |
-| Run Pipeline | 2 Activity runs (1 for trigger run, 1 for activity runs) |
-| Data Flow Assumptions: execution time = 10 min + 10 min TTL | 10 \* 16 cores of General Compute with TTL of 10 |
-| Monitor Pipeline Assumption: Only 1 run occurred | 2 Monitoring run records retrieved (1 for pipeline run, 1 for activity run) |
-
-**Total Scenario pricing: $1.4631**
--- Data Factory Operations = **$0.0001**
- - Read/Write = 10\*0.00001 = $0.0001 [1 R/W = $0.50/50000 = 0.00001]
- - Monitoring = 2\*0.000005 = $0.00001 [1 Monitoring = $0.25/50000 = 0.000005]
-- Pipeline Orchestration &amp; Execution = **$1.463**
- - Activity Runs = 0.001\*2 = $0.002 [1 run = $1/1000 = 0.001]
- - Data Flow Activities = $1.461 prorated for 20 minutes (10 mins execution time + 10 mins TTL). $0.274/hour on Azure Integration Runtime with 16 cores general compute
-
-## Data integration in Azure Data Factory Managed VNET
-In this scenario, you want to delete original files on Azure Blob Storage and copy data from Azure SQL Database to Azure Blob Storage. You will do this execution twice on different pipelines. The execution time of these two pipelines is overlapping.
-To accomplish the scenario, you need to create two pipelines with the following items:
- - A pipeline activity ΓÇô Delete Activity.
- - A copy activity with an input dataset for the data to be copied from Azure Blob storage.
- - An output dataset for the data on Azure SQL Database.
- - A schedule triggers to execute the pipeline.
--
-| **Operations** | **Types and Units** |
-| | |
-| Create Linked Service | 4 Read/Write entity |
-| Create Datasets | 8 Read/Write entities (4 for dataset creation, 4 for linked service references) |
-| Create Pipeline | 6 Read/Write entities (2 for pipeline creation, 4 for dataset references) |
-| Get Pipeline | 2 Read/Write entity |
-| Run Pipeline | 6 Activity runs (2 for trigger run, 4 for activity runs) |
-| Execute Delete Activity: each execution time = 5 min. The Delete Activity execution in first pipeline is from 10:00 AM UTC to 10:05 AM UTC. The Delete Activity execution in second pipeline is from 10:02 AM UTC to 10:07 AM UTC.|Total 7 min pipeline activity execution in Managed VNET. Pipeline activity supports up to 50 concurrency in Managed VNET. There is a 60 minutes Time To Live (TTL) for pipeline activity|
-| Copy Data Assumption: each execution time = 10 min. The Copy execution in first pipeline is from 10:06 AM UTC to 10:15 AM UTC. The Copy Activity execution in second pipeline is from 10:08 AM UTC to 10:17 AM UTC. | 10 * 4 Azure Integration Runtime (default DIU setting = 4) For more information on data integration units and optimizing copy performance, see [this article](copy-activity-performance.md) |
-| Monitor Pipeline Assumption: Only 2 runs occurred | 6 Monitoring run records retrieved (2 for pipeline run, 4 for activity run) |
--
-**Total Scenario pricing: $1.45523**
--- Data Factory Operations = $0.00023
- - Read/Write = 20*0.00001 = $0.0002 [1 R/W = $0.50/50000 = 0.00001]
- - Monitoring = 6*0.000005 = $0.00003 [1 Monitoring = $0.25/50000 = 0.000005]
-- Pipeline Orchestration & Execution = $1.455
- - Activity Runs = 0.001*6 = $0.006 [1 run = $1/1000 = 0.001]
- - Data Movement Activities = $0.333 (Prorated for 10 minutes of execution time. $0.25/hour on Azure Integration Runtime)
- - Pipeline Activity = $1.116 (Prorated for 7 minutes of execution time plus 60 minutes TTL. $1/hour on Azure Integration Runtime)
-
-> [!NOTE]
-> These prices are for example purposes only.
-
-**FAQ**
-
-Q: If I would like to run more than 50 pipeline activities, can these activities be executed simultaneously?
-
-A: Max 50 concurrent pipeline activities will be allowed. The 51th pipeline activity will be queued until a ΓÇ£free slotΓÇ¥ is opened up.
-Same for external activity. Max 800 concurrent external activities will be allowed.
+- [Copy data from AWS S3 to Azure Blob storage hourly for 30 days](pricing-examples-s3-to-blob.md)
+- [Copy data and transform with Azure Databricks hourly for 30 days](pricing-examples-copy-transform-azure-databricks.md)
+- [Copy data and transform with dynamic parameters hourly for 30 days](pricing-examples-copy-transform-dynamic-parameters.md)
+- [Run SSIS packages on Azure-SSIS integration runtime](pricing-examples-ssis-on-azure-ssis-integration-runtime.md)
+- [Using mapping data flow debug for a normal workday](pricing-examples-mapping-data-flow-debug-workday.md)
+- [Transform data in blob store with mapping data flows](pricing-examples-transform-mapping-data-flows.md)
+- [Data integration in Azure Data Factory Managed VNET](pricing-examples-data-integration-managed-vnet.md)
+- [Pricing example: Get delta data from SAP ECC via SAP CDC in mapping data flows](pricing-examples-get-delta-data-from-sap-ecc.md)
## Next steps

Now that you understand the pricing for Azure Data Factory, you can get started!

- [Create a data factory by using the Azure Data Factory UI](quickstart-create-data-factory-portal.md)
- [Introduction to Azure Data Factory](introduction.md)
- [Visual authoring in Azure Data Factory](author-visually.md)
data-factory Pricing Examples Copy Transform Azure Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-examples-copy-transform-azure-databricks.md
+
+ Title: "Pricing example: Copy data and transform with Azure Databricks hourly"
+description: This article shows how to estimate pricing for Azure Data Factory to copy data and transform it with Azure Databricks every hour for 30 days.
++++++ Last updated : 09/22/2022++
+# Pricing example: Copy data and transform with Azure Databricks hourly
++
+In this scenario, you want to copy data from AWS S3 to Azure Blob storage and transform the data with Azure Databricks on an hourly schedule for 30 days.
+
+The prices used in this example are hypothetical and aren't intended to imply exact pricing. Read/write and monitoring costs are not shown since they are typically negligible and will not impact overall costs significantly. Activity runs are also rounded to the nearest 1000 in pricing calculator estimates.
+
+Refer to the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/) for more specific scenarios and to estimate your future costs to use the service.
+
+## Configuration
+
+To accomplish the scenario, you need to create a pipeline with the following items:
+
+- One copy activity with an input dataset for the data to be copied from AWS S3, and an output dataset for the data on Azure storage.
+- One Azure Databricks activity for the data transformation.
+- One schedule trigger to execute the pipeline every hour. When you want to run a pipeline, you can either [trigger it immediately or schedule it](concepts-pipeline-execution-triggers.md). In addition to the pipeline itself, each trigger instance counts as a single Activity run.
++
+## Cost estimation
+
+| **Operations** | **Types and Units** |
+| | |
+| Run Pipeline | 3 Activity runs per execution (1 for trigger run, 2 for activity runs) |
+| Copy Data Assumption: execution time per run = 10 min | 10 \* 4 Azure Integration Runtime (default DIU setting = 4) For more information on data integration units and optimizing copy performance, see [this article](copy-activity-performance.md) |
+| Execute Databricks activity Assumption: execution time per run = 10 min | 10 min External Pipeline Activity Execution |
+
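
To see roughly how the calculator arrives at the total below, here's a minimal Python sketch of the 30-day estimate, using the hypothetical unit prices from the examples in this article ($1 per 1,000 activity runs, $0.25 per DIU-hour, and $0.00025 per external pipeline activity hour); your regional rates will differ.

```python
# Sketch of the 30-day estimate under the hypothetical unit prices above.
runs = 24 * 30                                   # one pipeline run per hour for 30 days

activity_runs = 3 * runs                         # 1 trigger run + 2 activity runs per execution
activity_runs_rounded = round(activity_runs / 1000) * 1000
orchestration = activity_runs_rounded * (1 / 1000)

copy_diu_hours = runs * 10 / 60 * 4              # 10 minutes per run at 4 DIUs
data_movement = copy_diu_hours * 0.25

external_hours = runs * 10 / 60                  # 10-minute Databricks activity per run
external = external_hours * 0.00025

total = orchestration + data_movement + external
print(f"${total:.2f}")                           # ≈ $122.03
```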
+## Pricing calculator example
+
+**Total scenario pricing for 30 days: $122.03**
++
+## Next steps
+
+- [Pricing example: Copy data from AWS S3 to Azure Blob storage hourly for 30 days](pricing-examples-s3-to-blob.md)
+- [Pricing example: Copy data and transform with dynamic parameters hourly for 30 days](pricing-examples-copy-transform-dynamic-parameters.md)
+- [Pricing example: Run SSIS packages on Azure-SSIS integration runtime](pricing-examples-ssis-on-azure-ssis-integration-runtime.md)
+- [Pricing example: Using mapping data flow debug for a normal workday](pricing-examples-mapping-data-flow-debug-workday.md)
+- [Pricing example: Transform data in blob store with mapping data flows](pricing-examples-transform-mapping-data-flows.md)
+- [Pricing example: Data integration in Azure Data Factory Managed VNET](pricing-examples-data-integration-managed-vnet.md)
+- [Pricing example: Get delta data from SAP ECC via SAP CDC in mapping data flows](pricing-examples-get-delta-data-from-sap-ecc.md)
data-factory Pricing Examples Copy Transform Dynamic Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-examples-copy-transform-dynamic-parameters.md
+
+ Title: "Pricing example: Copy data and transform with dynamic parameters hourly"
+description: This article shows how to estimate pricing for Azure Data Factory to copy data and transform it with dynamic parameters every hour for 30 days.
++++++ Last updated : 09/22/2022++
+# Pricing example: Copy data and transform with dynamic parameters hourly
++
+In this scenario, you want to copy data from AWS S3 to Azure Blob storage and transform with Azure Databricks (with dynamic parameters in the script) on an hourly schedule.
+
+The prices used in this example are hypothetical and aren't intended to imply exact pricing. Read/write and monitoring costs are not shown since they are typically negligible and will not impact overall costs significantly. Activity runs are also rounded to the nearest 1000 in pricing calculator estimates.
+
+Refer to the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/) for more specific scenarios and to estimate your future costs to use the service.
+
+## Configuration
+
+To accomplish the scenario, you need to create a pipeline with the following items:
+
+- One copy activity with an input dataset for the data to be copied from AWS S3, an output dataset for the data on Azure storage.
+- One Lookup activity for passing parameters dynamically to the transformation script.
+- One Azure Databricks activity for the data transformation.
+- One schedule trigger to execute the pipeline every hour. When you want to run a pipeline, you can either [trigger it immediately or schedule it](concepts-pipeline-execution-triggers.md). In addition to the pipeline itself, each trigger instance counts as a single Activity run.
++
+## Cost estimation
+
+| **Operations** | **Types and Units** |
+| | |
+| Run Pipeline | 4 Activity runs per execution (1 for trigger run, 3 for activity runs) |
+| Copy Data Assumption: execution time per run = 10 min | 10 \* 4 Azure Integration Runtime (default DIU setting = 4) For more information on data integration units and optimizing copy performance, see [this article](copy-activity-performance.md) |
+| Execute Lookup activity Assumption: execution time per run = 1 min | 1 min Pipeline Activity execution |
+| Execute Databricks activity Assumption: execution time per run = 10 min | 10 min External Pipeline Activity execution |
+
+## Pricing calculator example
+
+**Total scenario pricing for 30 days: $122.09**
++
+## Next steps
+
+- [Pricing example: Copy data from AWS S3 to Azure Blob storage hourly for 30 days](pricing-examples-s3-to-blob.md)
+- [Pricing example: Copy data and transform with Azure Databricks hourly for 30 days](pricing-examples-copy-transform-azure-databricks.md)
+- [Pricing example: Run SSIS packages on Azure-SSIS integration runtime](pricing-examples-ssis-on-azure-ssis-integration-runtime.md)
+- [Pricing example: Using mapping data flow debug for a normal workday](pricing-examples-mapping-data-flow-debug-workday.md)
+- [Pricing example: Transform data in blob store with mapping data flows](pricing-examples-transform-mapping-data-flows.md)
+- [Pricing example: Data integration in Azure Data Factory Managed VNET](pricing-examples-data-integration-managed-vnet.md)
+- [Pricing example: Get delta data from SAP ECC via SAP CDC in mapping data flows](pricing-examples-get-delta-data-from-sap-ecc.md)
data-factory Pricing Examples Data Integration Managed Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-examples-data-integration-managed-vnet.md
+
+ Title: "Pricing example: Data integration in Azure Data Factory Managed VNET"
+description: This article shows how to estimate pricing for Azure Data Factory to perform data integration using Managed VNET.
++++++ Last updated : 09/22/2022++
+# Pricing example: Data integration in Azure Data Factory Managed VNET
++
+In this scenario, you want to delete original files on Azure Blob Storage and copy data from Azure SQL Database to Azure Blob Storage on an hourly schedule, with the price calculated over 30 days. Each run executes twice, on two different pipelines, and the execution times of the two pipelines overlap.
+
+The prices used in this example are hypothetical and aren't intended to imply exact pricing. Read/write and monitoring costs aren't shown since they're typically negligible and won't impact overall costs significantly. Activity runs are also rounded to the nearest 1000 in pricing calculator estimates.
+
+Refer to the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/) for more specific scenarios and to estimate your future costs to use the service.
+
+## Configuration
+
+To accomplish the scenario, you need to create two pipelines with the following items:
+ - A pipeline activity: Delete Activity.
+ - A copy activity with an input dataset for the data to be copied from Azure Blob storage.
+ - An output dataset for the data on Azure SQL Database.
+ - A schedule trigger to execute the pipeline. When you want to run a pipeline, you can either [trigger it immediately or schedule it](concepts-pipeline-execution-triggers.md). In addition to the pipeline itself, each trigger instance counts as a single Activity run.
+
+## Cost estimation
+
+| **Operations** | **Types and Units** |
+| | |
+| Run Pipeline | 6 Activity runs per execution (2 for trigger run, 4 for activity runs) |
+| Execute Delete Activity: each execution time = 5 min. The Delete Activity execution in the first pipeline is from 10:00 AM UTC to 10:05 AM UTC, and the Delete Activity execution in the second pipeline is from 10:02 AM UTC to 10:07 AM UTC. | Total of 7 min of pipeline activity execution in Managed VNET. Pipeline activity supports up to 50 concurrent executions in Managed VNET. There's a 60-minute Time To Live (TTL) for pipeline activity. |
+| Copy Data Assumption: each execution time = 10 min. The Copy execution in the first pipeline is from 10:06 AM UTC to 10:15 AM UTC, and the Copy Activity execution in the second pipeline is from 10:08 AM UTC to 10:17 AM UTC. | 10 * 4 Azure Integration Runtime (default DIU setting = 4) For more information on data integration units and optimizing copy performance, see [this article](copy-activity-performance.md) |
+
+## Pricing calculator example
+
+**Total scenario pricing for 30 days: $129.02**
++
+## Next steps
+
+- [Pricing example: Copy data from AWS S3 to Azure Blob storage hourly for 30 days](pricing-examples-s3-to-blob.md)
+- [Pricing example: Copy data and transform with Azure Databricks hourly for 30 days](pricing-examples-copy-transform-azure-databricks.md)
+- [Pricing example: Copy data and transform with dynamic parameters hourly for 30 days](pricing-examples-copy-transform-dynamic-parameters.md)
+- [Pricing example: Run SSIS packages on Azure-SSIS integration runtime](pricing-examples-ssis-on-azure-ssis-integration-runtime.md)
+- [Pricing example: Using mapping data flow debug for a normal workday](pricing-examples-mapping-data-flow-debug-workday.md)
+- [Pricing example: Transform data in blob store with mapping data flows](pricing-examples-transform-mapping-data-flows.md)
+- [Pricing example: Get delta data from SAP ECC via SAP CDC in mapping data flows](pricing-examples-get-delta-data-from-sap-ecc.md)
data-factory Pricing Examples Get Delta Data From Sap Ecc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-examples-get-delta-data-from-sap-ecc.md
+
+ Title: "Pricing example: Get delta data from SAP ECC via SAP CDC in mapping data flows"
+description: This article shows how to price getting delta data from SAP ECC via SAP CDC in mapping data flows.
++++++ Last updated : 09/22/2022++
+# Pricing example: Get delta data from SAP ECC via SAP CDC in mapping data flows
++
+In this scenario, you want to get delta changes from one table in SAP ECC via the SAP CDC connector, do a few necessary transforms in flight, and then write the data to Azure Data Lake Gen2 storage in an ADF mapping data flow daily.
+
+The prices used in this example are hypothetical and aren't intended to imply exact pricing. Read/write and monitoring costs aren't shown since they're typically negligible and won't impact overall costs significantly. Activity runs are also rounded to the nearest 1000 in pricing calculator estimates.
+
+Refer to the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/) for more specific scenarios and to estimate your future costs to use the service.
+
+## Configuration
+
+To accomplish the scenario, you need to create a pipeline with the following items:
+
+- One Mapping Data Flow activity with an input dataset for the data to be loaded from SAP ECC, the transformation logic, and an output dataset for the data on Azure Data Lake Gen2 storage.
+- A Self-Hosted Integration Runtime referenced to SAP CDC connector.
+- A schedule trigger to execute the pipeline. When you want to run a pipeline, you can either [trigger it immediately or schedule it](concepts-pipeline-execution-triggers.md). In addition to the pipeline itself, each trigger instance counts as a single Activity run.
+
+## Cost estimation
+
+To load data from SAP ECC via the SAP CDC connector in Mapping Data Flow, you need to install your Self-Hosted Integration Runtime on an on-premises machine or a VM that can connect directly to your SAP ECC system. As a result, you're charged both for the Self-Hosted Integration Runtime at $0.10/hour and for Mapping Data Flow at its vCore-hour price unit.
+
+Assuming each run takes 15 minutes to complete, the cost estimates are as follows.
+
+| **Operations** | **Types and Units** |
+| | |
+| Run Pipeline | 2 Activity runs per execution (1 for trigger run, 1 for activity run) |
+| Data Flow: execution time per run = 15 mins | 15 min * 8 cores of General Compute |
+| Self-Hosted Integration Runtime: execution time per run = 15 mins | 15 min * $0.10/hour (Data Movement Activity on Self-Hosted Integration Runtime Price) |
+
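For a rough sense of where the total below comes from, here's a minimal Python sketch of the two main components. The $0.10/hour Self-Hosted Integration Runtime rate comes from the table above; the General Compute vCore-hour rate used here ($0.274) is a hypothetical placeholder, so the result only approximates the calculator figure.

```python
# Sketch of the monthly components for this scenario.
runs = 30                                      # one run per day for 30 days
run_minutes = 15

shir_hours = runs * run_minutes / 60           # 7.5 hours of Self-Hosted IR data movement
shir_cost = shir_hours * 0.10                  # $0.75 at the rate in the table above

vcore_hours = runs * run_minutes / 60 * 8      # 60 vCore-hours of General Compute
data_flow_cost = vcore_hours * 0.274           # ≈ $16.44 at the assumed (hypothetical) rate

print(f"Self-Hosted IR: ${shir_cost:.2f}, data flow: ${data_flow_cost:.2f}, "
      f"total ≈ ${shir_cost + data_flow_cost:.2f}")   # close to the calculator's $17.21
```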
+## Pricing calculator example
+
+**Total scenario pricing for 30 days: $17.21**
++
+## Next steps
+
+- [Pricing example: Copy data from AWS S3 to Azure Blob storage hourly for 30 days](pricing-examples-s3-to-blob.md)
+- [Pricing example: Copy data and transform with Azure Databricks hourly for 30 days](pricing-examples-copy-transform-azure-databricks.md)
+- [Pricing example: Copy data and transform with dynamic parameters hourly for 30 days](pricing-examples-copy-transform-dynamic-parameters.md)
+- [Pricing example: Run SSIS packages on Azure-SSIS integration runtime](pricing-examples-ssis-on-azure-ssis-integration-runtime.md)
+- [Pricing example: Using mapping data flow debug for a normal workday](pricing-examples-mapping-data-flow-debug-workday.md)
+- [Pricing example: Transform data in blob store with mapping data flows](pricing-examples-transform-mapping-data-flows.md)
+- [Pricing example: Data integration in Azure Data Factory Managed VNET](pricing-examples-data-integration-managed-vnet.md)
data-factory Pricing Examples Mapping Data Flow Debug Workday https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-examples-mapping-data-flow-debug-workday.md
+
+ Title: "Pricing example: Using mapping data flow debug for a normal workday"
+description: This article shows how to estimate pricing for Azure Data Factory to use mapping data flow debug for a normal workday.
++++++ Last updated : 09/22/2022++
+# Pricing example: Using mapping data flow debug for a normal workday
++
+This example shows mapping data flow debug costs for a typical workday for a data engineer.
+
+The prices used in this example are hypothetical and aren't intended to imply exact pricing. Read/write and monitoring costs are not shown since they are typically negligible and will not impact overall costs significantly. Activity runs are also rounded to the nearest 1000 in pricing calculator estimates.
+
+Refer to the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/) for more specific scenarios and to estimate your future costs to use the service.
+
+## Azure Data Factory engineer
+
+A data factory engineer is responsible for designing, building, and testing mapping data flows every day. The engineer logs into the ADF UI in the morning and enables the Debug mode for Data Flows. The default TTL for Debug sessions is 60 minutes. The engineer works throughout the day for 8 hours, so the Debug session never expires. Therefore, the engineer's charges for the day will be:
+
+**8 (hours) x 8 (compute-optimized cores) x $0.193 = $12.35**
+
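The charge is a straightforward product of hours, cores, and the per-vCore-hour rate. A minimal Python sketch, using the hypothetical $0.193 compute-optimized rate from this example:

```python
# Debug cost = hours x cores x per-vCore-hour rate (hypothetical rate from this example).
debug_hours = 8
cores = 8
compute_optimized_rate = 0.193   # hypothetical $/vCore-hour

print(f"${debug_hours * cores * compute_optimized_rate:.2f}")   # $12.35
```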
+## Next steps
+
+- [Pricing example: Copy data from AWS S3 to Azure Blob storage hourly for 30 days](pricing-examples-s3-to-blob.md)
+- [Pricing example: Copy data and transform with Azure Databricks hourly for 30 days](pricing-examples-copy-transform-azure-databricks.md)
+- [Pricing example: Copy data and transform with dynamic parameters hourly for 30 days](pricing-examples-copy-transform-dynamic-parameters.md)
+- [Pricing example: Run SSIS packages on Azure-SSIS integration runtime](pricing-examples-ssis-on-azure-ssis-integration-runtime.md)
+- [Pricing example: Transform data in blob store with mapping data flows](pricing-examples-transform-mapping-data-flows.md)
+- [Pricing example: Data integration in Azure Data Factory Managed VNET](pricing-examples-data-integration-managed-vnet.md)
+- [Pricing example: Get delta data from SAP ECC via SAP CDC in mapping data flows](pricing-examples-get-delta-data-from-sap-ecc.md)
data-factory Pricing Examples S3 To Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-examples-s3-to-blob.md
+
+ Title: "Pricing example: Copy data from AWS S3 to Azure Blob storage hourly"
+description: This article shows how to estimate pricing for Azure Data Factory to copy data from AWS S3 to Azure Blob storage every hour for 30 days.
++++++ Last updated : 09/22/2022++
+# Pricing example: Copy data from AWS S3 to Azure Blob storage hourly
++
+In this scenario, you want to copy data from AWS S3 to Azure Blob storage on an hourly schedule for 8 hours per day, for 30 days.
+
+The prices used in this example are hypothetical and aren't intended to imply exact pricing. Read/write and monitoring costs aren't shown since they're typically negligible and won't impact overall costs significantly. Activity runs are also rounded to the nearest 1000 in pricing calculator estimates.
+
+Refer to the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/) for more specific scenarios and to estimate your future costs to use the service.
+
+## Configuration
+
+To accomplish the scenario, you need to create a pipeline with the following items:
+
+- A copy activity that moves 10 GB of data from AWS S3 to Azure Blob storage. It's estimated to run for 2-3 hours in total, and the DIU setting is left as Auto.
+- A schedule trigger to execute the pipeline every hour for 8 hours every day. When you want to run a pipeline, you can either [trigger it immediately or schedule it](concepts-pipeline-execution-triggers.md). In addition to the pipeline itself, each trigger instance counts as a single Activity run.
+
+ :::image type="content" source="media/pricing-concepts/scenario1.png" alt-text="Diagram shows a pipeline with a schedule trigger.":::
+
+## Cost estimation
+
+| **Operations** | **Types and Units** |
+| | |
+| Run Pipeline | 2 Activity runs per execution (1 for the trigger run, 1 for the activity run) |
+| Copy Data Assumption: execution hours **per run** | 0.5 hours \* 4 Azure Integration Runtime (default DIU setting = 4) For more information on data integration units and optimizing copy performance, see [this article](copy-activity-performance.md) |
+| Total execution: 8 runs per day for 30 days | 240 runs * 2 DIU-hours per run = 480 DIU-hours |
+
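A minimal Python sketch of the DIU-hour math in the table above; the $0.25 per-DIU-hour rate used for the data movement figure is a hypothetical placeholder, and orchestration charges are added on top of it in the calculator total below.

```python
# Sketch of the data integration unit (DIU) consumption in the table above.
runs_per_day = 8
days = 30
hours_per_run = 0.5
dius = 4                                   # default DIU setting

runs = runs_per_day * days                 # 240 runs
diu_hours = runs * hours_per_run * dius    # 480 DIU-hours

# Multiply by your region's per-DIU-hour rate (a hypothetical $0.25 here) to get the
# data movement portion of the bill; orchestration charges are added on top.
print(f"{diu_hours:.0f} DIU-hours, ${diu_hours * 0.25:.2f} at the assumed rate")
```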
+## Pricing calculator example
+
+**Total scenario pricing for 30 days: $122.00**
++
+## Next steps
+
+- [Pricing example: Copy data and transform with Azure Databricks hourly for 30 days](pricing-examples-copy-transform-azure-databricks.md)
+- [Pricing example: Copy data and transform with dynamic parameters hourly for 30 days](pricing-examples-copy-transform-dynamic-parameters.md)
+- [Pricing example: Run SSIS packages on Azure-SSIS integration runtime](pricing-examples-ssis-on-azure-ssis-integration-runtime.md)
+- [Pricing example: Using mapping data flow debug for a normal workday](pricing-examples-mapping-data-flow-debug-workday.md)
+- [Pricing example: Transform data in blob store with mapping data flows](pricing-examples-transform-mapping-data-flows.md)
+- [Pricing example: Data integration in Azure Data Factory Managed VNET](pricing-examples-data-integration-managed-vnet.md)
+- [Pricing example: Get delta data from SAP ECC via SAP CDC in mapping data flows](pricing-examples-get-delta-data-from-sap-ecc.md)
data-factory Pricing Examples Ssis On Azure Ssis Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-examples-ssis-on-azure-ssis-integration-runtime.md
+
+ Title: "Pricing example: Run SSIS packages on Azure-SSIS integration runtime"
+description: This article shows how to estimate pricing for Azure Data Factory to run SSIS packages with the Azure-SSIS integration runtime.
++++++ Last updated : 09/22/2022++
+# Pricing example: Run SSIS packages on Azure-SSIS integration runtime
++
+This article shows how to estimate the costs of using Azure Data Factory to run SSIS packages with the Azure-SSIS integration runtime.
+
+The prices used in this example are hypothetical and aren't intended to imply exact pricing. Read/write and monitoring costs are not shown since they are typically negligible and will not impact overall costs significantly. Activity runs are also rounded to the nearest 1000 in pricing calculator estimates.
+
+Refer to the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/) for more specific scenarios and to estimate your future costs to use the service.
+
+## Pricing model for Azure-SSIS integration runtime
+
+The Azure-SSIS integration runtime (IR) is a specialized cluster of Azure virtual machines (VMs) for SSIS package executions in Azure Data Factory (ADF). When you provision it, it's dedicated to you, so it's charged just like any other dedicated Azure VM for as long as you keep it running, regardless of whether you use it to execute SSIS packages or not. You'll see the hourly estimate of its running cost on its setup pane in the ADF portal, for example:
++
+### Azure Hybrid Benefit (AHB)
+
+Azure Hybrid Benefit (AHB) can reduce the cost of your Azure-SSIS integration runtime (IR). Using the AHB, you can provide your own SQL license, which reduces the cost of the Azure-SSIS IR from $1.938/hour to $1.158/hour. To learn more about AHB, visit the [Azure Hybrid Benefit (AHB)](https://azure.microsoft.com/pricing/hybrid-benefit/) article.
++
+## Cost estimation
+
+In the above example, if you keep your Azure-SSIS IR running for 2 hours, using AHB to bring your own SQL license, you'll be charged: **2 (hours) x US$1.158/hour = US$2.316**.
+
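A minimal Python sketch comparing the two hourly rates quoted above over a 2-hour window; the figures are the hypothetical estimates from this example, not a regional price list.

```python
# Compare the hourly estimates with and without Azure Hybrid Benefit (AHB),
# using the rates quoted above ($1.938/hour license included, $1.158/hour with AHB).
hours_running = 2

without_ahb = hours_running * 1.938   # $3.876
with_ahb = hours_running * 1.158      # $2.316

print(f"Without AHB: ${without_ahb:.3f}, with AHB: ${with_ahb:.3f}, "
      f"savings: ${without_ahb - with_ahb:.3f}")
```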
+To manage your Azure-SSIS IR running cost, you can scale down your VM size, scale in your cluster size, or bring your own SQL Server license via the Azure Hybrid Benefit (AHB) option, which offers significant savings; see [Azure-SSIS IR pricing](https://azure.microsoft.com/pricing/details/data-factory/ssis/). You can also start and stop your Azure-SSIS IR on demand, just in time to process your SSIS workloads; see [Reconfigure Azure-SSIS IR](manage-azure-ssis-integration-runtime.md#to-reconfigure-an-azure-ssis-ir) and [Schedule Azure-SSIS IR](how-to-schedule-azure-ssis-integration-runtime.md).
+
+## Next steps
+
+- [Pricing example: Copy data from AWS S3 to Azure Blob storage hourly for 30 days](pricing-examples-s3-to-blob.md)
+- [Pricing example: Copy data and transform with Azure Databricks hourly for 30 days](pricing-examples-copy-transform-azure-databricks.md)
+- [Pricing example: Copy data and transform with dynamic parameters hourly for 30 days](pricing-examples-copy-transform-dynamic-parameters.md)
+- [Pricing example: Using mapping data flow debug for a normal workday](pricing-examples-mapping-data-flow-debug-workday.md)
+- [Pricing example: Transform data in blob store with mapping data flows](pricing-examples-transform-mapping-data-flows.md)
+- [Pricing example: Data integration in Azure Data Factory Managed VNET](pricing-examples-data-integration-managed-vnet.md)
+- [Pricing example: Get delta data from SAP ECC via SAP CDC in mapping data flows](pricing-examples-get-delta-data-from-sap-ecc.md)
data-factory Pricing Examples Transform Mapping Data Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-examples-transform-mapping-data-flows.md
+
+ Title: "Pricing example: Transform data in blob store with mapping data flows"
+description: This article shows how to estimate pricing for Azure Data Factory to transform data in a blob store with mapping data flows.
++++++ Last updated : 09/22/2022++
+# Pricing example: Transform data in blob store with mapping data flows
++
+In this scenario, you want to transform data in Blob Store visually in ADF mapping data flows on an hourly schedule for 30 days.
+
+The prices used in this example are hypothetical and aren't intended to imply exact pricing. Read/write and monitoring costs are not shown since they are typically negligible and will not impact overall costs significantly. Activity runs are also rounded to the nearest 1000 in pricing calculator estimates.
+
+Refer to the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/) for more specific scenarios and to estimate your future costs to use the service.
+
+## Configuration
+
+To accomplish the scenario, you need to create a pipeline with the following items:
+
+- A Data Flow activity with the transformation logic.
+- An input dataset for the data on Azure Storage.
+- An output dataset for the data on Azure Storage.
+- A schedule trigger to execute the pipeline every hour. When you want to run a pipeline, you can either [trigger it immediately or schedule it](concepts-pipeline-execution-triggers.md). In addition to the pipeline itself, each trigger instance counts as a single Activity run.
+
+## Cost estimation
+
+| **Operations** | **Types and Units** |
+| | |
+| Run Pipeline | 2 Activity runs per execution (1 for trigger run, 1 for activity runs) |
+| Data Flow Assumptions: execution time per run = 10 min + 10 min TTL | 10 \* 16 cores of General Compute with TTL of 10 |
+
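A minimal Python sketch of the vCore-hour consumption implied by the table above; multiply the result by your region's General Purpose vCore-hour rate (not shown here) to approximate the data flow portion of the calculator total below.

```python
# Sketch of the vCore-hour consumption implied by the table above.
runs = 24 * 30                                   # hourly for 30 days
minutes_per_run = 10 + 10                        # 10 min execution + 10 min TTL
cores = 16                                       # General Compute

vcore_hours = runs * minutes_per_run * cores / 60   # 3,840 vCore-hours
print(f"{vcore_hours:.0f} vCore-hours")

# Multiply by your region's General Purpose vCore-hour rate to approximate the
# data flow portion of the bill; activity-run charges are added on top.
```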
+## Pricing calculator example
+
+**Total scenario pricing for 30 days: $1051.28**
+++
+## Next steps
+
+- [Pricing example: Copy data from AWS S3 to Azure Blob storage hourly for 30 days](pricing-examples-s3-to-blob.md)
+- [Pricing example: Copy data and transform with Azure Databricks hourly for 30 days](pricing-examples-copy-transform-azure-databricks.md)
+- [Pricing example: Copy data and transform with dynamic parameters hourly for 30 days](pricing-examples-copy-transform-dynamic-parameters.md)
+- [Pricing example: Run SSIS packages on Azure-SSIS integration runtime](pricing-examples-ssis-on-azure-ssis-integration-runtime.md)
+- [Pricing example: Using mapping data flow debug for a normal workday](pricing-examples-mapping-data-flow-debug-workday.md)
+- [Pricing example: Data integration in Azure Data Factory Managed VNET](pricing-examples-data-integration-managed-vnet.md)
+- [Pricing example: Get delta data from SAP ECC via SAP CDC in mapping data flows](pricing-examples-get-delta-data-from-sap-ecc.md)
data-factory Tumbling Window Trigger Dependency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tumbling-window-trigger-dependency.md
Previously updated : 09/22/2022 Last updated : 09/27/2022 # Create a tumbling window trigger dependency
You can see the status of the dependencies, and windows for each dependent trigg
A tumbling window trigger will wait on dependencies for _seven days_ before timing out. After seven days, the trigger run will fail.
+> [!NOTE]
+> A tumbling window trigger cannot be cancelled while it is in the **Waiting on dependency** state. The dependent activity must finish before the tumbling window trigger can be cancelled. This is by design to ensure dependent activities can complete once started, and helps reduce the likelihood of unexpected results.
+ For a more visual way to view the trigger dependency schedule, select the Gantt view. :::image type="content" source="media/tumbling-window-trigger-dependency/tumbling-window-dependency-09.png" alt-text="Monitor dependencies gantt chart":::
defender-for-iot Architecture Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/architecture-connections.md
Title: OT sensor cloud connection methods - Microsoft Defender for IoT description: Learn about the architecture models available for connecting your sensors to Microsoft Defender for IoT. Previously updated : 03/08/2022 Last updated : 09/11/2022 # OT sensor cloud connection methods This article describes the architectures and methods supported for connecting your Microsoft Defender for IoT OT sensors to the cloud.
-All supported cloud connection methods provide:
+The cloud connection methods described in this article are supported only for OT sensor version 22.x and later. All methods provide:
- **Simple deployment**, requiring no extra installations in your private Azure environment, such as for an IoT Hub
With direct connections
For more information, see [Connect directly](connect-sensors.md#connect-directly).
-## Multi-cloud connections
+## Multicloud connections
You can connect your sensors to the Defender for IoT portal in Azure from other public clouds for OT/IoT management process monitoring.
Depending on your environment configuration, you might connect using one of the
- A site-to-site VPN over the internet.
-For more information, see [Connect via multi-cloud vendors](connect-sensors.md#connect-via-multi-cloud-vendors).
+For more information, see [Connect via multicloud vendors](connect-sensors.md#connect-via-multicloud-vendors).
## Working with a mixture of sensor software versions
defender-for-iot Sample Connectivity Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/best-practices/sample-connectivity-models.md
This article provides sample network models for Microsoft Defender for IoT senso
The following diagram shows an example of a ring network topology, in which each switch or node connects to exactly two other switches, forming a single continuous pathway for the traffic.

## Sample: Linear bus and star topology

In a star network, every host is connected to a central hub. In its simplest form, one central hub acts as a conduit to transmit messages. In the following example, lower switches aren't monitored, and traffic that remains local to these switches won't be seen. Devices might be identified based on ARP messages, but connection information will be missing.

## Sample: Multi-layer, multi-tenant network
The following diagram is a general abstraction of a multilayer, multitenant netw
Typically, NTA sensors are deployed in layers 0 to 3 of the OSI model.

## Next steps
defender-for-iot Connect Sensors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/connect-sensors.md
Title: Connect OT sensors to Microsoft Defender for IoT in the cloud description: Learn how to connect your Microsoft Defender for IoT OT sensors to the cloud Previously updated : 06/02/2022 Last updated : 09/11/2022 # Connect your OT sensors to the cloud
-This article describes how to connect your sensors to the Defender for IoT portal in Azure.
+This article describes how to connect your OT network sensors to the Defender for IoT portal in Azure, for OT sensor software versions 22.x and later.
For more information about each connection method, see [Sensor connection methods](architecture-connections.md).
+## Prerequisites
+
+To use the connection methods described in this article, you must have an OT network sensor with software version 22.x or later.
+
+For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md).
## Choose a sensor connection method
Use this section to help determine which connection method is right for your org
|- You require private connectivity between your sensor and Azure, <br>- Your site is connected to Azure via ExpressRoute, or <br>- Your site is connected to Azure over a VPN | **[Connect via an Azure proxy](#connect-via-an-azure-proxy)** | |- Your sensor needs a proxy to reach from the OT network to the cloud, or <br>- You want multiple sensors to connect to Azure through a single point | **[Connect via proxy chaining](#connect-via-proxy-chaining)** | |- You want to connect your sensor to Azure directly | **[Connect directly](#connect-directly)** |
-|- You have sensors hosted in multiple public clouds | **[Connect via multi-cloud vendors](#connect-via-multi-cloud-vendors)** |
+|- You have sensors hosted in multiple public clouds | **[Connect via multicloud vendors](#connect-via-multicloud-vendors)** |
## Connect via an Azure proxy
Before you start, make sure that you have:
- A proxy server resource, with firewall permissions to access Microsoft cloud services. The procedure described in this article uses a Squid server hosted in Azure. -- Outbound HTTPS traffic on port 443 to the following hostnames:-
- - **IoT Hub**: `*.azure-devices.net`
- - **Blob storage**: `*.blob.core.windows.net`
- - **EventHub**: `*.servicebus.windows.net`
- - **Microsoft Download Center**: `download.microsoft.com`
+- Outbound HTTPS traffic on port 443 enabled to the required endpoints for Defender for IoT. Download the list of required endpoints from the **Sites and sensors** page: Select an OT sensor with a supported software version, or a site with one or more supported sensors. And then select **More actions** > **Download endpoint details**.
> [!IMPORTANT] > Microsoft Defender for IoT does not offer support for Squid or any other proxy services. It is the customer's responsibility to set up and maintain the proxy service.
This procedure describes how to install and configure a connection between your
sudo systemctl enable squid ```
-1. Connect your proxy to Defender for IoT. Enable outbound HTTP traffic on port 443 from the sensor to the following Azure hostnames:
+1. Connect your proxy to Defender for IoT:
+
+ 1. Download the list of required endpoints from the **Sites and sensors** page: Select an OT sensor with a supported software version, or a site with one or more supported sensors. And then select **More actions** > **Download endpoint details**.
+ 1. Enable outbound HTTPS traffic on port 443 from the sensor to each of the required endpoints for Defender for IoT.
- - **IoT Hub**: `*.azure-devices.net`
- - **Threat Intelligence**: `*.blob.core.windows.net`
- - **Eventhub**: `*.servicebus.windows.net`
- - **Microsoft download site**: `download.microsoft.com`
> [!IMPORTANT] > Some organizations must define firewall rules by IP addresses. If this is true for your organization, it's important to know that the Azure public IP ranges are updated weekly.
This procedure describes how to install and configure a connection between your
This section describes what you need to configure a direct sensor connection to Defender for IoT in Azure. For more information, see [Direct connections](architecture-connections.md#direct-connections).
-1. Ensure that your sensor can access the cloud using HTTP on port 443 to the following Microsoft domains:
+1. Download the list of required endpoints from the **Sites and sensors** page on the Azure portal. Select an OT sensor with a supported software version, or a site with one or more supported sensors. And then select **More actions** > **Download endpoint details**.
- - **IoT Hub**: `*.azure-devices.net`
- - **Threat Intelligence**: `*.blob.core.windows.net`
- - **Eventhub**: `*.servicebus.windows.net`
- - **Microsoft Download Center**: `download.microsoft.com`
+1. Ensure that your sensor can access the cloud using HTTPS on port 443 to each of the listed endpoints in the downloaded list.
1. Azure public IP addresses are updated weekly. If you must define firewall rules based on IP addresses, make sure to download the new JSON file each week and make the required changes on your site to correctly identify services running in Azure. You'll need the updated IP ranges for **AzureIoTHub**, **Storage**, and **EventHub**. See the [latest IP ranges](https://www.microsoft.com/en-us/download/details.aspx?id=56519).
-## Connect via multi-cloud vendors
+## Connect via multicloud vendors
-This section describes how to connect your sensor to Defender for IoT in Azure from sensors deployed in one or more public clouds. For more information, see [Multi-cloud connections](architecture-connections.md#multi-cloud-connections).
+This section describes how to connect your sensor to Defender for IoT in Azure from sensors deployed in one or more public clouds. For more information, see [Multicloud connections](architecture-connections.md#multicloud-connections).
### Prerequisites
Before you start:
- Make sure that you have a sensor deployed in a public cloud, such as AWS or Google Cloud, and configured to monitor SPAN traffic. -- Choose the multi-cloud connectivity method that's right for your organization:
+- Choose the multicloud connectivity method that's right for your organization:
Use the following flow chart to determine which connectivity method to use:
- :::image type="content" source="media/architecture-connections/multi-cloud-flow-chart.png" alt-text="Flow chart to determine which connectivity method to use.":::
+ :::image type="content" source="media/architecture-connections/multicloud-flow-chart.png" alt-text="Flow chart to determine which connectivity method to use.":::
- **Use public IP addresses over the internet** if you don't need to exchange data using private IP addresses
If you're an existing customer with a production deployment and sensors connecte
- Check the active resources in your account and make sure there are no other services connected to your IoT Hub.
- - If you're running a hybrid environment with multiple sensor versions, make sure any sensors with software version 22.1.x can connect to Azure. Use firewall rules that allow outbound HTTPS traffic on port 443 to the following hostnames:
+ - If you're running a hybrid environment with multiple sensor versions, make sure any sensors with software version 22.1.x can connect to Azure. Use firewall rules that allow outbound HTTPS traffic on port 443 to each of the required endpoints.
- - **IoT Hub**: `*.azure-devices.net`
- - **Threat Intelligence**: `*.blob.core.windows.net`
- - **EventHub**: `*.servicebus.windows.net`
- - **Microsoft Download Center**: `download.microsoft.com`
+ Find the list of required endpoints for Defender for IoT from the **Sites and sensors** page on the Azure portal. Select an OT sensor with a supported software version, or a site with one or more supported sensors. And then select **More actions** > **Download endpoint details**.
While you'll need to migrate your connections before the [legacy version reaches end of support](release-notes.md#versioning-and-support-for-on-premises-software-versions), you can currently deploy a hybrid network of sensors, including legacy software versions with their IoT Hub connections, and sensors with the connection methods described in this article.
defender-for-iot How To Forward Alert Information To Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-forward-alert-information-to-partners.md
Send mail that includes the alert information. You can enter one email address p
1. Select **Save**.
+>[!NOTE]
+> Make sure you also add an SMTP server under **System Settings** > **Integrations** > **SMTP Server** for the email forwarding rule to function.
+ ### Syslog server actions The following formats are supported:
defender-for-iot How To Manage Device Inventory For Organizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-device-inventory-for-organizations.md
This procedure describes how to view detected devices in the **Device inventory*
|**Modify columns shown** | Select **Edit columns** :::image type="icon" source="media/how-to-manage-device-inventory-on-the-cloud/edit-columns-icon.png" border="false":::. In the **Edit columns** pane:<br><br> - Select the **+ Add Column** button to add new columns to the grid.<br> - Drag and drop fields to change the columns order.<br>- To remove a column, select the **Delete** :::image type="icon" source="media/how-to-manage-device-inventory-on-the-cloud/trashcan-icon.png" border="false"::: icon to the right.<br>- To reset the columns to their default settings, select **Reset** :::image type="icon" source="media/how-to-manage-device-inventory-on-the-cloud/reset-icon.png" border="false":::. <br><br>Select **Save** to save any changes made. | | **Group devices** | From the **Group by** above the grid, select either **Type** or **Class** to group the devices shown. Inside each group, devices retain the same column sorting. To remove the grouping, select **No grouping**. |
+ For more information, see [Device inventory column reference](#device-inventory-column-reference).
+ 1. Select a device row to view more details about that device. Initial details are shown in a pane on the right, where you can also select **View full details** to drill down more. For example: :::image type="content" source="media/how-to-manage-device-inventory-on-the-cloud/device-information-window.png" alt-text="Screenshot of a device details pane and the View full details button in the Azure portal." lightbox="media/how-to-manage-device-inventory-on-the-cloud/device-information-window.png":::
-For more information, see [Device inventory column reference](#device-inventory-column-reference).
### Identify devices that aren't connecting successfully
defender-for-iot How To Manage Individual Sensors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-individual-sensors.md
You'll receive an error message if the activation file couldn't be uploaded. The
- **For locally connected sensors**: The activation file isn't valid. If the file isn't valid, go to [Defender for IoT in the Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started). On the **Sensor Management** page, select the sensor with the invalid file, and download a new activation file. -- **For cloud-connected sensors**: The sensor can't connect to the internet. Check the sensor's network configuration. If your sensor needs to connect through a web proxy to access the internet, verify that your proxy server is configured correctly on the **Sensor Network Configuration** screen. Verify that \*.azure-devices.net:443 is allowed in the firewall and/or proxy. If wildcards are not supported or you want more control, the FQDN for your specific endpoint (either a sensor, or for legacy connections, an IoT hub) should be opened in your firewall and/or proxy. For more information, see [Reference - IoT Hub endpoints](../../iot-hub/iot-hub-devguide-endpoints.md).
+- **For cloud-connected sensors**: The sensor can't connect to the internet. Check the sensor's network configuration. If your sensor needs to connect through a web proxy to access the internet, verify that your proxy server is configured correctly on the **Sensor Network Configuration** screen. Verify that the required endpoints are allowed in the firewall and/or proxy.
+
+ For OT sensor versions 22.x, download the list of required endpoints from the **Sites and sensors** page in the Azure portal. Select an OT sensor with a supported software version, or a site with one or more supported sensors, and then select **More actions** > **Download endpoint details**. For sensors with earlier versions, see [Sensor access to Azure portal](how-to-set-up-your-network.md#sensor-access-to-azure-portal). A quick connectivity check sketch follows this list.
- **For cloud-connected sensors**: The activation file is valid but Defender for IoT rejected it. If you can't resolve this problem, you can download another activation from the **Sites and Sensors** page in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started). If this doesn't work, contact Microsoft Support.
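To sanity-check outbound connectivity before downloading a new activation file, you can probe one of the required endpoints over HTTPS from a workstation on the sensor's management network. This is a minimal PowerShell sketch, not a product feature; the proxy URL and endpoint name are placeholders to replace with your own values.

```powershell
# Minimal connectivity check - proxy URL and endpoint name are placeholders.
$proxy    = 'http://proxy.contoso.local:8080'   # your web proxy; remove -Proxy below if you connect directly
$endpoint = 'https://download.microsoft.com'    # one of the required endpoints

try {
    # A HEAD request that returns means outbound HTTPS over port 443 works.
    Invoke-WebRequest -Uri $endpoint -Proxy $proxy -Method Head -UseBasicParsing | Out-Null
    "Reached $endpoint through $proxy."
}
catch {
    "Could not reach $endpoint : $($_.Exception.Message)"
}
```

If the request fails, compare the error with your firewall and proxy logs before retrying the activation.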
defender-for-iot How To Manage Sensors On The Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-sensors-on-the-cloud.md
Title: Manage sensors with Defender for IoT in the Azure portal description: Learn how to onboard, view, and manage sensors with Defender for IoT in the Azure portal. Previously updated : 08/08/2022 Last updated : 09/08/2022
This article describes how to view and manage sensors with [Defender for IoT in
This procedure describes how to use the Azure portal to contact vendors for pre-configured appliances, or how to download software for you to install on your own appliances. 1. In the Azure portal, go to **Defender for IoT** > **Getting started** > **Sensor**.
-
+ 1. Do one of the following: - To buy a pre-configured appliance, select **Contact** under **Buy preconfigured appliance**. This opens an email to [hardware.sales@arrow.com](mailto:hardware.sales@arrow.com) with a template request for Defender for IoT appliances. For more information, see [Pre-configured physical appliances for OT monitoring](ot-pre-configured-appliances.md).
Use the options on the **Sites and sensor** page and a sensor details page to do
|:::image type="icon" source="medi#install-the-sensor-software). | |:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-edit.png" border="false"::: **Edit automatic threat intelligence updates** | Individual, OT sensors only. <br><br>Available from the **...** options menu or a sensor details page. <br><br>Select **Edit** and then toggle the **Automatic Threat Intelligence Updates (Preview)** option on or off as needed. Select **Submit** to save your changes. | |:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-delete.png" border="false"::: **Delete a sensor** | For individual sensors only, from the **...** options menu or a sensor details page. |
+| :::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-diagnostics.png" border="false"::: **Send diagnostic files to support** | Individual, locally managed OT sensors only. <br><br>Available from the **...** options menu. <br><br>For more information, see [Upload a diagnostics log for support (Public preview)](#upload-a-diagnostics-log-for-support-public-preview).|
| **Download SNMP MIB file** | Available from the **Sites and sensors** toolbar **More actions** menu. <br><br>For more information, see [Set up SNMP MIB monitoring](how-to-set-up-snmp-mib-monitoring.md).| | **Recover an on-premises management console password** | Available from the **Sites and sensors** toolbar **More actions** menu. <br><br>For more information, see [Manage the on-premises management console](how-to-manage-the-on-premises-management-console.md). |
-| :::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-diagnostics.png" border="false"::: **Send diagnostic files to support** | Individual, locally managed OT sensors only. <br><br>Available from the **...** options menu. <br><br>For more information, see [Upload a diagnostics log for support (Public preview)](#upload-a-diagnostics-log-for-support-public-preview).|
+| **Download endpoint details** (Public preview) | Available from the **Sites and sensors** toolbar **More actions** menu, for OT sensor versions 22.x only. <br><br>Download the list of endpoints that must be enabled as secure endpoints from OT network sensors. Make sure that HTTPS traffic is enabled over port 443 to the listed endpoints for your sensor to connect to Azure. Outbound allow rules are defined once for all OT sensors onboarded to the same subscription.<br><br>To enable this option, select a sensor with a supported software version, or a site with one or more sensors with supported versions. |
## Reactivate an OT sensor
defender-for-iot How To Set Up Your Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-set-up-your-network.md
Use the following tables to ensure that required firewalls are open on your work
| Protocol | Transport | In/Out | Port | Purpose | Source | Destination | |--|--|--|--|--|--|--|
-| HTTPS | TCP | Out | 443 | Access to Azure | Sensor | `*.azure-devices.net`<br> `*.blob.core.windows.net`<br> `*.servicebus.windows.net`|
-| HTTPS | TCP | Out | 443 | Remote sensor upgrades from the Azure portal | Sensor| `download.microsoft.com`|
+| HTTPS | TCP | Out | 443 | Access to Azure | Sensor |**For OT sensor versions 22.x**: Download the list from the **Sites and sensors** page in the Azure portal. Select an OT sensor with software versions 22.x or higher, or a site with one or more supported sensor versions. Then, select **More options > Download endpoint details**. For more information, see [Sensor management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal).<br><br>**For OT sensor versions 10.x**: `*.azure-devices.net`<br> `*.blob.core.windows.net`<br> `*.servicebus.windows.net`|
+| HTTPS | TCP | Out | 443 | Remote sensor updates from the Azure portal | Sensor| `download.microsoft.com`|
+ ### Sensor access to the on-premises management console
defender-for-iot Integrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrate-overview.md
Integrate Microsoft Defender for Iot with partner services to view partner data
|Name |Description |Support scope |Supported by |Learn more | ||||||
-|**Defender for IoT data connector** | Displays Defender for IoT data in Microsoft Sentinel, supporting end-to-end SOC investigations for Defender for IoT alerts. | - OT and Enterprise IoT networks <br>- Cloud-connected sensors | Microsoft | [Integrate Microsoft Sentinel and Microsoft Defender for IoT](/azure/sentinel/iot-solution?tabs=use-out-of-the-box-analytics-rules-recommended) |
+|**Defender for IoT data connector in Sentinel** | Displays Defender for IoT data in Microsoft Sentinel, supporting end-to-end SOC investigations for Defender for IoT alerts. | - OT and Enterprise IoT networks <br>- Cloud-connected sensors | Microsoft | [Integrate Microsoft Sentinel and Microsoft Defender for IoT](/azure/sentinel/iot-solution?tabs=use-out-of-the-box-analytics-rules-recommended) |
+|**Sentinel** | Send Defender for IoT alerts to Sentinel. | - OT networks <br>- Locally managed sensors and on-premises management consoles | Microsoft | |
## Palo Alto
defender-for-iot Integrate With Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrate-with-active-directory.md
You can associate Active Directory groups defined here with specific permission
| Domain controller port | Define the port on which your LDAP is configured. | | Primary domain | Set the domain name (for example, `subdomain.domain.com`) and the connection type according to your LDAP configuration. | | Active Directory groups | Enter the group names that are defined in your Active Directory configuration on the LDAP server. You can enter a group name that you'll associate with Admin, Security Analyst and Read-only permission levels. Use these groups when creating new sensor users.|
- | Trusted domains | To add a trusted domain, add the domain name and the connection type of a trusted domain. <br />You can configure trusted domains only for users who were defined under users. |
+ | Trusted endpoints | To add a trusted domain, add the domain name and the connection type of a trusted domain. <br />You can configure trusted endpoints only for users who were defined under users. |
### Active Directory groups for the on-premises management console
defender-for-iot Ot Pre Configured Appliances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-pre-configured-appliances.md
You can [order](mailto:hardware.sales@arrow.com) any of the following preconfigu
||||| |**C5600** | [HPE ProLiant DL360](appliance-catalog/hpe-proliant-dl360.md) | **Max bandwidth**: 3Gbp/s <br>**Max devices**: 12,000 <br> 32 Cores/32G RAM/5.6TB | **Mounting**: 1U <br>**Ports**: 15x RJ45 or 8x SFP (OPT) | |**E1800** | [HPE ProLiant DL20/DL20 Plus](appliance-catalog/hpe-proliant-dl20-plus-enterprise.md) <br> (4SFF) | **Max bandwidth**: 1 Gbp/s<br>**Max devices**: 10,000 <br> 8 Cores/32G RAM/1.8TB | **Mounting**: 1U <br>**Ports**: 8x RJ45 or 6x SFP (OPT) |
-|**E500** | [Dell Edge 5200](appliance-catalog/dell-edge-5200.md) <br> (Rugged MIL-STD-810G) | **Max bandwidth**: 1 Gbp/s<br>**Max devices**: 10,000 <br> 8 Cores/32G RAM/512GB | **Mounting**: Wall Mount<br>**Ports**: 3x RJ45 |
+|**E1800** | [Dell Edge 5200](appliance-catalog/dell-edge-5200.md) <br> (Rugged MIL-STD-810G) | **Max bandwidth**: 1 Gbp/s<br>**Max devices**: 10,000 <br> 8 Cores/32G RAM/512GB | **Mounting**: Wall Mount<br>**Ports**: 3x RJ45 |
|**L500** | [HPE ProLiant DL20/DL20 Plus](appliance-catalog/hpe-proliant-dl20-plus-smb.md) <br> (NHP 2LFF) | **Max bandwidth**: 200Mbp/s<br>**Max devices**: 1,000 <br> 4 Cores/8G RAM/500GB | **Mounting**: 1U<br>**Ports**: 4x RJ45 | |**L100** | [YS-Techsystems YS-FIT2](appliance-catalog/ys-techsystems-ys-fit2.md) <br>(Rugged MIL-STD-810G) | **Max bandwidth**: 10Mbp/s <br>**Max devices**: 100 <br> 4 Cores/8G RAM/128GB | **Mounting**: DIN/VESA<br>**Ports**: 2x RJ45 |
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
For more information, see the [Microsoft Security Development Lifecycle practice
|Service area |Updates | |||
-|**OT networks** |**Sensor software version 22.2.6**: <br> - Bug fixes and stability improvements <br>- Enhancements to the device type classification algorithm<br><br>- **Microsoft Sentinel integration**: <br>- [Investigation enhancements with IOT device entities](#investigation-enhancements-with-iot-device-entities-in-microsoft-sentinel)<br>- [Updates to the Microsoft Defender for IoT solution](#updates-to-the-microsoft-defender-for-iot-solution-in-microsoft-sentinels-content-hub) |
+|**OT networks** |**All supported OT sensor software versions**: <br>- [Device vulnerabilities from the Azure portal](#device-vulnerabilities-from-the-azure-portal-public-preview)<br>- [Security recommendations for OT networks](#security-recommendations-for-ot-networks-public-preview)<br><br> **All OT sensor software versions 22.x**: [Updates for Azure cloud connection firewall rules](#updates-for-azure-cloud-connection-firewall-rules-public-preview) <br><br>**Sensor software version 22.2.6**: <br> - Bug fixes and stability improvements <br>- Enhancements to the device type classification algorithm<br><br>**Microsoft Sentinel integration**: <br>- [Investigation enhancements with IoT device entities](#investigation-enhancements-with-iot-device-entities-in-microsoft-sentinel)<br>- [Updates to the Microsoft Defender for IoT solution](#updates-to-the-microsoft-defender-for-iot-solution-in-microsoft-sentinels-content-hub)|
-### Investigation enhancements with IOT device entities in Microsoft Sentinel
+### Security recommendations for OT networks (Public preview)
+
+Defender for IoT now provides security recommendations to help customers manage their OT/IoT network security posture. Defender for IoT recommendations help users form actionable, prioritized mitigation plans that address the unique challenges of OT/IoT networks. Use recommendations to lower your network's risk and attack surface.
+
+You can see the following security recommendations from the Azure portal for detected devices across your networks:
+
+- **Review PLC operating mode**. Devices with this recommendation are found with PLCs set to unsecure operating mode states. If access to the PLC is no longer required, we recommend setting the PLC operating mode to the **Secure Run** state to reduce the threat of malicious PLC programming.
+
+- **Review unauthorized devices**. Devices with this recommendation must be identified and authorized as part of the network baseline. We recommend taking action to identify any indicated devices. Disconnect any devices from your network that remain unknown even after investigation to reduce the threat of rogue or potentially malicious devices.
+
+Access security recommendations from one of the following locations:
+
+- The **Recommendations** page, which displays all current recommendations across all detected OT devices.
+
+- The **Recommendations** tab on a device details page, which displays all current recommendations for the selected device.
+
+From either location, select a recommendation to drill down further and view lists of all detected OT devices that are currently in a *healthy* or *unhealthy* state, according to the selected recommendation. From the **Unhealthy devices** or **Healthy devices** tab, select a device link to jump to the selected device details page. For example:
++
+For more information, see [View the device inventory](how-to-manage-device-inventory-for-organizations.md#view-the-device-inventory).
+
+### Device vulnerabilities from the Azure portal (Public preview)
+
+Defender for IoT now provides vulnerability data in the Azure portal for detected OT network devices. Vulnerability data is based on the repository of standards-based vulnerability data documented at the [US government National Vulnerability Database (NVD)](https://www.nist.gov/programs-projects/national-vulnerability-database-nvd).
+
+Access vulnerability data in the Azure portal from the following locations:
+
+- On a device details page, select the **Vulnerabilities** tab to view current vulnerabilities on the selected device. For example, from the **Device inventory** page, select a specific device and then select **Vulnerabilities**.
+
+ For more information, see [View the device inventory](how-to-manage-device-inventory-for-organizations.md#view-the-device-inventory).
+
+- A new **Vulnerabilities** workbook displays vulnerability data across all monitored OT devices. Use the **Vulnerabilities** workbook to view data like CVE by severity or vendor, and full lists of detected vulnerabilities and vulnerable devices and components.
+
+ Select an item in the **Device vulnerabilities**, **Vulnerable devices**, or **Vulnerable components** tables to view related information in the tables on the right.
+
+ For example:
+
+ :::image type="content" source="media/release-notes/vulnerabilities-workbook.png" alt-text="Screenshot of a Vulnerabilities workbook in Defender for IoT.":::
+
+ For more information, see [Use Azure Monitor workbooks in Microsoft Defender for IoT](workbooks.md).
+
+### Updates for Azure cloud connection firewall rules (Public preview)
+
+OT network sensors connect to Azure to provide alert and device data and sensor health messages, access threat intelligence packages, and more. Connected Azure services include IoT Hub, Blob Storage, Event Hubs, and the Microsoft Download Center.
+
+For OT sensors with software versions 22.x and higher, Defender for IoT now supports increased security when adding outbound allow rules for connections to Azure. Now you can define your outbound allow rules to connect to Azure without using wildcards.
+
+When defining outbound allow rules to connect to Azure, you'll need to enable HTTPS traffic to each of the required endpoints on port 443. Outbound allow rules are defined once for all OT sensors onboarded to the same subscription.
+
+For supported sensor versions, download the full list of required secure endpoints from the following locations in the Azure portal:
+
+- **A successful sensor registration page**: After onboarding a new OT sensor, version 22.x, the successful registration page now provides instructions for next steps, including a link to the endpoints you'll need to add as secure outbound allow rules on your network. Select the **Download endpoint details** link to download the JSON file.
+
+ For example:
+
+ :::image type="content" source="media/release-notes/download-endpoints.png" alt-text="Screenshot of a successful OT sensor registration page with the download endpoints link.":::
+
+- **The Sites and sensors page**: Select an OT sensor with software versions 22.x or higher, or a site with one or more supported sensor versions. Then, select **More actions** > **Download endpoint details** to download the JSON file. For example:
+
+ :::image type="content" source="media/release-notes/download-endpoints-sites-sensors.png" alt-text="Screenshot of the Sites and sensors page with the download endpoint details link.":::
+
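If you want to verify the downloaded list programmatically, the following PowerShell sketch is one way to do it. The file name and the property name used here are assumptions rather than a documented schema; open the JSON file you downloaded and adjust the property access to match what you see.

```powershell
# Hedged sketch: read the endpoint list downloaded from the Azure portal and
# test outbound HTTPS (port 443) reachability for each entry.
# 'sensor-endpoints.json' and the 'Endpoint' property name are assumptions.
$entries = Get-Content -Path .\sensor-endpoints.json -Raw | ConvertFrom-Json

foreach ($entry in $entries) {
    $target = $entry.Endpoint                                    # assumed property name
    $test   = Test-NetConnection -ComputerName $target -Port 443
    if ($test.TcpTestSucceeded) { "$target is reachable over 443" }
    else                        { "$target appears to be blocked" }
}
```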
+For more information, see:
+
+- [Tutorial: Get started with Microsoft Defender for IoT for OT security](tutorial-onboarding.md)
+- [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md)
+- [Networking requirements](how-to-set-up-your-network.md#sensor-access-to-azure-portal)
+
+### Investigation enhancements with IoT device entities in Microsoft Sentinel
Defender for IoT's integration with Microsoft Sentinel now supports an IoT device entity page. When investigating incidents and monitoring IoT security in Microsoft Sentinel, you can now identify your most sensitive devices and jump directly to more details on each device entity page.
defender-for-iot Tutorial Getting Started Eiot Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-getting-started-eiot-sensor.md
This procedure describes how to prepare your physical appliance or VM to install
| DNS | TCP/UDP | In/Out | 53 | Address resolution |
-1. Make sure that your physical appliance or VM can access the cloud using HTTP on port 443 to the following Microsoft domains:
+1. Make sure that your physical appliance or VM can access the cloud using HTTPS on port 443 to the following Microsoft endpoints:
- **EventHub**: `*.servicebus.windows.net` - **Storage**: `*.blob.core.windows.net`
This procedure describes how to prepare your physical appliance or VM to install
- **IoT Hub**: `*.azure-devices.net` > [!TIP]
- > You can also download and add the [Azure public IP ranges](https://www.microsoft.com/download/details.aspx?id=56519) so your firewall will allow the Azure domains that are specified above, along with their region.
+ > You can also download and add the [Azure public IP ranges](https://www.microsoft.com/download/details.aspx?id=56519) so your firewall will allow the Azure endpoints that are specified above, along with their region.
> > The Azure public IP ranges are updated weekly. New ranges appearing in the file will not be used in Azure for at least one week. To use this option, download the new json file every week and perform the necessary changes at your site to correctly identify services running in Azure.
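If you take the IP-range route, the weekly JSON file can be filtered with a few lines of PowerShell. This is a hedged sketch: the local file name and the service tag names (`EventHub`, `Storage`, `AzureIoTHub`) are assumptions to verify against the file you actually download.

```powershell
# Hedged sketch: list address prefixes for the service tags behind the endpoints
# above. File name and tag names are assumptions - check your downloaded file.
$serviceTags = (Get-Content -Path .\ServiceTags_Public.json -Raw | ConvertFrom-Json).values

$wanted = 'EventHub', 'Storage', 'AzureIoTHub'
foreach ($tag in $serviceTags | Where-Object { $wanted -contains $_.name }) {
    "{0}: {1} address prefixes" -f $tag.name, $tag.properties.addressPrefixes.Count
    $tag.properties.addressPrefixes | Select-Object -First 5    # preview a few ranges
}
```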
defender-for-iot Tutorial Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-onboarding.md
Last updated 07/11/2022
# Tutorial: Get started with Microsoft Defender for IoT for OT security
-This tutorial describes how to set up your network for OT system security monitoring, using a virtual, cloud-connected sensor, on a virtual machine (VM), using a trial subscription of Microsoft Defender for IoT.
+This tutorial describes how to set up your network for OT system security monitoring, using a virtual, cloud-connected sensor, on a virtual machine (VM), using a trial subscription of Microsoft Defender for IoT.
> [!NOTE] > If you're looking to set up security monitoring for enterprise IoT systems, see [Tutorial: Get started with Enterprise IoT](tutorial-getting-started-eiot-sensor.md) instead.
This procedure describes how to configure a SPAN port using a workaround with VM
1. Connect to the sensor, and verify that mirroring works.
-## Verify cloud connections
-
-This tutorial describes how to create a cloud-connected sensor, connecting directly to the Defender for IoT on the cloud.
-
-Before continuing, make sure that your sensor can access the cloud using HTTP on port 443 to the following Microsoft domains:
-
-- **IoT Hub**: `*.azure-devices.net`
-- **Blob Storage**: `*.blob.core.windows.net`
-- **Eventhub**: `*.servicebus.windows.net`
-- **Microsoft Download Center**: `download.microsoft.com`
-
-> [!TIP]
-> Defender for IoT supports other cloud-connection methods, including proxies or multi-cloud vendors. For more information, see [OT sensor cloud connection methods](architecture-connections.md), [Connect your OT sensors to the cloud](connect-sensors.md), [Cloud-connected vs local sensors](architecture.md#cloud-connected-vs-local-sensors).
->
- ## Onboard and activate the virtual sensor Before you can start using your Defender for IoT sensor, you'll need to onboard your new virtual sensor to your Azure subscription, and download the virtual sensor's activation file to activate the sensor.
Before you can start using your Defender for IoT sensor, you'll need to onboard
[!INCLUDE [root-of-trust](includes/root-of-trust.md)] - 1. Save the downloaded activation file in a location that will be accessible to the user signing into the console for the first time.
+ You can also download the file manually by selecting the relevant link in the **Activate your sensor** box. You'll use this file to activate your sensor, as described [below](#activate-your-sensor).
+
+1. Make sure that your new sensor will be able to successfully connect to Azure. In the **Add outbound allow rules** box, select the **Download endpoint details** link to download a JSON list of the endpoints you must configure as secure endpoints from your sensor. For example:
+
+ :::image type="content" source="media/release-notes/download-endpoints.png" alt-text="Screenshot of the Add outbound allow rules box.":::
+
+ To ensure that your sensor can connect to Azure, configure the listed endpoints as allowed outbound HTTPS traffic over port 443. You'll need to configure these outbound allow rules once for all OT sensors onboarded to the same subscription.
+
+ > [!TIP]
+ > You can also access the list of required endpoints from the **Sites and sensors** page. For more information, see [Sensor management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal).
+ 1. At the bottom left of the page, select **Finish**. You can now see your new sensor listed on the Defender for IoT **Sites and sensors** page. For more information, see [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md).
defender-for-iot Update Ot Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/update-ot-software.md
When downloading your update files from the Azure portal, youΓÇÖll see the optio
Make sure to select the file that matches your upgrade scenario.
-Updates from legacy versions may require a series of software updates. For example, if you still have a sensor version 3.1.1 installed, you'll need to first upgrade to version 10.5.5, and then to a 22.x version.
+Updates from legacy versions may require a series of software updates: If you still have a sensor version 3.1.1 installed, you'll need to first upgrade to version 10.5.5, and then to a 22.x version. For example:
:::image type="content" source="media/update-ot-software/legacy.png" alt-text="Screenshot of the multiple download options displayed.":::
Updates from legacy versions may require a series of software updates. For examp
For more information, see [OT sensor cloud connection methods](architecture-connections.md) and [Connect your OT sensors to the cloud](connect-sensors.md). -- Make sure that your firewall rules are configured as needed for the new version you're updating to. For example, the new version may require a new or modified firewall rule to support sensor access to the Azure portal. From the **Sites and sensors** page, select **More actions > Download sensor endpoint details** for the full list of domains required to access the Azure portal.
+- Make sure that your firewall rules are configured as needed for the new version you're updating to. For example, the new version may require a new or modified firewall rule to support sensor access to the Azure portal. From the **Sites and sensors** page, select **More actions > Download sensor endpoint details** for the full list of endpoints required to access the Azure portal.
For more information, see [Networking requirements](how-to-set-up-your-network.md#networking-requirements) and [Sensor management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal).
This procedure describes how to manually download the new sensor software versio
1. On your sensor console, select **System Settings** > **Sensor management** > **Software Update**.
-1. On the **Software Update** pane on the right, select **Upload file**, and then navigate to and select your downloaded `legacy-sensor-secured-patcher-<Version number>.tar` file.
+1. On the **Software Update** pane on the right, select **Upload file**, and then navigate to and select your downloaded `legacy-sensor-secured-patcher-<Version number>.tar` file. For example:
:::image type="content" source="media/how-to-manage-individual-sensors/upgrade-pane-v2.png" alt-text="Screenshot of the Software Update pane on the sensor." lightbox="media/how-to-manage-individual-sensors/upgrade-pane-v2.png"::: The update process starts, and may take about 30 minutes. During your upgrade, the system is rebooted twice.
- Sign in when prompted, and then return to the **System Settings** > **Sensor management** > **Software Update** pane to confirm that the new version is listed.
+ Sign in when prompted, and then return to the **System Settings** > **Sensor management** > **Software Update** pane to confirm that the new version is listed. For example:
:::image type="content" source="media/how-to-manage-individual-sensors/defender-for-iot-version.png" alt-text="Screenshot of the upgrade version that appears after you sign in." lightbox="media/how-to-manage-individual-sensors/defender-for-iot-version.png":::
The sensor update process won't succeed if you don't update the on-premises mana
**To update several sensors**:
-1. On the Azure portal, go to **Defender for IoT** > **Updates**. Under **Sensors**, select **Download** and save the file.
+1. On the Azure portal, go to **Defender for IoT** > **Updates**. Under **Sensors**, select **Download** and save the file. For example:
:::image type="content" source="media/how-to-manage-individual-sensors/updates-page.png" alt-text="Screenshot of the Updates page of Defender for IoT." lightbox="media/how-to-manage-individual-sensors/updates-page.png":::
The sensor update process won't succeed if you don't update the on-premises mana
Also make sure that sensors you *don't* want to update are *not* selected.
- Save your changes when you're finished selecting sensors to update.
-
+ Save your changes when you're finished selecting sensors to update. For example:
:::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/automatic-updates.png" alt-text="Screenshot of on-premises management console with Automatic Version Updates selected." lightbox="media/how-to-manage-sensors-from-the-on-premises-management-console/automatic-updates.png":::
This procedure is relevant only if you're updating sensors from software version
1. Select the site where you want to update your sensor, and then browse to the sensor you want to update.
-1. Expand the row for your sensor, select the options **...** menu on the right of the row, and then select **Prepare to update to 22.x**.
+1. Expand the row for your sensor, select the options **...** menu on the right of the row, and then select **Prepare to update to 22.x**. For example:
:::image type="content" source="media/how-to-manage-sensors-on-the-cloud/prepare-to-update.png" alt-text="Screenshot of the Prepare to update option." lightbox="media/how-to-manage-sensors-on-the-cloud/prepare-to-update.png":::
For more information, see:
- [Manage sensors from the management console](how-to-manage-sensors-from-the-on-premises-management-console.md) - [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md) - [Manage the on-premises management console](how-to-manage-the-on-premises-management-console.md)-- [Troubleshoot the sensor and on-premises management console](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md)
+- [Troubleshoot the sensor and on-premises management console](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md)
defender-for-iot Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/workbooks.md
Title: Use Azure Monitor workbooks in Microsoft Defender for IoT description: Learn how to view and create Azure Monitor workbooks for Defender for IoT data. Previously updated : 06/02/2022 Last updated : 09/04/2022 # Use Azure Monitor workbooks in Microsoft Defender for IoT
To view out-of-the-box workbooks created by Microsoft, or other workbooks alread
1. In the Azure portal, go to **Defender for IoT** and select **Workbooks** on the left.
- :::image type="content" source="media/release-notes/workbooks.png" alt-text="Screenshot of the new Workbooks page." lightbox="media/release-notes/workbooks.png":::
+ :::image type="content" source="media/workbooks/workbooks.png" alt-text="Screenshot of the Workbooks page." lightbox="media/workbooks/workbooks.png":::
1. Modify your filtering options if needed, and select a workbook to open it.
Defender for IoT provides the following workbooks out-of-the-box:
- **Sensor health**. Displays data about your sensor health, such as the sensor console software versions installed on your sensors. - **Alerts**. Displays data about alerts occurring on your sensors, including alerts by sensor, alert types, recent alerts generated, and more. - **Devices**. Displays data about your device inventory, including devices by vendor, subtype, and new devices identified.-
+- **Vulnerabilities**. Displays data about the vulnerabilities detected in OT devices across your network. Select an item in the **Device vulnerabilities**, **Vulnerable devices**, or **Vulnerable components** tables to view related information in the tables on the right.
## Create custom workbooks
digital-twins Concepts Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-models.md
While designing models to reflect the entities in your environment, it can be us
[!INCLUDE [Azure Digital Twins: validate models info](../../includes/digital-twins-validate.md)]
-### Use modeling tools
+### Upload and delete models in bulk
-There are several sample projects available that you can use to simplify dealing with models and ontologies. They're located in this repository: [Tools for Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-tools).
+Here are two sample projects that can simplify dealing with multiple models at once:
+* [Model uploader](https://github.com/Azure/opendigitaltwins-tools/tree/main/ADTTools#uploadmodels): Once you're finished creating, extending, or selecting your models, you need to upload them to your Azure Digital Twins instance to make them available for use in your solution. If you have many models to upload, or if they have many interdependencies that would make ordering individual uploads complicated, you can use this model uploader sample to upload many models at once.
+* [Model deleter](https://github.com/Azure/opendigitaltwins-tools/tree/main/ADTTools#deletemodels): This sample can be used to delete all models in an Azure Digital Twins instance at once. It contains recursive logic to handle model dependencies through the deletion process.
-Here are some of the tools included in the sample repository:
+### Visualize models
-| Link to tool | Description |
-| | |
-| [Model uploader](https://github.com/Azure/opendigitaltwins-tools/tree/master/ADTTools#uploadmodels) | Once you're finished creating, extending, or selecting your models, you need to upload them to your Azure Digital Twins instance to make them available for use in your solution. However, if you have many models to uploadΓÇöor if they have many interdependencies that would make ordering individual uploads complicatedΓÇöyou can use this model uploader sample to upload many models at once. |
-| [Model visualizer](https://github.com/Azure/opendigitaltwins-tools/tree/master/AdtModelVisualizer) | Once you have uploaded models into your Azure Digital Twins instance, you can view the models in your Azure Digital Twins instance, including any inheritance and model relationships, using the model visualizer sample. This sample is currently in a draft state. We encourage the digital twins development community to extend and contribute to the sample. |
+Once you have uploaded models into your Azure Digital Twins instance, you can use [Azure Digital Twins Explorer](http://explorer.digitaltwins.azure.net/) to view them. The explorer contains a list of all models in the instance, as well as a **model graph** that illustrates how they relate to each other, including any inheritance and model relationships.
+
+Here's an example of what a model graph might look like:
++
+For more information about the model experience in Azure Digital Twins Explorer, see [Explore models and the Model Graph](how-to-use-azure-digital-twins-explorer.md#explore-models-and-the-model-graph).
## Next steps
digital-twins Concepts Ontologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-ontologies.md
No matter which strategy you choose for integrating an ontology into Azure Digit
Reading this series of articles will guide you in how to use your models in your Azure Digital Twins instance. >[!TIP]
-> You can visualize the models in your ontology using the [Azure Digital Twins Explorer](concepts-azure-digital-twins-explorer.md) or [Azure Digital Twins Model Visualizer](https://github.com/Azure/opendigitaltwins-tools/tree/master/AdtModelVisualizer).
+> You can visualize the models in your ontology using the [model graph](how-to-use-azure-digital-twins-explorer.md#explore-models-and-the-model-graph) in Azure Digital Twins Explorer.
## Next steps
dms Faq Mysql Single To Flex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/faq-mysql-single-to-flex.md
Title: FAQ about using Azure Database Migration Service for Azure Database MySQL
description: Frequently asked questions about using Azure Database Migration Service to perform database migrations from Azure Database MySQL Single Server to Flexible Server. -+ -+ Previously updated : 09/08/2022 Last updated : 09/17/2022 # Frequently Asked Questions (FAQs)
dms Known Issues Azure Sql Db Managed Instance Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-sql-db-managed-instance-online.md
Last updated 02/20/2020
Known issues and limitations that are associated with online migrations from SQL Server to Azure SQL Managed Instance are described below. > [!IMPORTANT]
-> With online migrations of SQL Server to Azure SQL Database, migration of SQL_variant data types is not supported.
+> With online migrations of SQL Server to Azure SQL Managed Instance, migration of SQL_variant data types is not supported.
## Backup requirements
dms Tutorial Mysql Azure Single To Flex Offline Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mysql-azure-single-to-flex-offline-portal.md
Title: "Tutorial: Migrate Azure Database for MySQL - Single Server to Flexible Server offline using DMS via the Azure portal" description: "Learn to perform an offline migration from Azure Database for MySQL - Single Server to Flexible Server by using Azure Database Migration Service."--+++ - Last updated 09/17/2022
dms Tutorial Mysql Azure Single To Flex Online Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mysql-azure-single-to-flex-online-portal.md
Title: "Tutorial: Migrate Azure Database for MySQL - Single Server to Flexible Server online using DMS via the Azure portal" description: "Learn to perform an online migration from Azure Database for MySQL - Single Server to Flexible Server by using Azure Database Migration Service."--+++ - Previously updated : 09/16/2022 Last updated : 09/17/2022
dns Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/cli-samples.md
Title: Azure CLI samples for DNS - Azure DNS description: With this sample, use Azure CLI to create DNS zones and records in Azure DNS. -+ Previously updated : 09/20/2019- Last updated : 09/27/2022+
dns Delegate Subdomain Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/delegate-subdomain-ps.md
Title: Delegate a subdomain - Azure PowerShell - Azure DNS description: With this learning path, get started delegating an Azure DNS subdomain using Azure PowerShell. -+ Previously updated : 05/03/2021- Last updated : 09/27/2022+
dns Delegate Subdomain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/delegate-subdomain.md
Title: Delegate a subdomain - Azure DNS description: With this learning path, get started delegating an Azure DNS subdomain. -+ Previously updated : 05/03/2021- Last updated : 09/27/2022+ # Delegate an Azure DNS subdomain
dns Dns Alerts Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-alerts-metrics.md
Azure DNS provides the following metrics to Azure Monitor for your DNS zones:
For more information, see [metrics definition](../azure-monitor/essentials/metrics-supported.md#microsoftnetworkdnszones).
->[!NOTE]
+> [!NOTE]
> At this time, these metrics are only available for Public DNS zones hosted in Azure DNS. If you have Private Zones hosted in Azure DNS, these metrics won't provide data for those zones. In addition, the metrics and alerting feature is only supported in Azure Public cloud. Support for sovereign clouds will follow at a later time. The most granular element that you can see metrics for is a DNS zone. You currently can't see metrics for individual resource records within a zone.
To view this metric, select **Metrics** explorer experience from the **Monitor**
## Alerts in Azure DNS
-Azure Monitor has alerting that you can configure for each available metric value. See [Azure Monitor alerts](../azure-monitor/alerts/alerts-metric.md) for more information.
+Azure Monitor has alerting that you can configure for each available metric value. For more information, see [Azure Monitor alerts](../azure-monitor/alerts/alerts-metric.md).
1. To configure alerting for Azure DNS zones, select **Alerts** from the *Monitor* page in the Azure portal. Then select **+ New alert rule**.
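If you'd rather script the alert than click through the portal, here's a hedged Azure PowerShell sketch. The zone name, resource group, alert name, and threshold are placeholders; `QueryVolume` is the DNS zone metric discussed above.

```powershell
# Hedged sketch: alert when hourly query volume on a zone exceeds a threshold.
$zoneId = (Get-AzResource -ResourceGroupName 'MyResourceGroup' `
    -ResourceType 'Microsoft.Network/dnszones' -Name 'contoso.com').ResourceId

$condition = New-AzMetricAlertRuleV2Criteria -MetricName 'QueryVolume' `
    -TimeAggregation Total -Operator GreaterThan -Threshold 100000

Add-AzMetricAlertRuleV2 -Name 'dns-query-volume-alert' -ResourceGroupName 'MyResourceGroup' `
    -TargetResourceId $zoneId -WindowSize 01:00:00 -Frequency 00:15:00 `
    -Condition $condition -Severity 3
```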
dns Dns Alias Appservice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-alias-appservice.md
Title: Host load-balanced Azure web apps at the zone apex description: Use an Azure DNS alias record to host load-balanced web apps at the zone apex -+ Previously updated : 04/27/2021- Last updated : 09/27/2022+ # Host load-balanced Azure web apps at the zone apex
dns Dns Alias https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-alias.md
Title: Alias records overview - Azure DNS description: In this article, learn about support for alias records in Microsoft Azure DNS. -+ Previously updated : 04/23/2021- Last updated : 09/27/2022+ # Azure DNS alias records overview
An alias record set is supported for the following record types in an Azure DNS
- CNAME > [!NOTE]
-> If you intend to use an alias record for the A or AAAA record types to point to an [Azure Traffic Manager profile](../traffic-manager/quickstart-create-traffic-manager-profile.md) you must make sure that the Traffic Manager profile has only [external endpoints](../traffic-manager/traffic-manager-endpoint-types.md#external-endpoints). You must provide the IPv4 or IPv6 address for external endpoints in Traffic Manager. You can't use fully-qualified domain names (FQDNs) in endpoints. Ideally, use static IP addresses.
+> If you intend to use an alias record for the A or AAAA record types to point to an [Azure Traffic Manager profile](../traffic-manager/quickstart-create-traffic-manager-profile.md) you must make sure that the Traffic Manager profile has only [external endpoints](../traffic-manager/traffic-manager-endpoint-types.md#external-endpoints). You must provide the IPv4 or IPv6 address for external endpoints in Traffic Manager. You can't use fully qualified domain names (FQDNs) in endpoints. Ideally, use static IP addresses.
## Capabilities
dns Dns Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-custom-domain.md
Title: Integrate Azure DNS with your Azure resources - Azure DNS description: In this article, learn how to use Azure DNS along to provide DNS for your Azure resources. -+ Previously updated : 12/08/2021- Last updated : 09/27/2022+ # Use Azure DNS to provide custom domain settings for an Azure service
dns Dns Delegate Domain Azure Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-delegate-domain-azure-dns.md
Title: 'Tutorial: Host your domain in Azure DNS' description: In this tutorial, you learn how to configure Azure DNS to host your DNS zones using Azure portal. -+ Previously updated : 06/10/2022- Last updated : 09/27/2022+ #Customer intent: As an experienced network administrator, I want to configure Azure DNS, so I can host DNS zones.
dns Dns Domain Delegation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-domain-delegation.md
Title: Azure DNS delegation overview description: Understand how to change domain delegation and use Azure DNS name servers to provide domain hosting. -+ Previously updated : 04/19/2021- Last updated : 09/27/2022+
dns Dns For Azure Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-for-azure-services.md
Title: Use Azure DNS with other Azure services
description: In this learning path, get started on how to use Azure DNS to resolve names for other Azure services documentationcenter: na-+ tags: azure dns
na Previously updated : 05/03/2021- Last updated : 09/27/2022+ # How Azure DNS works with other Azure services
dns Dns Get Started Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-get-started-bicep.md
Title: 'Quickstart: Create an Azure DNS zone and record - Bicep'
description: Learn how to create a DNS zone and record in Azure DNS. This is a step-by-step quickstart to create and manage your first DNS zone and record using Bicep. -- Previously updated : 03/21/2022++ Last updated : 09/27/2022
dns Dns Get Started Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-get-started-template.md
Title: 'Quickstart: Create an Azure DNS zone and record - Azure Resource Manager template (ARM template)'
-description: Learn how to create a DNS zone and record in Azure DNS. This is a step-by-step quickstart to create and manage your first DNS zone and record using Azure Resource Manager template (ARM template).
+description: Learn how to create a DNS zone and record in Azure DNS. This article is a step-by-step quickstart to create and manage your first DNS zone and record using Azure Resource Manager template (ARM template).
Previously updated : 6/2/2021 Last updated : 09/27/2022
The host name `www.2lwynbseszpam.azurequickstart.org` resolves to `1.2.3.4` and
## Clean up resources
-When you no longer need the resources that you created with the DNS zone, delete the resource group. This removes the DNS zone and all the related resources.
+When you no longer need the resources that you created with the DNS zone, delete the resource group. This action removes the DNS zone and all the related resources.
To delete the resource group, call the `Remove-AzResourceGroup` cmdlet:
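A hedged one-line sketch of that call; the resource group name is a placeholder for the group you created in this quickstart.

```powershell
# Placeholder name - substitute the resource group you created for this quickstart.
Remove-AzResourceGroup -Name 'exampleRG'
```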
dns Dns Getstarted Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-getstarted-cli.md
Title: 'Quickstart: Create an Azure DNS zone and record - Azure CLI'
description: Quickstart - Learn how to create a DNS zone and record in Azure DNS. This is a step-by-step guide to create and manage your first DNS zone and record using the Azure CLI. -+ Previously updated : 10/20/2020- Last updated : 09/27/2022+ #Customer intent: As an administrator or developer, I want to learn how to configure Azure DNS using the Azure CLI so I can use Azure DNS for my name resolution.
az network dns zone create -g MyResourceGroup -n contoso.xyz
To create a DNS record, use the `az network dns record-set [record type] add-record` command. For help on A records, see `azure network dns record-set A add-record -h`.
-The following example creates a record with the relative name "www" in the DNS Zone "contoso.xyz" in the resource group "MyResourceGroup". The fully-qualified name of the record set is "www.contoso.xyz". The record type is "A", with IP address "10.10.10.10", and a default TTL of 3600 seconds (1 hour).
+The following example creates a record with the relative name "www" in the DNS Zone "contoso.xyz" in the resource group "MyResourceGroup". The fully qualified name of the record set is "www.contoso.xyz". The record type is "A", with IP address "10.10.10.10", and a default TTL of 3600 seconds (1 hour).
```azurecli
az network dns record-set a add-record -g MyResourceGroup -z contoso.xyz -n www -a 10.10.10.10
```
dns Dns Getstarted Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-getstarted-portal.md
Title: 'Quickstart: Create a DNS zone and record - Azure portal'
description: Use this step-by-step quickstart guide to learn how to create an Azure DNS zone and record using the Azure portal. -- Previously updated : 04/23/2021++ Last updated : 09/27/2022
dns Dns Getstarted Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-getstarted-powershell.md
Title: 'Quickstart: Create an Azure DNS zone and record - Azure PowerShell'
description: Learn how to create a DNS zone and record in Azure DNS. This is a step-by-step quickstart to create and manage your first DNS zone and record using Azure PowerShell. -- Previously updated : 07/21/2022++ Last updated : 09/27/2022
dns Dns Import Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-import-export.md
Title: Import and export a domain zone file - Azure CLI
description: Learn how to import and export a DNS (Domain Name System) zone file to Azure DNS by using Azure CLI -+ Previously updated : 04/29/2021- Last updated : 09/27/2022+
dns Dns Operations Dnszones Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-operations-dnszones-cli.md
Title: Manage DNS zones in Azure DNS - Azure CLI | Microsoft Docs
description: You can manage DNS zones using Azure CLI. This article shows how to update, delete, and create DNS zones on Azure DNS. documentationcenter: na-+ ms.devlang: azurecli na Previously updated : 04/28/2021- Last updated : 09/27/2022+
dns Dns Operations Dnszones Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-operations-dnszones-portal.md
Title: Manage DNS zones in Azure DNS - Azure portal | Microsoft Docs
description: You can manage DNS zones using the Azure portal. This article describes how to update, delete, and create DNS zones on Azure DNS documentationcenter: na-+ na Previously updated : 04/28/2021- Last updated : 09/27/2022+ # How to manage DNS Zones in the Azure portal
dns Dns Operations Dnszones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-operations-dnszones.md
Title: Manage DNS zones in Azure DNS - PowerShell | Microsoft Docs
description: You can manage DNS zones using Azure PowerShell. This article describes how to update, delete, and create DNS zones on Azure DNS documentationcenter: na-+ na Previously updated : 04/27/2021- Last updated : 09/27/2022+
$zone.Tags.Add("status","approved")
Set-AzDnsZone -Zone $zone
```
-When using `Set-AzDnsZone` with a $zone object, [Etag checks](dns-zones-records.md#etags) are used to ensure concurrent changes aren't overwritten. You can use the optional `-Overwrite` switch to suppress these checks.
+When you use `Set-AzDnsZone` with a $zone object, [Etag checks](dns-zones-records.md#etags) are used to ensure concurrent changes aren't overwritten. You can use the optional `-Overwrite` switch to suppress these checks.
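A hedged sketch of that get, modify, commit flow, with placeholder names:

```powershell
# Hedged sketch: retrieve the zone, change a tag, and push the update.
# Zone and resource group names are placeholders; assumes the zone already has a Tags collection.
$zone = Get-AzDnsZone -Name 'contoso.com' -ResourceGroupName 'MyResourceGroup'
$zone.Tags.Add('owner','dns-team')
Set-AzDnsZone -Zone $zone -Overwrite   # -Overwrite skips the Etag concurrency check
```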
## Delete a DNS Zone
dns Dns Operations Recordsets Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-operations-recordsets-cli.md
Title: Manage DNS records in Azure DNS using the Azure CLI description: Managing DNS record sets and records on Azure DNS when hosting your domain on Azure DNS.-+ ms.assetid: 5356a3a5-8dec-44ac-9709-0c2b707f6cb5 ms.devlang: azurecli Previously updated : 04/28/2021- Last updated : 09/27/2022+ # Manage DNS records and recordsets in Azure DNS using the Azure CLI
To remove a DNS record from an existing record set, use `az network dns record-s
This command deletes a DNS record from a record set. If the last record in a record set is deleted, the record set itself is also deleted. To keep the empty record set instead, use the `--keep-empty-record-set` option.
-When using the `az network dns record-set <record-type> add-record` command, you need to specify the record getting deleted and the zone to delete from. These parameters are described in [Create a DNS record](#create-a-dns-record) and [Create records of other types](#create-records-of-other-types) above.
+When you use the `az network dns record-set <record-type> remove-record` command, you need to specify the record being deleted and the zone to delete from. These parameters are described in [Create a DNS record](#create-a-dns-record) and [Create records of other types](#create-records-of-other-types) above.
The following example deletes the A record with value '1.2.3.4' from the record set named *www* in the zone *contoso.com*, in the resource group *MyResourceGroup*.
dns Dns Operations Recordsets Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-operations-recordsets-portal.md
Title: Manage DNS record sets and records with Azure DNS description: Azure DNS provides the capability to manage DNS record sets and records when hosting your domain. -+ Previously updated : 04/28/2021- Last updated : 09/27/2022+ # Manage DNS records and record sets by using the Azure portal
dns Dns Operations Recordsets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-operations-recordsets.md
Title: Manage DNS records in Azure DNS using Azure PowerShell | Microsoft Docs
description: Managing DNS record sets and records on Azure DNS when hosting your domain on Azure DNS. All PowerShell commands for operations on record sets and records. documentationcenter: na-+ na Previously updated : 04/28/2021- Last updated : 09/27/2022+ # Manage DNS records and recordsets in Azure DNS using Azure PowerShell
The steps for modifying an existing record set are similar to the steps you take
* Changing the record set metadata and time to live (TTL) 3. Commit your changes by using the `Set-AzDnsRecordSet` cmdlet. This *replaces* the existing record set in Azure DNS with the record set specified.
-When using `Set-AzDnsRecordSet`, [Etag checks](dns-zones-records.md#etags) are used to ensure concurrent changes aren't overwritten. You can use the optional `-Overwrite` switch to suppress these checks.
+When you use the `Set-AzDnsRecordSet` command, [Etag checks](dns-zones-records.md#etags) are used to ensure concurrent changes aren't overwritten. You can use the optional `-Overwrite` switch to suppress these checks.
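A hedged sketch of the full retrieve, modify, commit flow, with placeholder names:

```powershell
# Hedged sketch: fetch the record set, add a record, adjust the TTL, and commit.
# Record set, zone, and resource group names are placeholders.
$rs = Get-AzDnsRecordSet -Name 'www' -RecordType A -ZoneName 'contoso.com' -ResourceGroupName 'MyResourceGroup'
Add-AzDnsRecordConfig -RecordSet $rs -Ipv4Address '1.2.3.5'
$rs.Ttl = 600

Set-AzDnsRecordSet -RecordSet $rs               # Etag check rejects the update if the set changed meanwhile
# Set-AzDnsRecordSet -RecordSet $rs -Overwrite  # add -Overwrite to skip that check
```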
### To update a record in an existing record set
dns Dns Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-overview.md
Title: What is Azure DNS? description: Overview of DNS hosting service on Microsoft Azure. Host your domain on Microsoft Azure.-+ Previously updated : 4/22/2021- Last updated : 09/27/2022+ #Customer intent: As an administrator, I want to evaluate Azure DNS so I can determine if I want to use it instead of my current DNS service.
dns Dns Private Resolver Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-portal.md
description: In this quickstart, you create and test a private DNS resolver in A
Previously updated : 06/02/2022 Last updated : 09/27/2022
Next, add a virtual network to the resource group that you created, and configur
5. Select the **Outbound Endpoints** tab, select **Add an endpoint**, and then enter a name next to **Endpoint name** (ex: myoutboundendpoint). 6. Next to **Subnet**, select the outbound endpoint subnet you created (ex: snet-outbound, 10.1.1.0/28) and then select **Save**. 7. Select the **Ruleset** tab, select **Add a ruleset**, and enter the following:
- - Ruleset name: Enter a name for your ruleset (ex: myruleset).
+ - Ruleset name: Enter a name for your ruleset (ex: **myruleset**).
- Endpoints: Select the outbound endpoint that you created (ex: myoutboundendpoint). 8. Under **Rules**, select **Add** and enter your conditional DNS forwarding rules. For example: - Rule name: Enter a rule name (ex: contosocom).
dns Dns Private Resolver Get Started Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-powershell.md
description: In this quickstart, you learn how to create and manage your first p
Previously updated : 09/20/2022 Last updated : 09/27/2022
dns Dns Private Resolver Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-overview.md
Previously updated : 09/22/2022 Last updated : 09/27/2022 #Customer intent: As an administrator, I want to evaluate Azure DNS Private Resolver so I can determine if I want to use it instead of my current DNS resolver service.
dns Dns Protect Private Zones Recordsets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-protect-private-zones-recordsets.md
Previously updated : 05/07/2021 Last updated : 09/27/2022 ms.devlang: azurecli
dns Dns Protect Zones Recordsets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-protect-zones-recordsets.md
Previously updated : 05/05/2021 Last updated : 09/27/2022 ms.devlang: azurecli
dns Dns Reverse Dns For Azure Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-reverse-dns-for-azure-services.md
Title: Reverse DNS for Azure services - Azure DNS
description: With this learning path, get started configuring reverse DNS lookups for services hosted in Azure. documentationcenter: na-+ na Previously updated : 04/29/2021- Last updated : 09/27/2022+
dns Dns Reverse Dns Hosting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-reverse-dns-hosting.md
Title: Host reverse DNS lookup zones in Azure DNS description: Learn how to use Azure DNS to host the reverse DNS lookup zones for your IP ranges-+ Previously updated : 04/29/2021- Last updated : 09/27/2022+ ms.devlang: azurecli
dns Dns Reverse Dns Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-reverse-dns-overview.md
Title: Overview of reverse DNS in Azure - Azure DNS
description: In this learning path, get started learning how reverse DNS works and how it can be used in Azure documentationcenter: na-+ na Previously updated : 04/26/2021- Last updated : 09/27/2022+ # Overview of reverse DNS and support in Azure
dns Dns Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-sdk.md
description: In this learning path, get started creating DNS zones and record sets in Azure DNS by using the .NET SDK. documentationcenter: na-+ ms.assetid: eed99b87-f4d4-4fbf-a926-263f7e30b884
ms.devlang: csharp
na Previously updated : 05/05/2021- Last updated : 09/27/2022+
dns Dns Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-troubleshoot.md
Title: Troubleshooting guide - Azure DNS description: In this learning path, get started troubleshooting common issues with Azure DNS -+ Previously updated : 11/10/2021- Last updated : 09/27/2022+ # Azure DNS troubleshooting guide
dns Dns Web Sites Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-web-sites-custom-domain.md
Title: 'Tutorial: Create custom Azure DNS records for a web app' description: In this tutorial, you learn how to create custom domain DNS records for web apps using Azure DNS. -+ Previously updated : 06/10/2022- Last updated : 09/27/2022+ #Customer intent: As an experienced network administrator, I want to create DNS records in Azure DNS, so I can host a web app in a custom domain.
dns Dns Zones Records https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-zones-records.md
Title: DNS Zones and Records overview - Azure DNS description: Overview of support for hosting DNS zones and records in Microsoft Azure DNS.-+ ms.assetid: be4580d7-aa1b-4b6b-89a3-0991c0cda897 Previously updated : 04/20/2021- Last updated : 09/27/2022+ # Overview of DNS zones and records
TXT records are used to map domain names to arbitrary text strings. They're used
The DNS standards permit a single TXT record to contain multiple strings, each of which may be up to 255 characters in length. Where multiple strings are used, they are concatenated by clients and treated as a single string.
-When calling the Azure DNS REST API, you need to specify each TXT string separately. When using the Azure portal, PowerShell or CLI interfaces you should specify a single string per record, which is automatically divided into 255-character segments if necessary.
+When calling the Azure DNS REST API, you need to specify each TXT string separately. When you use the Azure portal, PowerShell, or CLI interfaces, you should specify a single string per record. This string is automatically divided into 255-character segments if necessary.
The multiple strings in a DNS record shouldn't be confused with the multiple TXT records in a TXT record set. A TXT record set can contain multiple records, *each of which* can contain multiple strings. Azure DNS supports a total string length of up to 1024 characters in each TXT record set (across all records combined).
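For illustration, a minimal Azure PowerShell sketch of creating a TXT record with a single string value (the zone, resource group, and record set names are placeholders, and the Az.Dns module is assumed):

```azurepowershell-interactive
# Create a TXT record set with one record containing a single string.
# Azure DNS splits strings longer than 255 characters into segments automatically.
$record = New-AzDnsRecordConfig -Value "v=spf1 include:contoso.com -all"
New-AzDnsRecordSet -Name "txtdemo" -RecordType TXT -ZoneName "contoso.com" -ResourceGroupName "MyResourceGroup" -Ttl 3600 -DnsRecords $record
```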
dns Private Dns Autoregistration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-autoregistration.md
Title: What is auto registration feature in Azure DNS private zones? description: Overview of auto registration feature in Azure DNS private zones. -+ Previously updated : 04/26/2021- Last updated : 09/27/2022+ # What is the auto registration feature in Azure DNS private zones?
dns Private Dns Getstarted Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-getstarted-cli.md
Title: Quickstart - Create an Azure private DNS zone using the Azure CLI description: In this quickstart, you create and test a private DNS zone and record in Azure DNS. This is a step-by-step guide to create and manage your first private DNS zone and record using Azure CLI. -+ Previously updated : 05/23/2022- Last updated : 09/27/2022+ #Customer intent: As an experienced network administrator, I want to create an Azure private DNS zone, so I can resolve host names on my private virtual networks.
dns Private Dns Getstarted Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-getstarted-portal.md
description: In this quickstart, you create and test a private DNS zone and reco
Previously updated : 05/18/2022 Last updated : 09/27/2022
dns Private Dns Getstarted Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-getstarted-powershell.md
Title: Quickstart - Create an Azure private DNS zone using Azure PowerShell description: In this quickstart, you learn how to create and manage your first private DNS zone and record using Azure PowerShell. -- Previously updated : 05/23/2022++ Last updated : 09/27/2022
dns Private Dns Import Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-import-export.md
description: Learn how to import and export a DNS zone file to Azure private DN
Previously updated : 03/16/2021 Last updated : 09/27/2022
dns Private Dns Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-migration-guide.md
Title: Migrating legacy Azure DNS private zones to the new resource model
-description: This guide provides step by step instruction on how to migrate legacy private DNS zones to the latest resource model
+description: This guide provides step by step instruction on how to migrate legacy private DNS zones to latest resource model
Previously updated : 09/08/2022 Last updated : 09/27/2022
dns Private Dns Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-overview.md
Previously updated : 09/20/2022 Last updated : 09/27/2022 #Customer intent: As an administrator, I want to evaluate Azure Private DNS so I can determine if I want to use it instead of my current DNS service.
dns Private Dns Privatednszone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-privatednszone.md
Previously updated : 09/08/2022 Last updated : 09/27/2022
dns Private Dns Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-scenarios.md
Title: Scenarios for Private Zones - Azure DNS description: In this article, learn about common scenarios for using Azure DNS Private Zones. -+ Previously updated : 04/27/2021- Last updated : 09/27/2022+ # Azure DNS private zones scenarios
dns Private Dns Virtual Network Links https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-virtual-network-links.md
Title: What is a virtual network link subresource of Azure DNS private zones description: Overview of virtual network link sub resource an Azure DNS private zone -+ Previously updated : 04/26/2021- Last updated : 09/27/2022+ # What is a virtual network link?
dns Dns Cli Create Dns Zone Record https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/scripts/dns-cli-create-dns-zone-record.md
Title: Create a DNS zone and record for a domain name - Azure CLI - Azure DNS description: This Azure CLI script example shows how to create a DNS zone and record for a domain name -+ Previously updated : 09/20/2019- Last updated : 09/27/2022+
dns Find Unhealthy Dns Records https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/scripts/find-unhealthy-dns-records.md
Title: Find unhealthy DNS records in Azure DNS - PowerShell script sample description: In this article, learn how to use an Azure PowerShell script to find unhealthy DNS records.-- Previously updated : 11/10/2021++ Last updated : 09/27/2022
dns Tutorial Alias Pip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/tutorial-alias-pip.md
Title: 'Tutorial: Create an Azure DNS alias record to refer to an Azure public IP address' description: In this tutorial, you learn how to configure an Azure DNS alias record to reference an Azure public IP address. -+ Previously updated : 06/20/2022- Last updated : 09/27/2022+ #Customer intent: As an experienced network administrator, I want to configure Azure an DNS alias record to refer to an Azure public IP address.
dns Tutorial Alias Rr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/tutorial-alias-rr.md
Title: 'Tutorial: Create an alias record to refer to a resource record in a zone' description: In this tutorial, you learn how to configure an alias record to reference a resource record within the zone.--++ Previously updated : 06/10/2022 Last updated : 09/27/2022 #Customer intent: As an experienced network administrator, I want to configure Azure an DNS alias record to refer to a resource record within the zone.
dns Tutorial Alias Tm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/tutorial-alias-tm.md
Title: 'Tutorial: Create an alias record to support apex domain name with Traffi
description: In this tutorial, you learn how to create and configure an Azure DNS alias record to support using your apex domain name with Traffic Manager. -+ Previously updated : 06/20/2022- Last updated : 09/27/2022+ #Customer intent: As an experienced network administrator, I want to configure Azure DNS alias records to use my apex domain name with Traffic Manager.
dns Tutorial Public Dns Zones Child https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/tutorial-public-dns-zones-child.md
ms.assetid: be4580d7-aa1b-4b6b-89a3-0991c0cda897
Previously updated : 06/10/2022 Last updated : 09/27/2022
hdinsight Hdinsight Hadoop Add Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-add-storage.md
description: Learn how to add additional Azure Storage accounts to an existing H
Previously updated : 04/05/2022 Last updated : 09/29/2022 # Add additional storage accounts to HDInsight
After removing these keys and saving the configuration, you need to restart Oozi
### Storage firewall
-If you choose to secure your storage account with the **Firewalls and virtual networks** restrictions on **Selected networks**, be sure to enable the exception **Allow trusted Microsoft services...** so that HDInsight can access your storage account`.`
+If you choose to secure your storage account with the **Firewalls and virtual networks** restrictions on **Selected networks**, be sure to enable the exception **Allow trusted Microsoft services** so that HDInsight can access your storage account.
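For example, a hedged Azure PowerShell sketch (storage account and resource group names are placeholders) that keeps access restricted to selected networks while letting trusted Microsoft services such as HDInsight through:

```azurepowershell-interactive
# Keep the default action as Deny (selected networks only) but allow trusted Azure services to bypass the firewall.
Update-AzStorageAccountNetworkRuleSet -ResourceGroupName "MyResourceGroup" -Name "mystorageaccount" -Bypass AzureServices -DefaultAction Deny
```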
### Unable to access storage after changing key
iot-central Concepts Device Implementation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-device-implementation.md
The [Azure IoT device SDKs](#device-sdks) include support for the IoT Plug and P
### Device model
-A device model is defined by using the [DTDL](https://github.com/Azure/opendigitaltwins-dtdl) modeling language. This language lets you define:
+A device model is defined by using the [DTDL V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) modeling language. This language lets you define:
- The telemetry the device sends. The definition includes the name and data type of the telemetry. For example, a device sends temperature telemetry as a double. - The properties the device reports to IoT Central. A property definition includes its name and data type. For example, a device reports the state of a valve as a Boolean.
iot-central Concepts Faq Apaas Paas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-faq-apaas-paas.md
After the migration, devices aren't automatically deleted from the IoT Central a
So that you can seamlessly migrate devices from your IoT Central applications to PaaS solution, follow these guidelines: -- The device must be an IoT Plug and Play device that uses a [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) model. IoT Central requires all devices to have a DTDL model. This simplifies the interoperability between an IoT PaaS solution and IoT Central.
+- The device must be an IoT Plug and Play device that uses a [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) model. IoT Central requires all devices to have a DTDL model. This simplifies the interoperability between an IoT PaaS solution and IoT Central.
- The device must follow the [IoT Central data formats for telemetry, property, and commands](concepts-telemetry-properties-commands.md).
iot-central Concepts Telemetry Properties Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-telemetry-properties-commands.md
Each example shows a snippet from the device model that defines the type and exa
> [!NOTE] > IoT Central accepts any valid JSON but it can only be used for visualizations if it matches a definition in the device model. You can export data that doesn't match a definition, see [Export IoT data to cloud destinations using Blob Storage](howto-export-to-blob-storage.md).
-The JSON file that defines the device model uses the [Digital Twin Definition Language (DTDL) v2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md).
+The JSON file that defines the device model uses the [Digital Twin Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md).
For sample device code that shows some of these payloads in use, see the [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md) tutorial.
iot-central Howto Create Custom Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-custom-rules.md
You can configure an IoT Central application to continuously export telemetry to
Your Event Hubs namespace looks like the following screenshot:
-```:::image type="content" source="media/howto-create-custom-rules/event-hubs-namespace.png" alt-text="Screenshot of Event Hubs namespace." border="false":::
## Define the function
This solution uses an Azure Functions app to send an email notification when the
The portal creates a default function called **HttpTrigger1**:
-```:::image type="content" source="media/howto-create-custom-rules/default-function.png" alt-text="Screenshot of Edit HTTP trigger function.":::
1. Replace the C# code with the following code:
To test the function in the portal, first choose **Logs** at the bottom of the c
The function log messages appear in the **Logs** panel:
-```:::image type="content" source="media/howto-create-custom-rules/function-app-logs.png" alt-text="Function log output":::
After a few minutes, the **To** email address receives an email with the following content:
iot-central Howto Manage Device Templates With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-device-templates-with-rest-api.md
To learn how to manage device templates by using the IoT Central UI, see [How to
A device template contains a device model, cloud property definitions, and view definitions. The REST API lets you manage the device model and cloud property definitions. Use the UI to create and manage views.
-The device model section of a device template specifies the capabilities of a device you want to connect to your application. Capabilities include telemetry, properties, and commands. The model is defined using [DTDL](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md).
+The device model section of a device template specifies the capabilities of a device you want to connect to your application. Capabilities include telemetry, properties, and commands. The model is defined using [DTDL V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md).
## Device templates REST API
iot-central Howto Use Location Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-use-location-data.md
The following screenshot shows a device template with examples of a device prope
:::image type="content" source="media/howto-use-location-data/location-device-template.png" alt-text="Screenshot showing location property definition in device template" lightbox="media/howto-use-location-data/location-device-template.png":::
-For reference, the [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) definitions for these capabilities look like the following snippet:
+For reference, the [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) definitions for these capabilities look like the following snippet:
```json {
iot-develop Concepts Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-architecture.md
The following diagram shows the key elements of an IoT Plug and Play solution:
## Model repository
-The [model repository](./concepts-model-repository.md) is a store for model and interface definitions. You define models and interfaces using the [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl).
+The [model repository](./concepts-model-repository.md) is a store for model and interface definitions. You define models and interfaces using the [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md).
The web UI lets you manage the models and interfaces.
iot-develop Concepts Convention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-convention.md
IoT Plug and Play devices should follow a set of conventions when they exchange
Devices can include [modules](../iot-hub/iot-hub-devguide-module-twins.md), or be implemented in an [IoT Edge module](../iot-edge/about-iot-edge.md) hosted by the IoT Edge runtime.
-You describe the telemetry, properties, and commands that an IoT Plug and Play device implements with a [Digital Twins Definition Language v2 (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl) _model_. There are two types of model referred to in this article:
+You describe the telemetry, properties, and commands that an IoT Plug and Play device implements with a [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) _model_. There are two types of model referred to in this article:
- **No component** - A model with no components. The model declares telemetry, properties, and commands as top-level properties in the contents section of the main interface. In the Azure IoT explorer tool, this model appears as a single _default component_. - **Multiple components** - A model composed of two or more interfaces. A main interface, which appears as the _default component_, with telemetry, properties, and commands. One or more interfaces declared as components with additional telemetry, properties, and commands.
On a device or module, multiple component interfaces use command names with the
Now that you've learned about IoT Plug and Play conventions, here are some additional resources: -- [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl)
+- [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md)
- [C device SDK](/azure/iot-hub/iot-c-sdk-ref/) - [IoT REST API](/rest/api/iothub/device) - [IoT Plug and Play modeling guide](concepts-modeling-guide.md)
iot-develop Concepts Developer Guide Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-developer-guide-device.md
This guide describes the basic steps required to create a device, module, or IoT
To build an IoT Plug and Play device, module, or IoT Edge module, follow these steps: 1. Ensure your device is using either the MQTT or MQTT over WebSockets protocol to connect to Azure IoT Hub.
-1. Create a [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl) model to describe your device. To learn more, see [Understand components in IoT Plug and Play models](concepts-modeling-guide.md).
+1. Create a [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) model to describe your device. To learn more, see [Understand components in IoT Plug and Play models](concepts-modeling-guide.md).
1. Update your device or module to announce the `model-id` as part of the device connection. 1. Implement telemetry, properties, and commands using the [IoT Plug and Play conventions](concepts-convention.md)
Once your device or module implementation is ready, use the [Azure IoT explorer]
Now that you've learned about IoT Plug and Play device development, here are some additional resources: -- [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl)
+- [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md)
- [C device SDK](/azure/iot-hub/iot-c-sdk-ref/) - [IoT REST API](/rest/api/iothub/device) - [Understand components in IoT Plug and Play models](concepts-modeling-guide.md)
iot-develop Concepts Developer Guide Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-developer-guide-service.md
The service SDKs let you access device information from a solution, such as a de
Now that you've learned about device modeling, here are some additional resources: -- [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl)
+- [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md)
- [C device SDK](/azure/iot-hub/iot-c-sdk-ref/) - [IoT REST API](/rest/api/iothub/device) - [IoT Plug and Play modeling guide](concepts-modeling-guide.md)
iot-develop Concepts Digital Twin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-digital-twin.md
# Understand IoT Plug and Play digital twins
-An IoT Plug and Play device implements a model described by the [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl) schema. A model describes the set of components, properties, commands, and telemetry messages that a particular device can have.
+An IoT Plug and Play device implements a model described by the [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) schema. A model describes the set of components, properties, commands, and telemetry messages that a particular device can have.
-IoT Plug and Play uses DTDL version 2. For more information about this version, see the [Digital Twins Definition Language (DTDL) - version 2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) specification on GitHub.
+IoT Plug and Play uses DTDL version 2. For more information about this version, see the [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) specification on GitHub.
> [!NOTE] > DTDL isn't exclusive to IoT Plug and Play. Other IoT services, such as [Azure Digital Twins](../digital-twins/overview.md), use it to represent entire environments such as buildings and energy networks.
iot-develop Concepts Model Parser https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-model-parser.md
# Understand the digital twins model parser
-The Digital Twins Definition Language (DTDL) is described in the [DTDL Specification](https://github.com/Azure/opendigitaltwins-dtdl). Users can use the _Digital Twins Model Parser_ NuGet package to validate and query a model defined in multiple files.
+The Digital Twins Definition Language (DTDL) is described in the [DTDL Specification V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md). Users can use the _Digital Twins Model Parser_ NuGet package to validate and query a model defined in multiple files.
## Install the DTDL model parser
iot-develop Concepts Model Repository https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-model-repository.md
# Device models repository
-The device models repository (DMR) enables device builders to manage and share IoT Plug and Play device models. The device models are JSON LD documents defined using the [Digital Twins Modeling Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md).
+The device models repository (DMR) enables device builders to manage and share IoT Plug and Play device models. The device models are JSON LD documents defined using the [Digital Twins Modeling Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md).
The DMR defines a pattern to store DTDL interfaces in a folder structure based on the device twin model identifier (DTMI). You can locate an interface in the DMR by converting the DTMI to a relative path. For example, the `dtmi:com:example:Thermostat;1` DTMI translates to `/dtmi/com/example/thermostat-1.json` and can be obtained from the public base URL `devicemodels.azure.com` at the URL [https://devicemodels.azure.com/dtmi/com/example/thermostat-1.json](https://devicemodels.azure.com/dtmi/com/example/thermostat-1.json).
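As a sketch, a hypothetical PowerShell helper that applies this convention (lowercase the DTMI, replace `:` with `/` and `;` with `-`, then append `.json`) and fetches the interface from the public endpoint:

```powershell
# Hypothetical helper: convert 'dtmi:com:example:Thermostat;1' to 'dtmi/com/example/thermostat-1.json'.
function Convert-DtmiToPath {
    param([string]$Dtmi)
    ($Dtmi.ToLowerInvariant() -replace ':', '/' -replace ';', '-') + '.json'
}

$path = Convert-DtmiToPath 'dtmi:com:example:Thermostat;1'
# Retrieve the interface definition from the device models repository.
Invoke-RestMethod -Uri "https://devicemodels.azure.com/$path"
```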
iot-develop Concepts Modeling Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-modeling-guide.md
At the core of IoT Plug and Play, is a device _model_ that describes a device's
To learn more about how IoT Plug and Play uses device models, see [IoT Plug and Play device developer guide](concepts-developer-guide-device.md) and [IoT Plug and Play service developer guide](concepts-developer-guide-service.md).
-To define a model, you use the Digital Twins Definition Language (DTDL). DTDL uses a JSON variant called [JSON-LD](https://json-ld.org/). The following snippet shows the model for a thermostat device that:
+To define a model, you use the Digital Twins Definition Language (DTDL) V2. DTDL uses a JSON variant called [JSON-LD](https://json-ld.org/). The following snippet shows the model for a thermostat device that:
- Has a unique model ID: `dtmi:com:example:Thermostat;1`. - Sends temperature telemetry.
iot-develop Howto Convert To Pnp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/howto-convert-to-pnp.md
In summary, the sample implements the following capabilities:
## Design a model
-Every IoT Plug and Play device has a model that describes the features and capabilities of the device. The model uses the [Digital Twin Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) to describe the device capabilities.
+Every IoT Plug and Play device has a model that describes the features and capabilities of the device. The model uses the [Digital Twin Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) to describe the device capabilities.
For a simple model that maps the existing capabilities of your device, use the *Telemetry*, *Property*, and *Command* DTDL elements.
iot-develop Howto Manage Digital Twin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/howto-manage-digital-twin.md
At the time of writing, the digital twin API version is `2020-09-30`.
## Update a digital twin
-An IoT Plug and Play device implements a model described by [Digital Twins Definition Language v2 (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl). Solution developers can use the **Update Digital Twin API** to update the state of component and the properties of the digital twin.
+An IoT Plug and Play device implements a model described by [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl). Solution developers can use the **Update Digital Twin API** to update the state of component and the properties of the digital twin.
The IoT Plug and Play device used as an example in this article implements the [Temperature Controller model](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/TemperatureController.json) with [Thermostat](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/Thermostat.json) components.
iot-develop Overview Iot Plug And Play https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/overview-iot-plug-and-play.md
IoT Plug and Play enables solution builders to integrate IoT devices with their
You can group these elements in interfaces to reuse across models to make collaboration easier and to speed up development.
-To make IoT Plug and Play work with [Azure Digital Twins](../digital-twins/overview.md), you define models and interfaces using the [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl). IoT Plug and Play and the DTDL are open to the community, and Microsoft welcomes collaboration with customers, partners, and industry. Both are based on open W3C standards such as JSON-LD and RDF, which enables easier adoption across services and tooling.
+To make IoT Plug and Play work with [Azure Digital Twins](../digital-twins/overview.md), you define models and interfaces using the [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md). IoT Plug and Play and the DTDL are open to the community, and Microsoft welcomes collaboration with customers, partners, and industry. Both are based on open W3C standards such as JSON-LD and RDF, which enables easier adoption across services and tooling.
There's no extra cost for using IoT Plug and Play and DTDL. Standard rates for [Azure IoT Hub](../iot-hub/about-iot-hub.md) and other Azure services remain the same.
iot-develop Tutorial Migrate Device To Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/tutorial-migrate-device-to-module.md
This tutorial shows you how to connect a generic IoT Plug and Play [module](../iot-hub/iot-hub-devguide-module-twins.md).
-A device is an IoT Plug and Play device if it publishes its model ID when it connects to an IoT hub and implements the properties and methods described in the Digital Twins Definition Language (DTDL) model identified by the model ID. To learn more about how devices use a DTDL and model ID, see [IoT Plug and Play developer guide](./concepts-developer-guide-device.md). Modules use model IDs and DTDL models in the same way.
+A device is an IoT Plug and Play device if it publishes its model ID when it connects to an IoT hub and implements the properties and methods described in the Digital Twins Definition Language (DTDL) V2 model identified by the model ID. To learn more about how devices use a DTDL and model ID, see [IoT Plug and Play developer guide](./concepts-developer-guide-device.md). Modules use model IDs and DTDL models in the same way.
To demonstrate how to implement an IoT Plug and Play module, this tutorial shows you how to:
iot-edge How To Retrieve Iot Edge Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-retrieve-iot-edge-logs.md
The [Logger class in IoT Edge](https://github.com/Azure/iotedge/blob/master/edge
Use the **GetModuleLogs** direct method to retrieve the logs of an IoT Edge module. >[!TIP]
+>Use the `since` and `until` filter options to limit the range of logs retrieved. Calling this direct method without bounds retrieves all the logs, which can be large, time-consuming, or costly.
+>
>The IoT Edge troubleshooting page in the Azure portal provides a simplified experience for viewing module logs. For more information, see [Monitor and troubleshoot IoT Edge devices from the Azure portal](troubleshoot-in-portal.md). This method accepts a JSON payload with the following schema:
Use the **UploadModuleLogs** direct method to send the requested logs to a speci
::: moniker range=">=iotedge-2020-11" > [!NOTE]
+> Use the `since` and `until` filter options to limit the range of logs retrieved. Calling this direct method without bounds retrieves all the logs, which can be large, time-consuming, or costly.
+>
> If you wish to upload logs from a device behind a gateway device, you will need to have the [API proxy and blob storage modules](how-to-configure-api-proxy-module.md) configured on the top layer device. These modules route the logs from your lower layer device through your gateway device to your storage in the cloud. ::: moniker-end
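For reference, a sketch of a log request payload that bounds the time range with the `since` and `until` filters mentioned in the tips above (the timestamps and module name are placeholders; verify the field names against the payload schema in the article):

```powershell
# Request logs for a one-hour window rather than the full log history.
# Pass this payload when invoking the GetModuleLogs direct method.
$payload = @'
{
  "schemaVersion": "1.0",
  "items": [
    {
      "id": "edgeAgent",
      "filter": { "since": "2022-09-27T08:00:00Z", "until": "2022-09-27T09:00:00Z" }
    }
  ],
  "encoding": "none",
  "contentType": "text"
}
'@
```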
iot-hub-device-update Device Update Plug And Play https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-plug-and-play.md
IoT Hub device twin example:
"deviceProperties": { "manufacturer": "contoso", "model": "virtual-vacuum-v1",
- "interfaceId": "dtmi:azure:iot:deviceUpdate;1",
+ "interfaceId": "dtmi:azure:iot:deviceUpdateModel;1",
"aduVer": "DU;agent/0.8.0-rc1-public-preview", "doVer": "DU;lib/v0.6.0+20211001.174458.c8c4051,DU;agent/v0.6.0+20211001.174418.c8c4051" },
iot-hub Monitor Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/monitor-iot-hub.md
The **Overview** page in the Azure portal for each IoT hub includes charts that
:::image type="content" source="media/monitor-iot-hub/overview-portal.png" alt-text="Default metric charts on IoT hub Overview page.":::
-Be aware that the message count value can be delayed by 1 minute, and that, for reasons having to do with the IoT Hub service infrastructure, the value can sometimes bounce between higher and lower values on refresh. This counter should only be incorrect for values accrued over the last minute.
+A correct message count value might be delayed by 1 minute. Due to the IoT Hub service infrastructure, the value can sometimes bounce between higher and lower values on refresh. This counter should be incorrect only for values accrued over the last minute.
-The information presented on the Overview pane is useful, but represents only a small amount of the monitoring data that is available for an IoT hub. Some monitoring data is collected automatically and is available for analysis as soon as you create your IoT hub. You can enable additional types of data collection with some configuration.
+The information presented on the **Overview pane** is useful, but represents only a small amount of monitoring data that's available for an IoT hub. Some monitoring data is collected automatically and available for analysis as soon as you create your IoT hub. You can enable other types of data collection with some configuration.
## What is Azure Monitor?
-Azure IoT Hub creates monitoring data using [Azure Monitor](../azure-monitor/overview.md), which is a full stack monitoring service in Azure that provides a complete set of features to monitor your Azure resources in addition to resources in other clouds and on-premises.
+Azure IoT Hub monitors data using [Azure Monitor](../azure-monitor/overview.md), a full stack monitoring service. Azure Monitor can monitor your Azure resources and other cloud or on-premises resources.
Start with the article [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md), which describes the following concepts: - What is Azure Monitor?-- Costs associated with monitoring - Monitoring data collected in Azure - Configuring data collection-- Standard tools in Azure for analyzing and alerting on monitoring data
+- Metrics and logs
+- Standard tools in Azure for analysis and insights
+- Alerts fired on monitoring data
-The following sections build on this article by describing the specific data gathered for Azure IoT Hub and providing examples for configuring data collection and analyzing this data with Azure tools.
+For more information on the metrics and logs created by Azure IoT Hub, see [Monitoring Azure IoT Hub data reference](monitor-iot-hub-reference.md).
-## Monitoring data
+> [!IMPORTANT]
+> The events emitted by the IoT Hub service using Azure Monitor resource logs aren't guaranteed to be reliable or ordered. Some events might be lost or delivered out of order. Resource logs aren't intended to be real-time, so it may take several minutes for events to be logged to your choice of destination.
-Azure IoT Hub collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data).
+The rest of this article builds on the **Monitoring Azure resources with Azure Monitor** article by describing the specific data gathered for Azure IoT Hub. You'll see examples for configuring your data collection and how to analyze this data with Azure tools.
-See [Monitoring Azure IoT Hub data reference](monitor-iot-hub-reference.md) for detailed information on the metrics and logs created by Azure IoT Hub.
+## Collection and routing
-> [!IMPORTANT]
-> The events emitted by the IoT Hub service using Azure Monitor resource logs are not guaranteed to be reliable or ordered. Some events might be lost or delivered out of order. Resource logs also aren't meant to be real-time, and it may take several minutes for events to be logged to your choice of destination.
+Platform metrics, the Activity log, and resource logs have unique collection, storage, and routing specifications.
-## Collection and routing
+* Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
-Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
+* Resource logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations.
-Resource logs are not collected and stored until you create a diagnostic setting and route them to one or more locations.
+* Metrics and logs can be routed to several locations including:
+ - The Azure Monitor Logs store via an associated Log Analytics workspace. There they can be analyzed using Log Analytics.
+ - Azure Storage for archiving and offline analysis
+ - An Event Hubs endpoint where they can be read by external applications, for example, third-party security information and event management (SIEM) tools.
-Metrics and logs can be routed to several locations including:
-- The Azure Monitor Logs store via an associated Log Analytics workspace. There they can be analyzed using Log Analytics.-- Azure Storage for archiving and offline analysis -- An Event Hubs endpoint where they can be read by external applications, for example, third-party SIEM tools.
+In the Azure portal from your IoT hub under **Monitoring**, you can select **Diagnostic settings** followed by **Add diagnostic setting** to create diagnostic settings scoped to the logs and platform metrics emitted by your IoT hub.
-In Azure portal, you can select **Diagnostic settings** under **Monitoring** on the left-pane of your IoT hub followed by **Add diagnostic setting** to create diagnostic settings scoped to the logs and platform metrics emitted by your IoT hub.
The following screenshot shows a diagnostic setting for routing the resource log type *Connection Operations* and all platform metrics to a Log Analytics workspace.
-See [Create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for Azure IoT Hub are listed under [Resource logs in the Monitoring Azure IoT Hub data reference](monitor-iot-hub-reference.md#resource-logs). Be aware that events are emitted only for errors in some categories.
+For more information on creating a diagnostic setting using the Azure portal, CLI, or PowerShell, see [Create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md). When you create a diagnostic setting, you specify which categories of logs to collect. The categories for Azure IoT Hub are listed under [Resource logs in the Monitoring Azure IoT Hub data reference](monitor-iot-hub-reference.md#resource-logs). Events are emitted only for errors in some categories.
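As a hedged sketch using the older `Set-AzDiagnosticSetting` cmdlet from Az.Monitor (newer module versions replace it with `New-AzDiagnosticSetting`, which uses a different parameter set; the resource names below are placeholders), routing the *Connections* category and all platform metrics to a Log Analytics workspace might look like this:

```azurepowershell-interactive
$hubId = (Get-AzIotHub -ResourceGroupName "MyResourceGroup" -Name "MyIotHub").Id
$workspaceId = (Get-AzOperationalInsightsWorkspace -ResourceGroupName "MyResourceGroup" -Name "MyWorkspace").ResourceId
# Route the Connections resource log category plus all platform metrics to the workspace.
Set-AzDiagnosticSetting -ResourceId $hubId -WorkspaceId $workspaceId -Enabled $true -Category Connections -MetricCategory AllMetrics
```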
-When routing IoT Hub platform metrics to other locations, be aware that:
+When routing IoT Hub platform metrics to other locations:
-- The following platform metrics are not exportable via diagnostic settings: *Connected devices (preview)* and *Total devices (preview)*.
+- These platform metrics aren't exportable via diagnostic settings: *Connected devices* and *Total devices*.
-- Multi-dimensional metrics, for example some [routing metrics](monitor-iot-hub-reference.md#routing-metrics), are currently exported as flattened single dimensional metrics aggregated across dimension values. For more detail, see [Exporting platform metrics to other locations](../azure-monitor/essentials/metrics-supported.md#exporting-platform-metrics-to-other-locations).
+- Multi-dimensional metrics, for example some [routing metrics](monitor-iot-hub-reference.md#routing-metrics), are currently exported as flattened single dimensional metrics aggregated across dimension values. For more information, see [Exporting platform metrics to other locations](../azure-monitor/essentials/metrics-supported.md#exporting-platform-metrics-to-other-locations).
## Analyzing metrics
-You can analyze metrics for Azure IoT Hub with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool.
+You can analyze metrics for Azure IoT Hub with metrics from other Azure services using metrics explorer. For more information on this tool, see [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md).
-In Azure portal, you can select **Metrics** under **Monitoring** on the left-pane of your IoT hub to open metrics explorer scoped, by default, to the platform metrics emitted by your IoT hub:
+To open metrics explorer, go to the Azure portal and open your IoT hub, then select **Metrics** under **Monitoring**. This explorer is scoped, by default, to the platform metrics emitted by your IoT hub.
For a list of the platform metrics collected for Azure IoT Hub, see [Metrics in the Monitoring Azure IoT Hub data reference](monitor-iot-hub-reference.md#metrics). For a list of the platform metrics collected for all Azure services, see [Supported metrics with Azure Monitor](../azure-monitor/essentials/metrics-supported.md).
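For example, a minimal sketch that pulls one platform metric with Azure PowerShell (the metric name `connectedDeviceCount` and the resource names are assumptions to check against the metrics reference above):

```azurepowershell-interactive
$hubId = (Get-AzIotHub -ResourceGroupName "MyResourceGroup" -Name "MyIotHub").Id
# Retrieve five-minute averages of the connected-devices metric over the default time range.
Get-AzMetric -ResourceId $hubId -MetricName "connectedDeviceCount" -TimeGrain 00:05:00 -AggregationType Average
```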
Data in Azure Monitor Logs is stored in tables where each table has its own set
To route data to Azure Monitor Logs, you must create a diagnostic setting to send resource logs or platform metrics to a Log Analytics workspace. To learn more, see [Collection and routing](#collection-and-routing).
-In Azure portal, you can select **Logs** under **Monitoring** on the left-pane of your IoT hub to perform Log Analytics queries scoped, by default, to the logs and metrics collected in Azure Monitor Logs for your IoT hub.
+To perform Log Analytics, go to the Azure portal and open your IoT hub, then select **Logs** under **Monitoring**. These Log Analytics queries are scoped, by default, to the logs and metrics collected in Azure Monitor Logs for your IoT hub.
For a list of the tables used by Azure Monitor Logs and queryable by Log Analytics, see [Azure Monitor Logs tables in the Monitoring Azure IoT Hub data reference](monitor-iot-hub-reference.md#azure-monitor-logs-tables).
-All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../azure-monitor/essentials/resource-logs-schema.md#top-level-common-schema). You can find the schema and categories of resource logs collected for Azure IoT Hub in [Resource logs in the Monitoring Azure IoT Hub data reference](monitor-iot-hub-reference.md#resource-logs). Be aware that events are emitted only for errors in some categories.
+All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../azure-monitor/essentials/resource-logs-schema.md#top-level-common-schema). You can find the schema and categories of resource logs collected for Azure IoT Hub in [Resource logs in the Monitoring Azure IoT Hub data reference](monitor-iot-hub-reference.md#resource-logs). Events are emitted only for errors in some categories.
-The [Activity log](../azure-monitor/essentials/activity-log.md) is a platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
+The [Activity log](../azure-monitor/essentials/activity-log.md) is a platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do more complex queries using Log Analytics.
-When routing IoT Hub platform metrics to Azure Monitor Logs, be aware that:
+When routing IoT Hub platform metrics to Azure Monitor Logs:
-- The following platform metrics are not exportable via diagnostic settings: *Connected devices (preview)* and *Total devices (preview)*.
+- The following platform metrics aren't exportable via diagnostic settings: *Connected devices* and *Total devices*.
- Multi-dimensional metrics, for example some [routing metrics](monitor-iot-hub-reference.md#routing-metrics), are currently exported as flattened single dimensional metrics aggregated across dimension values. For more detail, see [Exporting platform metrics to other locations](../azure-monitor/essentials/metrics-supported.md#exporting-platform-metrics-to-other-locations).
-For some common queries with IoT Hub, see [Sample Kusto queries](#sample-kusto-queries). For detailed information on using Log Analytics queries, see [Overview of log queries in Azure Monitor](../azure-monitor/logs/log-query-overview.md).
+For common queries with IoT Hub, see [Sample Kusto queries](#sample-kusto-queries). For more information on using Log Analytics queries, see [Overview of log queries in Azure Monitor](../azure-monitor/logs/log-query-overview.md).
### SDK version in IoT Hub logs
-Some operations in IoT Hub resource logs return an `sdkVersion` property in their `properties` object. For these operations, when a device or backend app is using one of the Azure IoT SDKs, this property contains information about the SDK being used, the SDK version, and the platform on which the SDK is running. The following example shows the `sdkVersion` property emitted for a [`deviceConnect`](monitor-iot-hub-reference.md#connections) operation when using the Node.js device SDK: `"azure-iot-device/1.17.1 (node v10.16.0; Windows_NT 10.0.18363; x64)"`. Here's an example of the value emitted for the .NET (C#) SDK: `".NET/1.21.2 (.NET Framework 4.8.4200.0; Microsoft Windows 10.0.17763 WindowsProduct:0x00000004; X86)"`.
+Some operations in IoT Hub resource logs return an `sdkVersion` property in their `properties` object. For these operations, when a device or backend app is using one of the Azure IoT SDKs, this property contains information about the SDK being used, the SDK version, and the platform on which the SDK is running.
+
+The following examples show the `sdkVersion` property emitted for a [`deviceConnect`](monitor-iot-hub-reference.md#connections) operation using:
+
+* The Node.js device SDK: `"azure-iot-device/1.17.1 (node v10.16.0; Windows_NT 10.0.18363; x64)"`
+* The .NET (C#) SDK: `".NET/1.21.2 (.NET Framework 4.8.4200.0; Microsoft Windows 10.0.17763 WindowsProduct:0x00000004; X86)"`.
The following table shows the SDK name used for different Azure IoT SDKs:
AzureDiagnostics
### Sample Kusto queries
-> [!IMPORTANT]
-> When you select **Logs** from the IoT hub menu, Log Analytics is opened with the query scope set to the current IoT hub. This means that log queries will only include data from that resource. If you want to run a query that includes data from other IoT hubs or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](../azure-monitor/logs/scope.md) for details.
+Use the following [Kusto](/azure/data-explorer/kusto/query/) queries to help you monitor your IoT hub.
-Following are queries that you can use to help you monitor your IoT hub.
+> [!IMPORTANT]
+> Selecting **Logs** from the **IoT Hub** menu opens **Log Analytics** and includes data solely from your IoT hub resource. For queries that include data from other IoT hubs or Azure services, select **Logs** from the [**Azure Monitor** menu](https://portal.azure.com/#view/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/~/logs). For more information, see [Log query scope and time range in Azure Monitor Log Analytics](../azure-monitor/logs/scope.md).
-- Connectivity Errors: Identify device connection errors.
+- **Connectivity Errors**: Identify device connection errors.
```kusto AzureDiagnostics
Following are queries that you can use to help you monitor your IoT hub.
| where Category == "Connections" and Level == "Error" ``` -- Throttling Errors: Identify devices that made the most requests resulting in throttling errors.
+- **Throttling Errors**: Identify devices that made the most requests resulting in throttling errors.
```kusto AzureDiagnostics
Following are queries that you can use to help you monitor your IoT hub.
| order by count_ desc ``` -- Dead Endpoints: Identify dead or unhealthy endpoints by the number times the issue was reported, as well as the reason why.
+- **Dead Endpoints**: Identify dead or unhealthy endpoints by the number of times the issue was reported, and the reason why.
```kusto AzureDiagnostics
Following are queries that you can use to help you monitor your IoT hub.
| order by count_ desc ``` -- Error summary: Count of errors across all operations by type.
+- **Error summary**: Count of errors across all operations by type.
```kusto AzureDiagnostics
Following are queries that you can use to help you monitor your IoT hub.
| summarize count() by ResultType, ResultDescription, Category, _ResourceId ``` -- Recently connected devices: List of devices that IoT Hub saw connect in the specified time period.
+- **Recently connected devices**: List of devices that IoT Hub saw connect in the specified time period.
```kusto AzureDiagnostics
Following are queries that you can use to help you monitor your IoT hub.
| summarize max(TimeGenerated) by DeviceId, _ResourceId ``` -- Connection events for a specific device: All connection events logged for a specific device (*test-device*).
+- **Connection events for a specific device**: All connection events logged for a specific device (*test-device*).
```kusto AzureDiagnostics
Following are queries that you can use to help you monitor your IoT hub.
| where DeviceId == "test-device" ``` -- SDK version of devices: List of devices and their SDK versions for device connections or device to cloud twin operations.
+- **SDK version of devices**: List of devices and their SDK versions for device connections or device to cloud twin operations.
```kusto AzureDiagnostics
Following are queries that you can use to help you monitor your IoT hub.
### Read logs from Azure Event Hubs
-After you set up event logging through diagnostics settings, you can create applications that read out the logs so that you can take action based on the information in them. This sample code retrieves logs from an event hub:
+After you set up event logging through diagnostics settings, you can create applications that read out the logs so that you can take action based on the information in them. The following sample code retrieves logs from an event hub.
```csharp class Program
class Program
Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](../azure-monitor/alerts/alerts-metric-overview.md), [logs](../azure-monitor/alerts/alerts-unified-log.md), and the [activity log](../azure-monitor/alerts/activity-log-alerts.md). Different types of alerts have benefits and drawbacks.
-When creating an alert rule based on platform metrics, be aware that for IoT Hub platform metrics that are collected in units of count, some aggregations may not be available or usable. To learn more, see [Supported aggregations in the Monitoring Azure IoT Hub data reference](monitor-iot-hub-reference.md#supported-aggregations).
+When you create an alert rule based on platform metrics (collected in units of count), some aggregations may not be available or usable. For more information, see [Supported aggregations](monitor-iot-hub-reference.md#supported-aggregations) in **Monitoring Azure IoT Hub data reference**.
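As an illustration, a hedged Azure PowerShell sketch of a metric alert rule (the metric name, threshold, and resource names are assumptions; confirm the aggregation is supported before relying on it):

```azurepowershell-interactive
$hubId = (Get-AzIotHub -ResourceGroupName "MyResourceGroup" -Name "MyIotHub").Id
# Fire when the average connected-device count drops below 10 over a five-minute window.
$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "connectedDeviceCount" -TimeAggregation Average -Operator LessThan -Threshold 10
Add-AzMetricAlertRuleV2 -Name "low-connected-devices" -ResourceGroupName "MyResourceGroup" -TargetResourceId $hubId -Condition $criteria -WindowSize 00:05:00 -Frequency 00:05:00 -Severity 2
```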
## Monitor per-device disconnects with Event Grid
-Azure Monitor provides a metric, *Connected devices*, that you can use to monitor the number of devices connected to your IoT Hub and trigger an alert when number of connected devices drops below a threshold value. Azure Monitor also emits events in the [connections category](monitor-iot-hub-reference.md#connections) that you can use to monitor device connects, disconnects, and connection errors. While these may be sufficient for some scenarios, [Azure Event Grid](../event-grid/index.yml) provides a low-latency, per-device monitoring solution that you can use to track device connections for critical devices and infrastructure.
+Azure Monitor provides a metric, *Connected devices*, that you can use to monitor the number of devices connected to your IoT Hub. This metric triggers an alert when the number of connected devices drops below a threshold value. Azure Monitor also emits events in the [connections category](monitor-iot-hub-reference.md#connections) that you can use to monitor device connects, disconnects, and connection errors. While these events may be sufficient for some scenarios, [Azure Event Grid](../event-grid/index.yml) provides a low-latency, per-device monitoring solution that you can use to track device connections for critical devices and infrastructure.
-With Event Grid, you can subscribe to the IoT Hub [**DeviceConnected** and **DeviceDisconnected** events](iot-hub-event-grid.md#event-types) to trigger alerts and monitor device connection state. Event Grid provides much lower event latency than Azure Monitor, and you can monitor on a per-device basis, rather than for the total number of connected devices. These factors make Event Grid the preferred method for monitoring connections for critical devices and infrastructure. We highly recommend using Event Grid to monitor device connections in production environments.
+With Event Grid, you can subscribe to the IoT Hub [**DeviceConnected** and **DeviceDisconnected** events](iot-hub-event-grid.md#event-types) to trigger alerts and monitor device connection state. Event Grid provides a much lower event latency than Azure Monitor, so you can monitor on a per-device basis rather than for all connected devices. These factors make Event Grid the preferred method for monitoring connections for critical devices and infrastructure. We highly recommend using Event Grid to monitor device connections in production environments.
-For more detailed information about monitoring device connectivity with Event Grid and Azure Monitor, see [Monitor, diagnose, and troubleshoot device connectivity to Azure IoT Hub](iot-hub-troubleshoot-connectivity.md).
+For more information about monitoring device connectivity with Event Grid and Azure Monitor, see [Monitor, diagnose, and troubleshoot device connectivity to Azure IoT Hub](iot-hub-troubleshoot-connectivity.md).
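For example, a hedged sketch that subscribes a webhook to these events with Azure PowerShell (the endpoint URL and resource names are placeholders):

```azurepowershell-interactive
$hubId = (Get-AzIotHub -ResourceGroupName "MyResourceGroup" -Name "MyIotHub").Id
# Subscribe a webhook endpoint to per-device connect and disconnect events.
New-AzEventGridSubscription -ResourceId $hubId -EventSubscriptionName "device-connection-state" -Endpoint "https://contoso.example.com/api/connection-events" -IncludedEventType "Microsoft.Devices.DeviceConnected", "Microsoft.Devices.DeviceDisconnected"
```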
## Next steps -- See [Monitoring Azure IoT Hub data reference](monitor-iot-hub-reference.md) for a reference of the metrics, logs, and other important values created by [service name].
+- [Monitoring Azure IoT Hub data reference](monitor-iot-hub-reference.md): a reference of the metrics, logs, and other important values created by IoT Hub.
-- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
+- [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md): monitoring Azure resources.
-- See [Monitor, diagnose, and troubleshoot device connectivity to Azure IoT Hub](iot-hub-troubleshoot-connectivity.md) for details on monitoring device connectivity.
+- [Monitor, diagnose, and troubleshoot device connectivity to Azure IoT Hub](iot-hub-troubleshoot-connectivity.md): monitoring device connectivity.
key-vault Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/backup-restore.md
az keyvault backup start --hsm-name mhsmdemo2 --storage-account-name mhsmdemobac
Full restore allows you to completely restore the contents of the HSM with a previous backup, including all keys, versions, attributes, tags, and role assignments. Everything currently stored in the HSM will be wiped out, and it will return to the same state it was in when the source backup was created. > [!IMPORTANT]
-> Full restore is a very destructive and disruptive operation. Therefore it is mandatory to have completed a full backup within last 30 minutes before a `restore` operation can be performed.
+> Full restore is a very destructive and disruptive operation. Therefore, it is mandatory to have completed a full backup at least 30 minutes before a `restore` operation can be performed.
Restore is a data plane operation. The caller starting the restore operation must have permission to perform dataAction **Microsoft.KeyVault/managedHsm/restore/start/action**. The source HSM where the backup was created and the destination HSM where the restore will be performed **must** have the same Security Domain. See more [about Managed HSM Security Domain](security-domain.md).
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/quick-create-powershell.md
In this quickstart, you will create and activate an Azure Key Vault Managed HSM
If you do not have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+The service is available in limited regions. To learn more about availability, see [Azure Dedicated HSM purchase options](https://azure.microsoft.com/pricing/details/azure-dedicated-hsm).
+ [!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)] If you choose to install and use PowerShell locally, this tutorial requires Azure PowerShell module version 1.0.0 or later. Type `$PSVersionTable.PSVersion` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you are running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
Login-AzAccount
## Create a resource group
-A resource group is a logical container into which Azure resources are deployed and managed. Use the Azure PowerShell [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) cmdlet to create a resource group named *myResourceGroup* in the *westus3* location.
+A resource group is a logical container into which Azure resources are deployed and managed. Use the Azure PowerShell [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) cmdlet to create a resource group named *myResourceGroup* in the *eastus* location.
```azurepowershell-interactive
-New-AzResourceGroup -Name "myResourceGroup" -Location "westus3"
+New-AzResourceGroup -Name "myResourceGroup" -Location "eastus"
``` ## Get your principal ID
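One way to retrieve the object ID of the signed-in user is shown in the following sketch; it assumes a recent Az.Resources module and that the signed-in account will administer the HSM.

```azurepowershell-interactive
# Get the Azure AD object ID of the currently signed-in user.
$principalId = (Get-AzADUser -SignedIn).Id
$principalId
```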
Use the Azure PowerShell [New-AzKeyVaultManagedHsm](/powershell/module/az.keyvau
- Your principal ID: Pass the Azure Active Directory principal ID that you obtained in the last section to the "Administrator" parameter. ```azurepowershell-interactive
-New-AzKeyVaultManagedHsm -Name "<your-unique-managed-hsm-name>" -ResourceGroupName "myResourceGroup" -Location "westus3" -Administrator "<your-principal-ID>"
+New-AzKeyVaultManagedHsm -Name "<your-unique-managed-hsm-name>" -ResourceGroupName "myResourceGroup" -Location "eastus" -Administrator "<your-principal-ID>"
``` > [!NOTE] > The create command can take a few minutes. Once it returns successfully you are ready to activate your HSM.
lab-services Classroom Labs Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/classroom-labs-scenarios.md
Last updated 01/04/2022
[!INCLUDE [preview note](./includes/lab-services-new-update-note.md)]
-Azure Labs Services allows educators (teachers, professors, trainers, or teaching assistants, etc.) to quickly and easily create an online lab to provision pre-configured learning environments for the trainees. Each trainee would be able use identical and isolated environments for the training. Policies can be applied to ensure that the training environments are available to each trainee only when they need them and contain enough resources - such as virtual machines - required for the training.
+Azure Lab Services allows educators (teachers, professors, trainers, or teaching assistants, etc.) to quickly and easily create an online lab to provision pre-configured learning environments for the trainees. Each trainee would be able to use identical and isolated environments for the training. Policies can be applied to ensure that the training environments are available to each trainee only when they need them and contain enough resources - such as virtual machines - required for the training.
![Lab](./media/classroom-labs-scenarios/classroom.png)
load-testing How To Test Secured Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-test-secured-endpoints.md
+
+ Title: Load test secured endpoints
+description: Learn how to load test secured endpoints with Azure Load Testing. Use shared secrets, credentials, or client certificates for load testing applications that require authentication.
+++++ Last updated : 09/28/2022+++
+# Load test secured endpoints with Azure Load Testing Preview
+
+In this article, you learn how to load test applications with Azure Load Testing Preview that require authentication. Azure Load Testing enables you to [authenticate with endpoints by using shared secrets or credentials](#authenticate-with-a-shared-secret-or-credentials), or to [authenticate with client certificates](#authenticate-with-client-certificates).
+
+> [!IMPORTANT]
+> Azure Load Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+## Authenticate with a shared secret or credentials
+
+In this scenario, the application endpoint requires that you use a shared secret, such as an access token, an API key, or user credentials to authenticate. In the JMeter script, you have to provide this security information with each application request. For example, to load test a web endpoint that uses OAuth 2.0, you add an `Authorization` header, which contains the access token, to the HTTP request.
+
+To avoid storing, and disclosing, security information in the JMeter script, Azure Load Testing enables you to securely store secrets in Azure Key Vault or in the CI/CD secrets store. By using a custom JMeter function `GetSecret`, you can retrieve the secret value and pass it to the application endpoint.
+
+The following diagram shows how to use shared secrets or credentials to authenticate with an application endpoint in your load test.
++
+1. Add the security information in a secrets store in either of two ways:
+
+ * Add the secret information in Azure Key Vault. Follow the steps in [Parameterize load tests with secrets](./how-to-parameterize-load-tests.md) to store a secret and authorize your load testing resource to read its value.
+
+ * Add the secret information as a secret in CI/CD ([GitHub Actions secrets](https://docs.github.com/en/actions/security-guides/encrypted-secrets) or [Azure Pipelines secret variables](/azure/devops/pipelines/process/set-secret-variables)).
+
+1. Add the secret to the load test configuration:
+
+ # [Azure portal](#tab/portal)
+
+ To add a secret to your load test in the Azure portal:
+
+ 1. Navigate to your load testing resource in the Azure portal. If you don't have a load test yet, [create a new load test using a JMeter script](./how-to-create-and-run-load-test-with-jmeter-script.md).
+ 1. On the left pane, select **Tests** to view the list of load tests.
+ 1. Select your test from the list, and then select **Edit** to edit the load test configuration.
+
+ :::image type="content" source="./media/how-to-test-secured-endpoints/edit-load-test.png" alt-text="Screenshot that shows how to edit a load test in the Azure portal.":::
+
+ 1. On the **Parameters** tab, enter the details of the secret.
+
+ | Field | Value |
+ | -- | -- |
+ | **Name** | Name of the secret. You'll provide this name to the `GetSecret` function to retrieve the secret value in the JMeter script. |
+ | **Value** | Matches the Azure Key Vault **Secret identifier**. |
+
+ :::image type="content" source="media/how-to-test-secured-endpoints/load-test-secrets.png" alt-text="Screenshot that shows how to add secrets to a load test in the Azure portal.":::
+
+ 1. Select **Apply** to save the load test configuration changes.
+
+ # [GitHub Actions](#tab/github)
+
+ To add a secret to your load test in GitHub Actions, update the GitHub Actions workflow YAML file. In the workflow, add a `secrets` parameter to the `azure/load-testing` action.
+
+ | Field | Value |
+ | -- | -- |
+ | **name** | Name of the secret. You'll provide this name to the `GetSecret` function to retrieve the secret value in the JMeter script. |
+ | **value** | References the GitHub Actions secret name. |
+
+ The following code snippet gives an example of how to configure a load test secret in GitHub Actions.
+
+ ```yaml
+ - name: 'Azure Load Testing'
+ uses: azure/load-testing@v1
+ with:
+ loadtestConfigFile: 'SampleApp.yaml'
+ loadtestResource: 'MyTest'
+ resourceGroup: 'loadtests-rg'
+ secrets: |
+ [
+ {
+ "name": "appToken",
+ "value": "${{ secrets.APP_TOKEN }}"
+ }
+ ]
+ ```
+
+ # [Azure Pipelines](#tab/pipelines)
+
+ To add a secret to your load test in Azure Pipelines, update the Azure Pipelines definition file. In the pipeline, add a `secrets` parameter to the `AzureLoadTest` task.
+
+ | Field | Value |
+ | -- | -- |
+ | **name** | Name of the secret. You'll provide this name to the `GetSecret` function to retrieve the secret value in the JMeter script. |
+ | **value** | References the Azure Pipelines secret variable name. |
+
+ The following code snippet gives an example of how to configure a load test secret in Azure Pipelines.
+
+ ```yaml
+ - task: AzureLoadTest@1
+ inputs:
+ azureSubscription: 'MyAzureLoadTestingRG'
+ loadTestConfigFile: 'SampleApp.yaml'
+ loadTestResource: 'MyTest'
+ resourceGroup: 'loadtests-rg'
+ secrets: |
+ [
+ {
+ "name": "appToken",
+ "value": "$(appToken)"
+ }
+ ]
+ ```
+
+
+1. Update the JMeter script to retrieve the secret value:
+
+ 1. Create a user-defined variable that retrieves the secret value with the `GetSecret` custom function:
+ <!-- Add screenshot -->
+
+ 1. Update the JMeter sampler component to pass the secret in the request. For example, to provide an OAuth2 access token, you configure the `Authorization` HTTP header:
+ <!-- Add screenshot -->
+
+When you now run your load test, the JMeter script can retrieve the secret information from the secrets store and authenticate with the application endpoint.
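+For reference, here's a minimal Azure PowerShell sketch of the first step of this procedure, storing an access token as an Azure Key Vault secret. The vault name, secret name, and token value are placeholders.
+
+```azurepowershell-interactive
+# Store the shared secret (for example, an access token) in Azure Key Vault.
+$tokenValue = ConvertTo-SecureString -String "<your-access-token>" -AsPlainText -Force
+$secret = Set-AzKeyVaultSecret -VaultName "akv-contoso" -Name "appToken" -SecretValue $tokenValue
+
+# The secret identifier is the value you reference in the load test configuration.
+$secret.Id
+```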
+
+## Authenticate with client certificates
+
+In this scenario, the application endpoint requires that you use a client certificate to authenticate. Azure Load Testing supports certificates in Public-Key Cryptography Standards #12 (PKCS12) format. You can use only one client certificate in a load test.
+
+To avoid storing, and disclosing, the client certificate alongside the JMeter script, Azure Load Testing uses Azure Key Vault to store the certificate. When you run the load test, Azure Load Testing passes the certificate to JMeter, which uses it to authenticate with the application endpoint. You don't have to update the JMeter script to use the client certificate.
+
+The following diagram shows how to use a client certificate to authenticate with an application endpoint in your load test.
++
+1. Follow the steps in [Import a certificate](/azure/key-vault/certificates/tutorial-import-certificate) to store your certificate in Azure Key Vault.
+
+ > [!IMPORTANT]
+ > Azure Load Testing only supports PKCS12 certificates. Upload the client certificate in PFX file format.
+
+1. Verify that your load testing resource has permissions to retrieve the certificate from your key vault.
+
+ Azure Load Testing retrieves the certificate as a secret to ensure that the private key for the certificate is available. [Assign the Get secret permission to your load testing resource](./how-to-use-a-managed-identity.md#grant-access-to-your-azure-key-vault) in Azure Key Vault.
+
+1. Add the certificate to the load test configuration:
+
+ # [Azure portal](#tab/portal)
+
+ To add a client certificate to your load test in the Azure portal:
+
+ 1. Navigate to your load testing resource in the Azure portal. If you don't have a load test yet, [create a new load test using a JMeter script](./how-to-create-and-run-load-test-with-jmeter-script.md).
+ 1. On the left pane, select **Tests** to view the list of load tests.
+ 1. Select your test from the list, and then select **Edit** to edit the load test configuration.
+
+ :::image type="content" source="./media/how-to-test-secured-endpoints/edit-load-test.png" alt-text="Screenshot that shows how to edit a load test in the Azure portal.":::
+
+ 1. On the **Parameters** tab, enter the details of the certificate.
+
+ | Field | Value |
+ | -- | -- |
+ | **Name** | Name of the certificate. |
+ | **Value** | Matches the Azure Key Vault **Secret identifier** of the certificate. |
+
+ :::image type="content" source="media/how-to-test-secured-endpoints/load-test-certificates.png" alt-text="Screenshot that shows how to add a certificate to a load test in the Azure portal.":::
+
+ 1. Select **Apply** to save the load test configuration changes.
+
+ # [GitHub Actions](#tab/github)
+
+ To add a client certificate for your load test, update the `certificates` property in the [load test YAML configuration file](./reference-test-config-yaml.md).
+
+ | Field | Value |
+ | -- | -- |
+ | **name** | Name of the client certificate. |
+ | **value** | Matches the Azure Key Vault **Secret identifier** of the certificate. |
+
+ ```yml
+ certificates:
+ - name: <my-certificate-name>
+ value: <my-keyvault-secret-ID>
+ ```
+
+ # [Azure Pipelines](#tab/pipelines)
+
+ To add a client certificate for your load test, update the `certificates` property in the [load test YAML configuration file](./reference-test-config-yaml.md).
+
+ | Field | Value |
+ | -- | -- |
+ | **name** | Name of the client certificate. |
+ | **value** | Matches the Azure Key Vault **Secret identifier** of the certificate. |
+
+ ```yml
+ certificates:
+ - name: <my-certificate-name>
+ value: <my-keyvault-secret-ID>
+ ```
+
+
+When you now run your load test, Azure Load Testing retrieves the client certificate from Azure Key Vault, and injects it in the JMeter web requests.
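+For reference, here's a minimal Azure PowerShell sketch of the first step of this procedure, importing a PFX client certificate into Azure Key Vault. The vault name, certificate name, file path, and password are placeholders.
+
+```azurepowershell-interactive
+# Import a PKCS 12 (.pfx) client certificate into Azure Key Vault.
+$pfxPassword = ConvertTo-SecureString -String "<pfx-password>" -AsPlainText -Force
+$certificate = Import-AzKeyVaultCertificate -VaultName "akv-contoso" -Name "my-certificate" -FilePath "C:\certs\client.pfx" -Password $pfxPassword
+
+# The certificate's secret identifier is the value you reference in the load test configuration.
+$certificate.SecretId
+```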
+
+## Next steps
+
+* Learn more about [how to parameterize a load test](./how-to-parameterize-load-tests.md).
load-testing Reference Test Config Yaml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/reference-test-config-yaml.md
A test configuration uses the following keys:
| `splitAllCSVs` | boolean | False | Split the input CSV files evenly across all test engine instances. For more information, see [Read a CSV file in load tests](./how-to-read-csv-data.md#split-csv-input-data-across-test-engines). | | `secrets` | object | | List of secrets that the Apache JMeter script references. | | `secrets.name` | string | | Name of the secret. This name should match the secret name that you use in the Apache JMeter script. |
-| `secrets.value` | string | | URI for the Azure Key Vault secret. |
+| `secrets.value` | string | | URI (secret identifier) for the Azure Key Vault secret. |
| `env` | object | | List of environment variables that the Apache JMeter script references. | | `env.name` | string | | Name of the environment variable. This name should match the environment variable name that you use in the Apache JMeter script. | | `env.value` | string | | Value of the environment variable. |
+| `certificates` | object | | List of client certificates for authenticating with application endpoints in the JMeter script. |
+| `certificates.name` | string | | Name of the certificate. |
+| `certificates.value` | string | | URI (secret identifier) for the certificate in Azure Key Vault. |
| `keyVaultReferenceIdentity` | string | | Resource ID of the user-assigned managed identity for accessing the secrets from your Azure Key Vault. If you use a system-managed identity, this information isn't needed. Make sure to grant this user-assigned identity access to your Azure key vault. | The following YAML snippet contains an example load test configuration:
env:
value: my-value secrets: - name: my-secret
- value: https://akv-contoso.vault.azure.net/secrets/MySecret
+ value: https://akv-contoso.vault.azure.net/secrets/MySecret/abc1234567890def12345
+certificates:
+ - name: my-certificate
+ value: https://akv-contoso.vault.azure.net/certificates/MyCertificate/abc1234567890def12345
keyVaultReferenceIdentity: /subscriptions/abcdef01-2345-6789-0abc-def012345678/resourceGroups/sample-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/sample-identity ``` ## Next steps
-Learn how to build [automated regression testing in your CI/CD workflow](tutorial-cicd-azure-pipelines.md).
+- Learn how to build [automated regression testing in your CI/CD workflow](tutorial-cicd-azure-pipelines.md).
+- Learn how to [parameterize load tests with secrets and environment variables](./how-to-parameterize-load-tests.md).
+- Learn how to [load test secured endpoints](./how-to-test-secured-endpoints.md).
logic-apps Export From Ise To Standard Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/export-from-ise-to-standard-logic-app.md
ms.suite: integration Previously updated : 09/14/2022 Last updated : 09/28/2022+ #Customer intent: As a developer, I want to export one or more ISE workflows to a Standard workflow.
This article provides information about the export process and shows how to expo
- An existing ISE with the logic app workflows that you want to export.
+- Contributor access to the ISE at the Azure subscription level, not just at the resource group level.
+ - To include and deploy managed connections in your workflows, you'll need an existing Azure resource group for deploying these connections. This option is recommended only for non-production environments. - Review and meet the requirements for [how to set up Visual Studio Code with the Azure Logic Apps (Standard) extension](create-single-tenant-workflows-visual-studio-code.md#prerequisites).
The following table describes these new folders and files added by the export pr
## Next steps -- [Run, test, and debug locally](create-single-tenant-workflows-visual-studio-code.md#run-test-and-debug-locally)
+- [Run, test, and debug locally](create-single-tenant-workflows-visual-studio-code.md#run-test-and-debug-locally)
logic-apps Sample Logic Apps Cli Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/sample-logic-apps-cli-script.md
ms.suite: integration
Previously updated : 07/30/2020 Last updated : 08/23/2022 # Azure CLI script sample - create a logic app
logic-apps Support Non Unicode Character Encoding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/support-non-unicode-character-encoding.md
Last updated 08/20/2022
# Support non-Unicode character encoding in Azure Logic Apps + When you work with text payloads, Azure Logic Apps infers the text is encoded in a Unicode format, such as UTF-8. You might have problems receiving, sending, or processing characters with different encodings in your workflow. For example, you might get corrupted characters in flat files when working with legacy systems that don't support Unicode. To work with text that has other character encoding, apply base64 encoding to the non-Unicode payload. This step prevents Logic Apps from assuming the text is in UTF-8 format. You can then convert any .NET-supported encoding to UTF-8 using Azure Functions.
logic-apps Workflow Definition Language Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/workflow-definition-language-functions-reference.md
Last updated 09/20/2022
# Reference guide to workflow expression functions in Azure Logic Apps and Power Automate + For workflow definitions in [Azure Logic Apps](../logic-apps/logic-apps-overview.md) and [Power Automate](/power-automate/getting-started), some [expressions](logic-apps-workflow-definition-language.md#expressions) get their values from runtime actions that might not yet exist when your workflow starts running. To reference or process the values in these expressions, you can use *expression functions* provided by the [Workflow Definition Language](logic-apps-workflow-definition-language.md). > [!NOTE]
machine-learning Concept Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-customer-managed-keys.md
Azure Machine Learning is built on top of multiple Azure services. While the dat
[!INCLUDE [machine-learning-customer-managed-keys.md](../../includes/machine-learning-customer-managed-keys.md)]
-In addition to customer-managed keys, Azure Machine Learning also provides a [hbi_workspace flag](/python/api/azureml-core/azureml.core.workspace%28class%29#create-name--auth-none--subscription-id-none--resource-group-none--location-none--create-resource-group-true--sku--basicfriendly-name-none--storage-account-none--key-vault-none--app-insights-none--container-registry-none--cmk-keyvault-none--resource-cmk-uri-none--hbi-workspace-false--default-cpu-compute-target-none--default-gpu-compute-target-none--exist-ok-false--show-output-true-). Enabling this flag reduces the amount of data Microsoft collects for diagnostic purposes and enables [extra encryption in Microsoft-managed environments](../security/fundamentals/encryption-atrest.md). This flag also enables the following behaviors:
+In addition to customer-managed keys, Azure Machine Learning also provides a [hbi_workspace flag](/python/api/azure-ai-ml/azure.ai.ml.entities.workspace). Enabling this flag reduces the amount of data Microsoft collects for diagnostic purposes and enables [extra encryption in Microsoft-managed environments](../security/fundamentals/encryption-atrest.md). This flag also enables the following behaviors:
* Starts encrypting the local scratch disk in your Azure Machine Learning compute cluster, provided you haven't created any previous clusters in that subscription. Otherwise, you need to raise a support ticket to enable encryption of the scratch disk of your compute clusters. * Cleans up your local scratch disk between jobs.
machine-learning Concept Data Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data-encryption.md
For an example of creating a workspace using an existing Azure Container Registr
You may encrypt a deployed Azure Container Instance (ACI) resource using customer-managed keys. The customer-managed key used for ACI can be stored in the Azure Key Vault for your workspace. For information on generating a key, see [Encrypt data with a customer-managed key](../container-instances/container-instances-encrypt-data.md#generate-a-new-key). + To use the key when deploying a model to Azure Container Instance, create a new deployment configuration using `AciWebservice.deploy_configuration()`. Provide the key information using the following parameters: * `cmk_vault_base_url`: The URL of the key vault that contains the key.
machine-learning Concept Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-environments.md
For more information on the base images, see the following links:
## Next steps * Learn how to [create and use environments](how-to-use-environments.md) in Azure Machine Learning.
-* See the Python SDK reference documentation for the [environment class](/python/api/azureml-core/azureml.core.environment%28class%29).
+* See the Python SDK reference documentation for the [environment class](/python/api/azure-ai-ml/azure.ai.ml.entities.environment).
machine-learning Concept Model Management And Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-model-management-and-deployment.md
Microsoft Power BI supports using machine learning models for data analytics. Fo
Machine Learning gives you the capability to track the end-to-end audit trail of all your machine learning assets by using metadata. For example: -- Machine Learning [integrates with Git](concept-train-model-git-integration.md) to track information on which repository, branch, and commit your code came from. - [Machine Learning datasets](how-to-create-register-datasets.md) help you track, profile, and version data. - [Interpretability](how-to-machine-learning-interpretability.md) allows you to explain your models, meet regulatory compliance, and understand how models arrive at a result for specific input. - Machine Learning Job history stores a snapshot of the code, data, and computes used to train a model.
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-azureml-behind-firewall.md
These rule collections are described in more detail in [What are some Azure Fire
> [!TIP] > * AzureContainerRegistry.region is only needed for custom Docker images. Including small modifications (such as additional packages) to base images provided by Microsoft. > * MicrosoftContainerRegistry.region is only needed if you plan on using the _default Docker images provided by Microsoft_, and _enabling user-managed dependencies_.
- > * AzureKeyVault.region is only needed if your workspace was created with the [hbi_workspace](/python/api/azureml-core/azureml.core.workspace%28class%29#create-name--auth-none--subscription-id-none--resource-group-none--location-none--create-resource-group-true--sku--basicfriendly-name-none--storage-account-none--key-vault-none--app-insights-none--container-registry-none--cmk-keyvault-none--resource-cmk-uri-none--hbi-workspace-false--default-cpu-compute-target-none--default-gpu-compute-target-none--exist-ok-false--show-output-true-) flag enabled.
+ > * AzureKeyVault.region is only needed if your workspace was created with the [hbi_workspace](/python/api/azure-ai-ml/azure.ai.ml.entities.workspace) flag enabled.
> * For entries that contain `region`, replace with the Azure region that you're using. For example, `AzureContainerRegistry.westus`. 1. Add __Application rules__ for the following hosts:
The hosts in the following tables are owned by Microsoft, and provide services r
**Azure Machine Learning compute instance and compute cluster hosts** > [!TIP]
-> * The host for __Azure Key Vault__ is only needed if your workspace was created with the [hbi_workspace](/python/api/azureml-core/azureml.core.workspace%28class%29#create-name--auth-none--subscription-id-none--resource-group-none--location-none--create-resource-group-true--sku--basicfriendly-name-none--storage-account-none--key-vault-none--app-insights-none--container-registry-none--cmk-keyvault-none--resource-cmk-uri-none--hbi-workspace-false--default-cpu-compute-target-none--default-gpu-compute-target-none--exist-ok-false--show-output-true-) flag enabled.
+> * The host for __Azure Key Vault__ is only needed if your workspace was created with the [hbi_workspace](/python/api/azure-ai-ml/azure.ai.ml.entities.workspace) flag enabled.
> * Ports 8787 and 18881 for __compute instance__ are only needed when your Azure Machine workspace has a private endpoint. > * In the following table, replace `<storage>` with the name of the default storage account for your Azure Machine Learning workspace. > * Websocket communication must be allowed to the compute instance. If you block websocket traffic, Jupyter notebooks won't work correctly.
machine-learning How To Access Terminal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-terminal.md
Access all Git operations from the terminal. All Git files and folders will be s
> [!NOTE] > Add your files and folders anywhere under the **~/cloudfiles/code/Users** folder so they will be visible in all your Jupyter environments.
-Learn more about [cloning Git repositories into your workspace file system](concept-train-model-git-integration.md#clone-git-repositories-into-your-workspace-file-system).
- ## Install packages Install packages from a terminal window. Install Python packages into the **Python 3.8 - AzureML** environment. Install R packages into the **R** environment.
machine-learning How To Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-assign-roles.md
In this article, you learn how to manage access (authorization) to an Azure Mach
## Default roles
-Azure Machine Learning workspaces have a four built-in roles that are available by default. When adding users to a workspace, they can be assigned one of the built-in roles described below.
+Azure Machine Learning workspaces have five built-in roles that are available by default. When adding users to a workspace, they can be assigned one of the built-in roles described below.
| Role | Access level | | | | | **AzureML Data Scientist** | Can perform all actions within an Azure Machine Learning workspace, except for creating or deleting compute resources and modifying the workspace itself. |
+| **AzureML Compute Operator** | Can create, manage and access compute resources within a workspace.|
| **Reader** | Read-only actions in the workspace. Readers can list and view assets, including [datastore](how-to-access-data.md) credentials, in a workspace. Readers can't create or update these assets. | | **Contributor** | View, create, edit, or delete (where applicable) assets in a workspace. For example, contributors can create an experiment, create or attach a compute cluster, submit a run, and deploy a web service. | | **Owner** | Full access to the workspace, including the ability to view, create, edit, or delete (where applicable) assets in a workspace. Additionally, you can change role assignments. |
+You can combine the roles to grant different levels of access. For example, you can grant a workspace user both the **AzureML Data Scientist** and **AzureML Compute Operator** roles to permit the user to perform experiments while creating compute resources in a self-service manner.
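+As a sketch of that combination, the following Azure PowerShell commands assign both roles to a user at the workspace scope; the object ID and workspace resource ID are placeholders.
+
+```azurepowershell-interactive
+# Assign both built-in roles to the same user, scoped to the workspace.
+$userObjectId = "<user-object-id>"
+$workspaceId = "/subscriptions/<subscription-id>/resourceGroups/MyResourceGroup/providers/Microsoft.MachineLearningServices/workspaces/MyWorkspace"
+
+New-AzRoleAssignment -ObjectId $userObjectId -RoleDefinitionName "AzureML Data Scientist" -Scope $workspaceId
+New-AzRoleAssignment -ObjectId $userObjectId -RoleDefinitionName "AzureML Compute Operator" -Scope $workspaceId
+```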
+ > [!IMPORTANT] > Role access can be scoped to multiple levels in Azure. For example, someone with owner access to a workspace may not have owner access to the resource group that contains the workspace. For more information, see [How Azure RBAC works](../role-based-access-control/overview.md#how-azure-rbac-works).
You can use Azure AD security groups to manage access to workspaces. This approa
To use Azure AD security groups: 1. [Create a security group](../active-directory/fundamentals/active-directory-groups-create-azure-portal.md).
- 2. [Add a group owner](../active-directory/fundamentals/active-directory-accessmanagement-managing-group-owners.md). This user has permissions to add or remove group members. Note that the group owner is not required to be group member, or have direct RBAC role on the workspace.
+ 2. [Add a group owner](../active-directory/fundamentals/active-directory-accessmanagement-managing-group-owners.md). This user has permissions to add or remove group members. Note that the group owner isn't required to be group member, or have direct RBAC role on the workspace.
3. Assign the group an RBAC role on the workspace, such as AzureML Data Scientist, Reader or Contributor. 4. [Add group members](../active-directory/fundamentals/active-directory-groups-members-azure-portal.md). The members consequently gain access to the workspace.
You need to have permissions on the entire scope of your new role definition. Fo
## Use Azure Resource Manager templates for repeatability
-If you anticipate that you will need to recreate complex role assignments, an Azure Resource Manager template can be a big help. The [machine-learning-dependencies-role-assignment template](https://github.com/Azure/azure-quickstart-templates/tree/master//quickstarts/microsoft.machinelearningservices/machine-learning-dependencies-role-assignment) shows how role assignments can be specified in source code for reuse.
+If you anticipate that you'll need to recreate complex role assignments, an Azure Resource Manager template can be a significant help. The [machine-learning-dependencies-role-assignment template](https://github.com/Azure/azure-quickstart-templates/tree/master//quickstarts/microsoft.machinelearningservices/machine-learning-dependencies-role-assignment) shows how role assignments can be specified in source code for reuse.
## Common scenarios
A vendor quality assurance role can perform a customer quality assurance role, b
Here are a few things to be aware of while you use Azure role-based access control (Azure RBAC): -- When you create a resource in Azure, such as a workspace, you are not directly the owner of the resource. Your role is inherited from the highest scope role that you are authorized against in that subscription. As an example if you are a Network Administrator, and have the permissions to create a Machine Learning workspace, you would be assigned the Network Administrator role against that workspace, and not the Owner role.
+- When you create a resource in Azure, such as a workspace, you're not directly the owner of the resource. Your role is inherited from the highest scope role that you're authorized against in that subscription. As an example, if you're a Network Administrator and have the permissions to create a Machine Learning workspace, you would be assigned the Network Administrator role against that workspace, and not the Owner role.
- To perform quota operations in a workspace, you need subscription level permissions. This means setting either subscription level quota or workspace level quota for your managed compute resources can only happen if you have write permissions at the subscription scope.
machine-learning How To Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-image-models.md
image_object_detection_job = automl.image_object_detection(
training_data=my_training_data_input, validation_data=my_validation_data_input, target_column_name="label",
- primary_metric="mean_average_precision",
+ primary_metric=ObjectDetectionPrimaryMetrics.MEAN_AVERAGE_PRECISION,
tags={"my_custom_tag": "My custom value"}, )
machine-learning How To Configure Network Isolation With V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-network-isolation-with-v2.md
In this article, you'll learn about network isolation changes with our new v2 AP
## Prerequisites
-* The [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/install) or [Azure CLI extension for machine learning v1](reference-azure-machine-learning-cli.md).
+* The [Azure Machine Learning Python SDK v1](/python/api/overview/azure/ml/install) or [Azure CLI extension for machine learning v1](reference-azure-machine-learning-cli.md).
> [!IMPORTANT] > The v1 extension (`azure-cli-ml`) version must be 1.41.0 or greater. Use the `az version` command to view version information.
machine-learning How To Create Attach Compute Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-compute-cluster.md
In this article, learn how to:
* An Azure Machine Learning workspace. For more information, see [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).
-* The [Azure CLI extension for Machine Learning service (v2)](reference-azure-machine-learning-cli.md), [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro), or the [Azure Machine Learning Visual Studio Code extension](how-to-setup-vs-code.md).
+* The [Azure CLI extension for Machine Learning service (v2)](reference-azure-machine-learning-cli.md), [Azure Machine Learning Python SDK](/python/api/overview/azure/ai-ml-readme), or the [Azure Machine Learning Visual Studio Code extension](how-to-setup-vs-code.md).
* If using the Python SDK, [set up your development environment with a workspace](how-to-configure-environment.md). Once your environment is set up, attach to the workspace in your Python script:
To create a persistent Azure Machine Learning Compute resource in Python, specif
[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/resources/compute/compute.ipynb?name=cluster_basic)]
-You can also configure several advanced properties when you create Azure Machine Learning Compute. The properties allow you to create a persistent cluster of fixed size, or within an existing Azure Virtual Network in your subscription. See the [AmlCompute class](/python/api/azureml-core/azureml.core.compute.amlcompute.amlcompute) for details.
+You can also configure several advanced properties when you create Azure Machine Learning Compute. The properties allow you to create a persistent cluster of fixed size, or within an existing Azure Virtual Network in your subscription. See the [AmlCompute class](/python/api/azure-ai-ml/azure.ai.ml.entities.amlcompute) for details.
> [!WARNING] > When setting the `location` parameter, if it is a different region than your workspace or datastores you may see increased network latency and data transfer costs. The latency and costs can occur when creating the cluster, and when running jobs on it.
machine-learning How To Manage Environments In Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-environments-in-studio.md
# Manage software environments in Azure Machine Learning studio
-In this article, learn how to create and manage Azure Machine Learning [environments](/python/api/azureml-core/azureml.core.environment.environment) in the Azure Machine Learning studio. Use the environments to track and reproduce your projects' software dependencies as they evolve.
+In this article, learn how to create and manage Azure Machine Learning [environments](/python/api/azure-ai-ml/azure.ai.ml.entities.environment) in the Azure Machine Learning studio. Use the environments to track and reproduce your projects' software dependencies as they evolve.
The examples in this article show how to:
machine-learning How To Monitor Tensorboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-monitor-tensorboard.md
How you launch TensorBoard with Azure Machine Learning experiments depends on th
* Azure Machine Learning compute instance - no downloads or installation necessary * Complete the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md) to create a dedicated notebook server pre-loaded with the SDK and the sample repository. * In the samples folder on the notebook server, find two completed and expanded notebooks by navigating to these directories:
- * **how-to-use-azureml > track-and-monitor-experiments > tensorboard > export-run-history-to-tensorboard > export-run-history-to-tensorboard.ipynb**
- * **how-to-use-azureml > track-and-monitor-experiments > tensorboard > tensorboard > tensorboard.ipynb**
+ * **v1 (`<version>`) > how-to-use-azureml > track-and-monitor-experiments > tensorboard > export-run-history-to-tensorboard > export-run-history-to-tensorboard.ipynb**
+ * **v1 (`<version>`) > how-to-use-azureml > track-and-monitor-experiments > tensorboard > tensorboard > tensorboard.ipynb**
* Your own Juptyer notebook server * [Install the Azure Machine Learning SDK](/python/api/overview/azure/ml/install) with the `tensorboard` extra * [Create an Azure Machine Learning workspace](quickstart-create-resources.md).
machine-learning Migrate To V2 Assets Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-assets-data.md
Title: 'Migrate data management from SDK v1 to v2'
+ Title: 'Upgrade data management to SDK v2'
-description: Migrate data management from v1 to v2 of Azure Machine Learning SDK
+description: Upgrade data management from v1 to v2 of Azure Machine Learning SDK
-# Migrate data management from SDK v1 to v2
+# Upgrade data management to SDK v2
In V1, an AzureML dataset can either be a `Filedataset` or a `Tabulardataset`. In V2, an AzureML data asset can be a `uri_folder`, `uri_file` or `mltable`.
machine-learning Migrate To V2 Assets Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-assets-model.md
Title: Migrate model management from SDK v1 to SDK v2
+ Title: Upgrade model management to SDK v2
-description: Migrate model management from v1 to v2 of Azure Machine Learning SDK
+description: Upgrade model management from v1 to v2 of Azure Machine Learning SDK
-# Migrate model management from SDK v1 to SDK v2
+# Upgrade model management to SDK v2
This article gives a comparison of scenario(s) in SDK v1 and SDK v2.
machine-learning Migrate To V2 Command Job https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-command-job.md
Title: 'Migrate script run from SDK v1 to SDK v2'
+ Title: 'Upgrade script run to SDK v2'
-description: Migrate how to run a script from SDK v1 to SDK v2
+description: Upgrade how to run a script from SDK v1 to SDK v2
-# Migrate script run from SDK v1 to SDK v2
+# Upgrade script run to SDK v2
In SDK v2, "experiments" and "runs" are consolidated into jobs. A job has a type. Most jobs are command jobs that run a `command`, like `python main.py`. What runs in a job is agnostic to any programming language, so you can run `bash` scripts, invoke `python` interpreters, run a bunch of `curl` commands, or anything else.
-To migrate, you'll need to change your code for submitting jobs to SDK v2. What you run _within_ the job doesn't need to be migrated to SDK v2. However, it's recommended to remove any code specific to Azure ML from your model training scripts. This separation allows for an easier transition between local and cloud and is considered best practice for mature MLOps. In practice, this means removing `azureml.*` lines of code. Model logging and tracking code should be replaced with MLflow. For more details, see [how to use MLflow in v2](how-to-use-mlflow-cli-runs.md).
+To upgrade, you'll need to change your code for submitting jobs to SDK v2. What you run _within_ the job doesn't need to be migrated to SDK v2. However, it's recommended to remove any code specific to Azure ML from your model training scripts. This separation allows for an easier transition between local and cloud and is considered best practice for mature MLOps. In practice, this means removing `azureml.*` lines of code. Model logging and tracking code should be replaced with MLflow. For more details, see [how to use MLflow in v2](how-to-use-mlflow-cli-runs.md).
This article gives a comparison of scenario(s) in SDK v1 and SDK v2.
machine-learning Migrate To V2 Deploy Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-deploy-endpoints.md
Title: Migrate endpoints from SDK v1 to SDK v2
+ Title: Upgrade deployment endpoints to SDK v2
-description: Migrate deployment endpoints from v1 to v2 of Azure Machine Learning SDK
+description: Upgrade deployment endpoints from v1 to v2 of Azure Machine Learning SDK
-# Migrate deployment endpoints from SDK v1 to SDK v2
+# Upgrade deployment endpoints to SDK v2
We newly introduced [online endpoints](concept-endpoints.md) and batch endpoints as v2 concepts. There are several deployment funnels such as managed online endpoints, [kubernetes online endpoints](how-to-attach-kubernetes-anywhere.md) (including AKS and Arch-enabled Kubernetes) in v2, and ACI and AKS webservices in v1. In this article, we'll focus on the comparison of deploying to ACI webservices (v1) and managed online endpoints (v2).
machine-learning Migrate To V2 Execution Automl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-execution-automl.md
Title: Migrate AutoML from SDK v1 to SDK v2
+ Title: Upgrade AutoML to SDK v2
description: Migrate AutoML from v1 to v2 of Azure Machine Learning SDK
-# Migrate AutoML from SDK v1 to SDK v2
+# Upgrade AutoML to SDK v2
In SDK v2, "experiments" and "runs" are consolidated into jobs.
machine-learning Migrate To V2 Execution Hyperdrive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-execution-hyperdrive.md
Title: 'Migrate from v1 to v2: '
+ Title: Upgrade hyperparameter tuning to SDK v2
-description: Migrate from v1 to v2 of Azure Machine Learning SDK
+description: Upgrade hyperparameter tuning from v1 to v2 of Azure Machine Learning SDK
-# Migrate hyperparameter tuning from SDK v1 to SDK v2
+# Upgrade hyperparameter tuning to SDK v2
In SDK v2, tuning hyperparameters are consolidated into jobs.
machine-learning Migrate To V2 Execution Parallel Run Step https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-execution-parallel-run-step.md
Title: Migrate parallel run step from SDK v1 to SDK v2
+ Title: Upgrade parallel run step to SDK v2
-description: Migrate parallel run step from v1 to v2 of Azure Machine Learning SDK
+description: Upgrade parallel run step from v1 to v2 of Azure Machine Learning SDK
-# Migrate parallel run step from SDK v1 to SDK v2
+# Upgrade parallel run step to SDK v2
In SDK v2, "Parallel run step" is consolidated into job concept as `parallel job`. Parallel job keeps the same target to empower users to accelerate their job execution by distributing repeated tasks on powerful multi-nodes compute clusters. On top of parallel run step, v2 parallel job provides extra benefits:
In SDK v2, "Parallel run step" is consolidated into job concept as `parallel job
- Simplify input schema, which replaces `Dataset` as input by using the v2 `data asset` concept. You can easily use your local files or a blob directory URI as the inputs to a parallel job. - More powerful features are under development in v2 parallel job only. For example, you can resume a failed/canceled parallel job to continue processing the failed or unprocessed mini-batches by reusing the successful results to save duplicate effort.
-To migrate your current sdk v1 parallel run step to v2, you'll need to
+To upgrade your current sdk v1 parallel run step to v2, you'll need to
- Use `parallel_run_function` to create parallel job by replacing `ParallelRunConfig` and `ParallelRunStep` in v1. - Migrate your v1 pipeline to v2. Then invoke your v2 parallel job as a step in your v2 pipeline. See [how to migrate pipeline from v1 to v2](migrate-to-v2-execution-pipeline.md) for the details about pipeline migration.
machine-learning Migrate To V2 Execution Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-execution-pipeline.md
Title: Migrate pipelines from SDK v1 to SDK v2
+ Title: Upgrade pipelines to SDK v2
-description: Migrate pipelines from v1 to v2 of Azure Machine Learning SDK
+description: Upgrade pipelines from v1 to v2 of Azure Machine Learning SDK
-# Migrate pipelines from SDK v1 to SDK v2
+# Upgrade pipelines to SDK v2
In SDK v2, "pipelines" are consolidated into jobs.
machine-learning Migrate To V2 Local Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-local-runs.md
Title: Migrate local runs from SDK v1 to SDK v2
+ Title: Upgrade local runs to SDK v2
-description: Migrate local runs from v1 to v2 of Azure Machine Learning SDK
+description: Upgrade local runs from v1 to v2 of Azure Machine Learning SDK
-# Migrate local runs from SDK v1 to SDK v2
+# Upgrade local runs to SDK v2
Local runs are similar in both V1 and V2. Use the "local" string when setting the compute target in either version.
machine-learning Migrate To V2 Resource Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-resource-compute.md
Title: 'Migrate compute management from SDK v1 to v2'
+ Title: 'Upgrade compute management to v2'
-description: Migrate compute management from v1 to v2 of Azure Machine Learning SDK
+description: Upgrade compute management from v1 to v2 of Azure Machine Learning SDK
-# Migrate compute management from SDK v1 to v2
+# Upgrade compute management to v2
The compute management functionally remains unchanged with the v2 development platform.
machine-learning Migrate To V2 Resource Datastore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-resource-datastore.md
Title: Migrate datastore management from SDK v1 to SDK v2
+ Title: Upgrade datastore management to SDK v2
-description: Migrate datastore management from v1 to v2 of Azure Machine Learning SDK
+description: Upgrade datastore management from v1 to v2 of Azure Machine Learning SDK
-# Migrate datastore management from SDK v1 to SDK v2
+# Upgrade datastore management to SDK v2
Azure Machine Learning Datastores securely keep the connection information to your data storage on Azure, so you don't have to code it in your scripts. The V2 Datastore concept remains mostly unchanged compared with V1. The difference is that we won't support SQL-like data sources via AzureML Datastores; we'll support SQL-like data sources via the AzureML data import and export functionalities instead.
machine-learning Migrate To V2 Resource Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-resource-workspace.md
Title: Migrate workspace management from SDK v1 to SDK v2
+ Title: Upgrade workspace management to SDK v2
-description: Migrate workspace management from v1 to v2 of Azure Machine Learning SDK
+description: Upgrade workspace management from v1 to v2 of Azure Machine Learning SDK
-# Migrate workspace management from SDK v1 to SDK v2
+# Upgrade workspace management to SDK v2
The workspace functionally remains unchanged with the V2 development platform. However, there are network-related changes to be aware of. For details, see [Network Isolation Change with Our New API Platform on Azure Resource Manager](how-to-configure-network-isolation-with-v2.md?tabs=python)
machine-learning How To Use Reinforcement Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-reinforcement-learning.md
Run this code in either of these environments. We recommend you try Azure Machin
- Azure Machine Learning compute instance
- - Learn how to clone sample notebooks in [Tutorial: Train and deploy a model](../tutorial-train-deploy-notebook.md).
- - Clone the **how-to-use-azureml** folder instead of **tutorials**
+ - Learn how to clone sample notebooks in [Tutorial: Train and deploy a model](tutorial-train-deploy-notebook.md).
+ - Clone the **v1 (`<version>`) > how-to-use-azureml** folder instead of **tutorials**
- Run the virtual network setup notebook located at `/how-to-use-azureml/reinforcement-learning/setup/devenv_setup.ipynb` to open network ports used for distributed reinforcement learning. - Run the sample notebook `/how-to-use-azureml/reinforcement-learning/atari-on-distributed-compute/pong_rllib.ipynb`
machine-learning Tutorial Train Deploy Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-train-deploy-notebook.md
Learn how to take the following actions:
Azure Machine Learning includes a cloud notebook server in your workspace for an install-free and pre-configured experience. Use [your own environment](../how-to-configure-environment.md#local) if you prefer to have control over your environment, packages, and dependencies. - ## Clone a notebook folder You complete the following experiment setup and run steps in Azure Machine Learning studio. This consolidated interface includes machine learning tools to perform data science scenarios for data science practitioners of all skill levels.
You complete the following experiment setup and run steps in Azure Machine Learn
1. On the left, select **Notebooks**.
-1. Select the **Open terminal** tool to open a terminal window.
-
- :::image type="content" source="media/tutorial-train-deploy-notebook/open-terminal.png" alt-text="Screenshot: Open terminal from Notebooks section.":::
+1. At the top, select the **Samples** tab.
-1. On the top bar, select the compute instance you created during the [Quickstart: Get started with Azure Machine Learning](../quickstart-create-resources.md) to use if it's not already selected. Start the compute instance if it is stopped.
+1. Open the **v1 (`<version>`)** folder. The version number represents the current v1 release for the Python SDK.
-1. In the terminal window, clone the MachineLearningNotebooks repository:
+1. Select the **...** button at the right of the **tutorials** folder, and then select **Clone**.
- ```bash
- git clone --depth 1 https://github.com/Azure/MachineLearningNotebooks
- ```
+ :::image type="content" source="media/tutorial-train-deploy-notebook/clone-tutorials.png" alt-text="Screenshot that shows the Clone tutorials folder.":::
-1. If necessary, refresh the list of files with the **Refresh** tool to see the newly cloned folder under your user folder.
+1. A list of folders shows each user who accesses the workspace. Select your folder to clone the **tutorials** folder there.
## Open the cloned notebook
-1. Open the **MachineLearningNotebooks** folder that was cloned into your **Files** section.
+1. Open the **tutorials** folder that was cloned into your **User files** section.
-1. Select the **quickstart-azureml-in-10mins.ipynb** file from your **MachineLearningNotebooks/tutorials/compute-instance-quickstarts/quickstart-azureml-in-10mins** folder.
+1. Select the **quickstart-azureml-in-10mins.ipynb** file from your **tutorials/compute-instance-quickstarts/quickstart-azureml-in-10mins** folder.
:::image type="content" source="media/tutorial-train-deploy-notebook/expand-folder.png" alt-text="Screenshot shows the Open tutorials folder.":::
mysql Concepts Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-azure-ad-authentication.md
Microsoft Azure Active Directory (Azure AD) authentication is a mechanism of con
1. Select your preferred authentication method for accessing the MySQL flexible server. By default, the authentication selected will be MySQL authentication only. Select Azure Active Directory authentication only or MySQL and Azure Active Directory authentication to enable Azure AD authentication. 2. Select the user managed identity (UMI) with the following privileges: _User.Read.All, GroupMember.Read.All_ and _Application.Read.ALL_, which can be used to configure Azure AD authentication.
-3. Add Azure AD Admin. It can be Azure AD Users, Groups or security principles, which will have access to Azure Database for MySQL flexible server.
+3. Add Azure AD Admin. It can be Azure AD Users or Groups, which will have access to Azure Database for MySQL flexible server.
4. Create database users in your database mapped to Azure AD identities. 5. Connect to your database by retrieving a token for an Azure AD identity and logging in, as shown in the sketch after these steps.
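For example, here's a hedged Azure PowerShell sketch of step 5; the token resource URL, server name, and user name are assumptions for illustration.

```azurepowershell-interactive
# Acquire an Azure AD access token for Azure Database for MySQL.
$token = (Get-AzAccessToken -ResourceUrl "https://ossrdbms-aad.database.windows.net").Token

# Then pass $token as the password when you connect, for example with the mysql client:
#   mysql -h mydemoserver.mysql.database.azure.com --user user@contoso.com --enable-cleartext-plugin --password=$token
```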
Microsoft Azure Active Directory (Azure AD) authentication is a mechanism of con
## Architecture
-User-managed identities are required for Azure Active Directory authentication. When a User-Assigned Identity is linked to the flexible server, the Managed Identity Resource Provider (MSRP) issues a certificate internally to that identity, and when the managed identity is deleted, the corresponding service principal is automatically removed. The service then uses the managed identity to request access tokens for services that support Azure AD authentication. Only a User-assigned Managed Identity (UMI) is currently supported by Azure Database for MySQL-Flexible Server. For more information, see [Managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) in Azure. Azure takes care of rolling the credentials that are used by the service instance.
+User-managed identities are required for Azure Active Directory authentication. When a User-Assigned Identity is linked to the flexible server, the Managed Identity Resource Provider (MSRP) issues a certificate internally to that identity, and when the managed identity is deleted, the corresponding service principal is automatically removed. The service then uses the managed identity to request access tokens for services that support Azure AD authentication. Only a User-assigned Managed Identity (UMI) is currently supported by Azure Database for MySQL-Flexible Server. For more information, see [Managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) in Azure.
The following high-level diagram summarizes how authentication works using Azure AD authentication with Azure Database for MySQL. The arrows indicate communication pathways.
The following high-level diagram summarizes how authentication works using Azure
1. Your application can request a token from the Azure Instance Metadata Service identity endpoint. 2. Using the client ID and certificate, a call is made to Azure AD to request an access token.
-3. A JSON Web Token (JWT) access token is returned by Azure AD.
-4. Your application sends the access token on a call to Azure Database for MySQL flexible server.
+3. A JSON Web Token (JWT) access token is returned by Azure AD. Your application sends the access token on a call to Azure Database for MySQL flexible server.
+4. MySQL flexible server validates the token with Azure AD.
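
As an illustration of this flow from the client side, here is a minimal, hedged sketch that requests a token from the IMDS identity endpoint on an Azure VM that has a managed identity and then presents it as the password; the server name, Azure AD user, and the use of `jq` are assumptions for the example rather than values from the article.

```bash
# Hypothetical values; replace with your own server and Azure AD user.
SERVER="mydemoserver.mysql.database.azure.com"
AAD_USER="aaduser@contoso.com"

# Steps 1-3: request a JWT access token for the ossrdbms resource from the
# Azure Instance Metadata Service identity endpoint (requires a managed identity
# on the VM; jq is used here only to parse the JSON response).
TOKEN=$(curl -s -H "Metadata:true" \
  "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fossrdbms-aad.database.windows.net" \
  | jq -r .access_token)

# Steps 4-5: present the token as the password; the server validates it with Azure AD.
mysql -h "$SERVER" --user "$AAD_USER" --enable-cleartext-plugin --password="$TOKEN"
```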
## Administrator structure
When using Azure AD authentication, there are two Administrator accounts for the
:::image type="content" source="media/concepts-azure-ad-authentication/azure-ad-admin-structure.jpg" alt-text="Diagram of Azure ad admin structure."::: Methods of authentication for accessing the MySQL flexible server include: -- MySQL Authentication only - Create a MySQL admin login and password to access your MySQL server with MySQL authentication. -- Only Azure AD authentication - Authenticate as an Azure AD admin using an existing Azure AD user or group; the server parameter **aad_auth_only** will be _enabled_. -- Authentication with MySQL and Azure AD - Authenticate using MySQL admin credentials or as an Azure AD admin using an existing Azure AD user or group; the server parameter **aad_auth_only** will be _disabled_.
+- MySQL Authentication only - This is the default option. Only native MySQL authentication with a MySQL login and password will be used to access Azure Database for MySQL flexible server.
+- Only Azure AD authentication - MySQL Native authentication will be disabled, and users will be able to authenticate using only their Azure AD user and token. To enable this mode, the server parameter **aad_auth_only** will be _enabled_.
+- Authentication with MySQL and Azure AD - Both native MySQL authentication and Azure AD authentication are supported. To enable this mode, the server parameter **aad_auth_only** will be _disabled_.
## Permissions To allow the UMI to read from Microsoft Graph as the server identity, the following permissions are required. Alternatively, give the UMI the [Directory Readers](../../active-directory/roles/permissions-reference.md#directory-readers) role.
-These permissions should be granted before you provision a logical server or managed instance. After you grant the permissions to the UMI, they're enabled for all servers or instances that are created with the UMI assigned as a server identity.
- > [!IMPORTANT] > Only a [Global Administrator](/azure/active-directory/roles/permissions-reference#global-administrator) or [Privileged Role Administrator](/azure/active-directory/roles/permissions-reference#privileged-role-administrator) can grant these permissions.
These permissions should be granted before you provision a logical server or man
- [GroupMember.Read.All](/graph/permissions-reference#group-permissions): Allows access to Azure AD group information. - [Application.Read.ALL](/graph/permissions-reference#application-resource-permissions): Allows access to Azure AD service principal (application) information.
-To create a new Azure AD database user, you must connect as the Azure AD administrator. This is demonstrated in Configure and Login with Azure AD for Azure Database for MySQL.
+For guidance about how to grant and use the permissions, refer to [Microsoft Graph permissions](/graph/permissions-reference).
+
+After you grant the permissions to the UMI, they're enabled for all servers or instances that are created with the UMI assigned as a server identity.
+
+To create a new Azure AD database user, you must connect as the Azure AD administrator.
Any Azure AD authentication is only possible if the Azure AD admin was created for Azure Database for MySQL Flexible server. If the Azure Active Directory admin was removed from the server, existing Azure Active Directory users created previously can no longer connect to the database using their Azure Active Directory credentials.
Please note that management operations, such as adding new users, are only suppo
## Next steps -- To learn how to create and populate Azure AD, and then configure Azure AD with Azure Database for MySQL, see [Set up Azure Active Directory authentication for Azure Database for MySQL flexible server](how-to-azure-ad.md)
+- To learn how to configure Azure AD with Azure Database for MySQL, see [Set up Azure Active Directory authentication for Azure Database for MySQL flexible server](how-to-azure-ad.md)
mysql Concepts Customer Managed Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-customer-managed-key.md
With data encryption with customer-managed keys for Azure Database for MySQL - Flexible Server Preview, you can bring your own key (BYOK) for data protection at rest and implement separation of duties for managing keys and data. With customer managed keys (CMKs), the customer is responsible for, and in full control of, key lifecycle management (key creation, upload, rotation, deletion), key usage permissions, and auditing operations on keys.
-Data encryption with CMKs is set at the server level. For a given server, a CMK, called the key encryption key (KEK), is used to encrypt the data encryption key (DEK) used by the service. The KEK is an asymmetric key stored in a customer-owned and customer-managed [Azure Key Vault instance](../../key-vault/general/security-features.md). Key Vault is highly available and scalable secure storage for RSA cryptographic keys, optionally backed by FIPS 140-2 Level 2 validated hardware security modules (HSMs). Key Vault doesn't allow direct access to a stored key, but instead provides encryption/decryption services using the key to the authorized entities. The key can be generated by the key vault, imported, or [transferred to the key vault from an on-premises HSM device](../../key-vault/keys/hsm-protected-keys.md).
- > [!Note] > In the Public Preview, we can't enable geo redundancy on a flexible server that has CMK enabled, nor can we enable CMK on a flexible server that has geo redundancy enabled.
-## Terminology and description
-
-**Data encryption key (DEK)**: A symmetric AES256 key used to encrypt a partition or block of data. Encrypting each block of data with a different key makes crypto analysis attacks more difficult. Access to DEKs is needed by the resource provider or application instance that is encrypting and decrypting a specific block. When you replace a DEK with a new key, only the data in its associated block must be re-encrypted with the new key.
-
-**Key encryption key (KEK)**: An encryption key used to encrypt the DEKs. A KEK that never leaves Key Vault allows the DEKs themselves to be encrypted and controlled. The entity that has access to the KEK might be different than the entity that requires the DEK. Since the KEK is required to decrypt the DEKs, the KEK is effectively a single point by which DEKs can be effectively deleted by deletion of the KEK. The DEKs, encrypted with the KEKs, are stored separately. Only an entity with access to the KEK can decrypt these DEKs. For more information, see [Security in encryption rest](../../security/fundamentals/encryption-atrest.md).
- ## Benefits Data encryption with customer-managed keys for Azure Database for MySQL Flexible server provides the following benefits:
Data encryption with customer-managed keys for Azure Database for MySQL Flexible
- Full control over the key-lifecycle, including rotation of the key to align with corporate policies - Central management and organization of keys in Azure Key Vault - Ability to implement separation of duties between security officers, and DBA and system administrators -
+-
## How does data encryption with a customer-managed key work? Managed identities in Azure Active Directory (Azure AD) provide Azure services an alternative to storing credentials in the code by provisioning an automatically assigned identity that can be used to authenticate to any service supporting Azure AD authentication, such as Azure Key Vault (AKV). Azure Database for MySQL Flexible server currently supports only User-assigned Managed Identity (UMI). For more information, see [Managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) in Azure.
The UMI must have the following access to the key vault:
- **Wrap Key**: To be able to encrypt the DEK. The encrypted DEK is stored in the Azure Database for MySQL Flexible server. - **Unwrap Key**: To be able to decrypt the DEK. Azure Database for MySQL Flexible server needs the decrypted DEK to encrypt/decrypt the data
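
As a hedged sketch of how these key permissions might be granted with the Azure CLI (assuming the vault uses the access-policy permission model; the vault, identity, and resource group names below are placeholders), the UMI can be given the Get, List, Wrap Key, and Unwrap Key permissions like this:

```bash
# Placeholder names; adjust to your environment.
VAULT_NAME="contoso-kv"
UMI_NAME="mysql-flex-umi"
RESOURCE_GROUP="my-resource-group"

# Look up the managed identity's service principal (object) ID.
UMI_OBJECT_ID=$(az identity show --name "$UMI_NAME" --resource-group "$RESOURCE_GROUP" \
  --query principalId -o tsv)

# Grant the key permissions the flexible server needs on the vault.
az keyvault set-policy --name "$VAULT_NAME" \
  --object-id "$UMI_OBJECT_ID" \
  --key-permissions get list wrapKey unwrapKey
```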
+### Terminology and description
+
+**Data encryption key (DEK)**: A symmetric AES256 key used to encrypt a partition or block of data. Encrypting each block of data with a different key makes crypto analysis attacks more difficult. Access to DEKs is needed by the resource provider or application instance that is encrypting and decrypting a specific block. When you replace a DEK with a new key, only the data in its associated block must be re-encrypted with the new key.
+
+**Key encryption key (KEK)**: An encryption key used to encrypt the DEKs. A KEK that never leaves Key Vault allows the DEKs themselves to be encrypted and controlled. The entity that has access to the KEK might be different than the entity that requires the DEK. Since the KEK is required to decrypt the DEKs, the KEK is effectively a single point by which DEKs can be effectively deleted by deletion of the KEK. The DEKs, encrypted with the KEKs, are stored separately. Only an entity with access to the KEK can decrypt these DEKs. For more information, see [Security in encryption rest](../../security/fundamentals/encryption-atrest.md).
+
+### How it works
+
+Data encryption with CMKs is set at the server level. For a given server, a CMK, called the key encryption key (KEK), is used to encrypt the data encryption key (DEK) used by the service. The KEK is an asymmetric key stored in a customer-owned and customer-managed [Azure Key Vault instance](../../key-vault/general/security-features.md). Key Vault is highly available and scalable secure storage for RSA cryptographic keys, optionally backed by FIPS 140-2 Level 2 validated hardware security modules (HSMs). Key Vault doesn't allow direct access to a stored key, but instead provides encryption/decryption services using the key to the authorized entities. The key can be generated by the key vault, imported, or [transferred to the key vault from an on-premises HSM device](../../key-vault/keys/hsm-protected-keys.md).
+ When you configure a flexible server to use a CMK stored in the key vault, the server sends the DEK to the key vault for encryption. Key Vault returns the encrypted DEK, which is stored in the user database. Similarly, when needed, the flexible server will send the protected DEK to the key vault for decryption. :::image type="content" source="media/concepts-customer-managed-key/mysql-customer-managed-key.jpg" alt-text="Diagram of how data encryption with a customer-managed key works.":::
After logging is enabled, auditors can use Azure Monitor to review Key Vault aud
> [!Note] > Permission changes can take up to 10 minutes to impact the key vault. This includes revoking access permissions to the TDE protector in AKV, and users within this time frame may still have access permissions.
-**Requirements for configuring data encryption for Azure Database for MySQL Flexible server**
+## Requirements for configuring data encryption for Azure Database for MySQL Flexible server
Before you attempt to configure Key Vault, be sure to address the following requirements.
To avoid issues while setting up customer-managed data encryption during restore
- Initiate the restore or read replica creation process from the source Azure Database for MySQL Flexible server. - On the restored/replica server, revalidate the customer-managed key in the data encryption settings to ensure that the User managed identity is given _Get, List, Wrap key_ and _Unwrap key_ permissions to the key stored in Key Vault.
+> [!Note]
+> Using the same identity and key as on the source server is not mandatory when performing a restore.
+ ## Next steps - [Data encryption with Azure CLI (Preview)](how-to-data-encryption-cli.md) - [Data encryption with Azure portal (Preview)](how-to-data-encryption-portal.md)-- [Azure Key Vault instance](../../key-vault/general/security-features.md) - [Security in encryption rest](../../security/fundamentals/encryption-atrest.md)
+- [Active Directory authentication (Preview)](concepts-azure-ad-authentication.md)
mysql Connect Workbench https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/connect-workbench.md
To connect to Azure Database for MySQL Flexible Server using MySQL Workbench:
| Hostname | *server name* | Specify the server name value that was used when you created the Azure Database for MySQL earlier. Our example server shown is mydemoserver.mysql.database.azure.com. Use the fully qualified domain name (\*.mysql.database.azure.com) as shown in the example. Follow the steps in the previous section to get the connection information if you do not remember your server name. | | Port | 3306 | Always use port 3306 when connecting to Azure Database for MySQL. | | Username | *server admin login name* | Type in the server admin login username supplied when you created the Azure Database for MySQL earlier. Our example username is myadmin. Follow the steps in the previous section to get the connection information if you do not remember the username.
- | Password | your password | Click **Store in Vault...** button to save the password. |
+ | Password | your password | Select the **Store in Vault...** button to save the password. |
-3. Click **Test Connection** to test if all parameters are correctly configured.
+3. Select **Test Connection** to test if all parameters are correctly configured.
-4. Click **OK** to save the connection.
+4. Select **OK** to save the connection.
-5. In the listing of **MySQL Connections**, click the tile corresponding to your server, and then wait for the connection to be established.
+5. In the listing of **MySQL Connections**, select the tile corresponding to your server, and then wait for the connection to be established.
A new SQL tab opens with a blank editor where you can type your queries.
To connect to Azure Database for MySQL Flexible Server using MySQL Workbench:
:::image type="content" source="./media/connect-workbench/3-workbench-sql-tab.png" alt-text="MySQL Workbench SQL Tab to run sample SQL code":::
-2. To run the sample SQL Code, click the lightening bolt icon in the toolbar of the **SQL File** tab.
+2. To run the sample SQL code, select the lightning bolt icon in the toolbar of the **SQL File** tab.
3. Notice the three tabbed results in the **Result Grid** section in the middle of the page. 4. Notice the **Output** list at the bottom of the page. The status of each command is shown.
mysql How To Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-azure-ad.md
In this tutorial, you learn how to:
## Configure the Azure AD Admin
-Only an Azure AD Admin user can create/enable users for Azure AD-based authentication. To create an Azure AD Admin user, please follow the following steps.
+To create an Azure AD Admin user, follow these steps.
- In the Azure portal, select the instance of Azure Database for MySQL Flexible server that you want to enable for Azure AD. -- Under Security pane, select Authentication:
+- Under the Security pane, select **Authentication**:
:::image type="content" source="media//how-to-azure-ad/azure-ad-configuration.jpg" alt-text="Diagram of how to configure Azure ad authentication."::: - There are three types of authentication available:
- - MySQL authentication only ΓÇô By default, MySQL uses the built-in mysql_native_password authentication plugin, which performs authentication using the native password hashing method
+ - **MySQL authentication only** ΓÇô By default, MySQL uses the built-in mysql_native_password authentication plugin, which performs authentication using the native password hashing method
- - Azure Active Directory authentication only ΓÇô Only allows authentication with an Azure AD account. Disables mysql_native_password authentication and turns _ON_ the server parameter **aad_auth_only**
+ - **Azure Active Directory authentication only** ΓÇô Only allows authentication with an Azure AD account. Disables mysql_native_password authentication and turns _ON_ the server parameter aad_auth_only
- - MySQL and Azure Active Directory authentication ΓÇô Allows authentication using a native MySQL password or an Azure AD account. Turns _OFF_ the server parameter **aad_auth_only**
+ - **MySQL and Azure Active Directory authentication** ΓÇô Allows authentication using a native MySQL password or an Azure AD account. Turns _OFF_ the server parameter aad_auth_only
-- Select Identity ΓÇô Select/Add User assigned managed identity. To allow the UMI to read from Microsoft Graph as the server identity, the following permissions are required. Alternatively, give the UMI the [Directory Readers](../../active-directory/roles/permissions-reference.md#directory-readers) role.
+ > [!NOTE]
+ > The server parameter aad_auth_only stays set to ON when the authentication type is changed to Azure Active Directory authentication only. We recommend disabling it manually when you opt for MySQL authentication only in the future.
+
+- **Select Identity** ΓÇô Select/Add User assigned managed identity. To allow the UMI to read from Microsoft Graph as the server identity, the following permissions are required. Alternatively, give the UMI the [Directory Readers](../../active-directory/roles/permissions-reference.md#directory-readers) role.
- [User.Read.All](/graph/permissions-reference#user-permissions): Allows access to Azure AD user information. - [GroupMember.Read.All](/graph/permissions-reference#group-permissions): Allows access to Azure AD group information. - [Application.Read.ALL](/graph/permissions-reference#application-resource-permissions): Allows access to Azure AD service principal (application) information.
-These permissions should be granted before you provision a logical server or managed instance. After you grant the permissions to the UMI, they're enabled for all servers or instances that are created with the UMI assigned as a server identity.
+For guidance about how to grant and use the permissions, refer [Microsoft Graph permissions](/graph/permissions-reference)
+
+After you grant the permissions to the UMI, they're enabled for all servers or instances that are created with the UMI assigned as a server identity.
> [!IMPORTANT] > Only a [Global Administrator](/azure/active-directory/roles/permissions-reference#global-administrator) or [Privileged Role Administrator](/azure/active-directory/roles/permissions-reference#privileged-role-administrator) can grant these permissions. -- Select a valid Azure AD user or an Azure AD group in the customer tenant to be Azure AD administrator. Once Azure AD authentication support has been enabled, Azure AD Admins can be added as security principals with permissions to add Azure AD Users to the MySQL server.
+- Select a valid Azure AD user or an Azure AD group in the customer tenant to be **Azure AD administrator**. Once Azure AD authentication support has been enabled, Azure AD Admins can be added as security principals with permissions to add Azure AD Users to the MySQL server.
> [!NOTE] > Only one Azure AD admin can be created per MySQL server and selection of another one will overwrite the existing Azure AD admin configured for the server.
The access token validity is anywhere between 5 minutes to 60 minutes. We recomm
When connecting you need to use the access token as the MySQL user password. When using GUI clients such as MySQLWorkbench, you can use the method described above to retrieve the token.
+> [!NOTE]
+> The newly restored server will also have the server parameter aad_auth_only set to ON if it was ON on the source server during failover. If you wish to use MySQL authentication on the restored server, you must manually disable this server parameter. Otherwise, an Azure AD Admin must be configured.
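
If you need to turn the parameter off after such a restore, a minimal sketch with the Azure CLI might look like the following (the resource group and server names are placeholders):

```bash
# Placeholder resource names; adjust to your environment.
az mysql flexible-server parameter set \
  --resource-group my-resource-group \
  --server-name mydemoserver \
  --name aad_auth_only \
  --value OFF
```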
+ #### Using MySQL CLI When using the CLI, you can use this short-hand to connect:
mysql -h mydb.mysql.database.azure.com \
--user user@tenant.onmicrosoft.com \ --enable-cleartext-plugin \ --password=`az account get-access-token --resource-type oss-rdbms --output tsv --query accessToken`
-```
-
+```
#### Using MySQL Workbench * Launch MySQL Workbench and Click the Database option, then click "Connect to database" * In the hostname field, enter the MySQL FQDN eg. mysql.database.azure.com
-* In the username field, enter the MySQL Azure Active Directory administrator name and append this with MySQL server name, not the FQDN e.g. user@tenant.onmicrosoft.com@
+* In the username field, enter the MySQL Azure Active Directory administrator name and append this with MySQL server name, not the FQDN e.g. user@tenant.onmicrosoft.com
* In the password field, click "Store in Vault" and paste in the access token from file e.g. C:\temp\MySQLAccessToken.txt * Click the advanced tab and ensure that you check "Enable Cleartext Authentication Plugin" * Click OK to connect to the database
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md
This article summarizes new releases and features in Azure Database for MySQL -
The read replica feature allows you to replicate data from an Azure Database for MySQL flexible server to a read-only server. You can replicate the source server to up to 10 replicas. This functionality is now extended to support HA enabled servers within same region.[Learn more](concepts-read-replicas.md) -- - **Azure Active Directory authentication for Azure Database for MySQL ΓÇô Flexible Server (Public Preview)** You can now authenticate to Azure Database for MySQL - Flexible server using Microsoft Azure Active Directory (Azure AD) using identities. With Azure AD authentication, you can manage database user identities and other Microsoft services in a central location, which simplifies permission management. [Learn More](concepts-azure-ad-authentication.md)
+- **Known issues**
+ - The server parameter aad_auth_only stays set to ON when the authentication type is changed to Azure Active Directory authentication only. We recommend disabling it manually when you opt for MySQL authentication only in the future.
+
+ - The newly restored server will also have the server parameter aad_auth_only set to ON if it was ON on the source server during failover. If you wish to use MySQL authentication on the restored server, you must manually disable this server parameter. Otherwise, an Azure AD Admin must be configured.
- **Customer managed keys data encryption ΓÇô Azure Database for MySQL ΓÇô Flexible Server (Preview)**
This article summarizes new releases and features in Azure Database for MySQL -
- **Change Timezone of your Azure Database for MySQL - Flexible Server in a single step** Previously to change time_zone of your Azure Database for MySQL - Flexible Server required two steps to take effect. Now you no longer need to call the procedure mysql.az_load_timezone() to populate the mysql.time_zone_name table. Flexible Server timezone can be changed directly by just changing the server parameter time_zone from [portal](./how-to-configure-server-parameters-portal.md#working-with-the-time-zone-parameter) or [CLI](./how-to-configure-server-parameters-cli.md#working-with-the-time-zone-parameter). +
+- **Known issues**
+
+ - The server parameter aad_auth_only stays set to ON when the authentication type is changed to Azure Active Directory authentication only. We recommend disabling it manually when you opt for MySQL authentication only in the future.
+
+ - The newly restored server will also have the server parameter aad_auth_only set to ON if it was ON on the source server during failover. If you wish to use MySQL authentication on the restored server, you must manually disable this server parameter. Otherwise, an Azure AD Admin must be configured.
+ ## August 2022 - **Server logs for Azure Database for MySQL - Flexible Server**
mysql Concepts Migrate Mydumper Myloader https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/concepts-migrate-mydumper-myloader.md
+
+ Title: Migrate large databases to Azure Database for MySQL using mydumper/myloader
+description: This article explains two common ways to back up and restore databases in your Azure Database for MySQL, using tool mydumper/myloader
+++++ Last updated : 06/20/2022++
+# Migrate large databases to Azure Database for MySQL using mydumper/myloader
++
+Azure Database for MySQL is a managed service that you use to run, manage, and scale highly available MySQL databases in the cloud. To migrate MySQL databases larger than 1 TB to Azure Database for MySQL, consider using community tools such as [mydumper/myloader](https://centminmod.com/mydumper.html), which provide the following benefits:
+
+* Parallelism, to help reduce the migration time.
+* Better performance, by avoiding expensive character set conversion routines.
+* An output format, with separate files for tables, metadata, etc., that makes it easy to view/parse data.
+* Consistency, by maintaining a snapshot across all threads.
+* Accurate primary and replica log positions.
+* Easy management, as they support Perl Compatible Regular Expressions (PCRE) for specifying database and tables inclusions and exclusions.
+* Schema and data go together; you don't need to handle them separately as with other logical migration tools.
+
+This quickstart shows you how to install the tools and then back up and restore a MySQL database by using mydumper/myloader.
+
+## Prerequisites
+
+Before you begin migrating your MySQL database, you need to:
+
+1. Create an Azure Database for MySQL server by using the [Azure portal](../flexible-server/quickstart-create-server-portal.md).
+
+2. Create an Azure VM running Linux by using the [Azure portal](../../virtual-machines/linux/quick-create-portal.md) (preferably Ubuntu).
+ > [!Note]
+ > Prior to installing the tools, consider the following points:
+ >
+ > * If your source is on-premises and has a high bandwidth connection to Azure (using ExpressRoute), consider installing the tool on an Azure VM.<br>
+    > * If bandwidth between the source and target is limited, consider installing mydumper near the source and myloader near the target server. You can use a tool such as **[AzCopy](../../storage/common/storage-use-azcopy-v10.md)** to move the data from on-premises or other cloud solutions to Azure.
+
+3. To install the mysql client, do the following steps:
+
+ * Update the package index on the Azure VM running Linux by running the following command:
+ ```bash
+ $ sudo apt update
+ ```
+ * Install the mysql client package by running the following command:
+ ```bash
+ $ sudo apt install mysql-client
+ ```
+
+## Install mydumper/myloader
+
+To install mydumper/myloader, do the following steps.
+
+1. Depending on your OS distribution, download the appropriate package for mydumper/myloader by running the following command:
+
+ ```bash
+ $ wget https://github.com/maxbube/mydumper/releases/download/v0.10.1/mydumper_0.10.1-2.$(lsb_release -cs)_amd64.deb
+ ```
+
+ > [!Note]
+ > $(lsb_release -cs) helps to identify your distribution.
+
+2. To install the .deb package for mydumper, run the following command:
+
+ ```bash
+ $ dpkg -i mydumper_0.10.1-2.$(lsb_release -cs)_amd64.deb
+ ```
+
+ > [!Tip]
+    > The command you use to install the package will differ based on your Linux distribution, as the installers differ. mydumper/myloader is available for the following distributions: Fedora, Red Hat, Ubuntu, Debian, CentOS, openSUSE, and macOS. For more information, see **[How to install mydumper](https://github.com/maxbube/mydumper#how-to-install-mydumpermyloader)**
+
+## Create a backup using mydumper
+
+* To create a backup using mydumper, run the following command:
+
+ ```bash
+ $ mydumper --host=<servername> --user=<username> --password=<Password> --outputdir=./backup --rows=100000 --compress --build-empty-files --threads=16 --compress-protocol --trx-consistency-only --ssl --regex '^(<Db_name>\.)' -L mydumper-logs.txt
+ ```
+
+This command uses the following variables:
+
+* **--host:** The host to connect to
+* **--user:** Username with the necessary privileges
+* **--password:** User password
+* **--rows:** Try to split tables into chunks of this many rows
+* **--outputdir:** Directory to dump output files to
+* **--regex:** Regular expression for Database matching.
+* **--trx-consistency-only:** Transactional consistency only
+* **--threads:** Number of threads to use; the default is 4. We recommend using a value equal to 2x the vCores of the computer.
+
+ >[!Note]
+    >For more information on other options you can use with mydumper, run the following command:
+    >**mydumper --help**. For more details, see the [mydumper\myloader documentation](https://centminmod.com/mydumper.html)<br>
+    >To dump multiple databases in parallel, you can modify the regex variable as shown in the example: **--regex '^(DbName1\.|DbName2\.)'** (a fuller sketch follows this note)
+
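
For example, a hedged sketch of dumping two databases in parallel with a modified regex could look like the following; the server, user, and database names are placeholders, and the remaining options mirror the backup command shown above:

```bash
# Placeholder values; adjust to your environment.
mydumper --host=mydemoserver.mysql.database.azure.com \
  --user=myadmin --password='<Password>' \
  --outputdir=./backup --rows=100000 --compress --build-empty-files \
  --threads=16 --compress-protocol --trx-consistency-only --ssl \
  --regex '^(DbName1\.|DbName2\.)' -L mydumper-logs.txt
```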
+## Restore your database using myloader
+
+* To restore the database that you backed up using mydumper, run the following command:
+
+ ```bash
+ $ myloader --host=<servername> --user=<username> --password=<Password> --directory=./backup --queries-per-transaction=500 --threads=16 --compress-protocol --ssl --verbose=3 -e 2>myloader-logs.txt
+ ```
+
+This command uses the following variables:
+
+* **--host:** The host to connect to
+* **--user:** Username with the necessary privileges
+* **--password:** User password
+* **--directory:** Location where the backup is stored.
+* **--queries-per-transaction:** We recommend setting this to a value of no more than 500.
+* **--threads:** Number of threads to use; the default is 4. We recommend using a value equal to 2x the vCores of the computer.
+
+> [!Tip]
+> For more information on other options you can use with myloader, run the following command:
+**myloader --help**
+
+After the database is restored, it's always recommended to validate the data consistency between the source and the target databases; one simple check is sketched below.
+
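
As one hedged example of such a check, the sketch below compares row counts for a single table on the source and the target; the host names, user, schema, and table are placeholders:

```bash
# Placeholder values; adjust to your environment.
QUERY="SELECT COUNT(*) FROM mydb.mytable;"

# Run the same count on the source and the target and compare the results.
mysql -h source-server.mysql.database.azure.com -u myadmin -p -e "$QUERY"
mysql -h target-server.mysql.database.azure.com -u myadmin -p -e "$QUERY"
```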
+> [!Note]
+> Submit any issues or feedback regarding the mydumper/myloader tools **[here](https://github.com/maxbube/mydumper/issues)**.
+
+## Next steps
+
+* Learn more about the [mydumper/myloader project in GitHub](https://github.com/maxbube/mydumper).
+* Learn [how to migrate large MySQL databases](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/best-practices-for-migrating-large-databases-to-azure-database/ba-p/1362699).
+* [Minimal Downtime Migration of Azure Database for MySQL ΓÇô Single Server to Azure Database for MySQL ΓÇô Flexible Server](../migrate/how-to-migrate-single-flexible-minimum-downtime.md)
+* Learn more about Data-in replication [Replicate data into Azure Database for MySQL Flexible Server](../flexible-server/concepts-data-in-replication.md) and [Configure Azure Database for MySQL Flexible Server Data-in replication](../flexible-server/how-to-data-in-replication.md)
+* Commonly encountered [migration errors](../single-server/how-to-troubleshoot-common-errors.md)
mysql How To Decide On Right Migration Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/how-to-decide-on-right-migration-tools.md
+
+ Title: Select the right tools for migration to Azure Database for MySQL
+description: "This article provides a decision table, which helps customers in picking the right tools for migrating into Azure Database for MySQL"
+++ Last updated : 09/06/2022+++++
+# Select the right tools for migration to Azure Database for MySQL
++
+Migrations are multi-step projects that can be tough to complete. Migrating database servers across platforms involves more than data and schema migration. There are also several other components, such as server configuration parameters, networking, access control rules, etc., to move. These are required to ensure that the functionality of the database server in the new target platform mimics the source.
+
+For detailed information and use cases about migrating databases to Azure Database for MySQL, refer to the [Database Migration Guide](../migrate/mysql-on-premises-azure-db/01-mysql-migration-guide-intro.md). This document provides pointers to help you successfully plan and execute a MySQL migration to Azure.
+
+In general, migrations can be categorized as either offline or online.
+
+- With an offline migration, the source server is taken offline, and a dump and restore of the databases is performed on the target server.
+
+- With an online migration (migration with minimal downtime), the source server allows updates, and the migration solution will take care of replicating the ongoing changes between the source and target server along with the initial dump and restore on the target.
+
+If your application can afford some downtime, offline migrations are always the preferred choice, as they're simple and easy to execute. However, an online migration is the best choice if your application can only afford minimal downtime. Migrations of most OLTP systems, such as payment processing and e-commerce, fall into this category.
+
+## Decision table
+
+There are both offline and online migration scenarios to help you select the right tools for migrating to Azure Database for MySQL - Flexible Server.
+
+### Offline
+
+To help you select the right tools for migrating to Azure Database for MySQL, consider the detail in the following table for offline migrations.
+
+| Migration Scenario | Tool(s) | Details | More information |
+|--||||
+| Single to Flexible Server (Azure portal) | Database Migration Service (DMS) and the Azure portal | [Tutorial: DMS with the Azure portal (offline)](../../dms/tutorial-mysql-azure-single-to-flex-offline-portal.md) | Recommended |
+| Single to Flexible Server (Azure CLI) | [Custom shell script](https://github.com/Azure/azure-mysql/tree/master/azuremysqltomysqlmigrate) | [Migrate from Azure Database for MySQL - Single Server to Flexible Server in five easy steps!](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/migrate-from-azure-database-for-mysql-single-server-to-flexible/ba-p/2674057) | The [script](https://github.com/Azure/azure-mysql/tree/master/azuremysqltomysqlmigrate) also moves other server components such as security settings and server parameter configurations. |
+| MySQL databases (>= 1 TB) to Azure Database for MySQL | Dump and Restore using **MyDumper/MyLoader** + High Compute VM | [Migrate large databases to Azure Database for MySQL using mydumper/myloader](concepts-migrate-mydumper-myloader.md) | [Best Practices for migrating large databases to Azure Database for MySQL](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/best-practices-for-migrating-large-databases-to-azure-database/ba-p/1362699) |
+| MySQL databases (< 1 TB) to Azure Database for MySQL | Database Migration Service (DMS) and the Azure portal | [Migrate MySQL databases to Azure Database for MySQL using DMS](../../dms/tutorial-mysql-azure-mysql-offline-portal.md) | If network bandwidth between source and target is good (e.g: High-speed express route), use Azure DMS (database migration service) |
+| Amazon RDS for MySQL databases (< 1 TB) to Azure Database for MySQL | MySQL Workbench | [Migrate Amazon RDS for MySQL databases ( < 1 TB) to Azure Database for MySQL using MySQL Workbench](../single-server/how-to-migrate-rds-mysql-workbench.md) | If you have low network bandwidth between source and Azure, use **Mydumper/Myloader + High compute VM** to take advantage of compression settings to efficiently move data over low speed networks |
+| Import and export MySQL databases (< 1 TB) in Azure Database for MySQL | mysqldump or MySQL Workbench Import/Export utility | [Import and export - Azure Database for MySQL](../single-server/concepts-migrate-import-export.md) | Use the **mysqldump** and **MySQL Workbench Export/Import** utility tool to perform offline migrations for smaller databases. |
+
+### Online
+
+To help you select the right tools for migrating to Azure Database for MySQL - Flexible Server, consider the detail in the following table for online migrations.
+
+| Migration Scenario | Tool(s) | Details | More information |
+|--||||
+| Single to Flexible Server (Azure portal) | Database Migration Service (DMS) | [Tutorial: DMS with the Azure portal (online)](../../dms/tutorial-mysql-Azure-single-to-flex-online-portal.md) | Recommended |
+| Single to Flexible Server | Mydumper/Myloader with Data-in replication | [Migrate Azure Database for MySQL ΓÇô Single Server to Azure Database for MySQL ΓÇô Flexible Server with open-source tools](how-to-migrate-single-flexible-minimum-downtime.md) | N/A |
+| Azure Database for MySQL Flexible Server Data-in replication | **Mydumper/Myloader with Data-in replication** | [Configure Data-in replication - Azure Database for MySQL Flexible Server](../flexible-server/how-to-data-in-replication.md) | N/A |
+
+## Next steps
+* [Migrate MySQL on-premises to Azure Database for MySQL](../migrate/mysql-on-premises-azure-db/01-mysql-migration-guide-intro.md)
mysql How To Migrate Single Flexible Minimum Downtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/how-to-migrate-single-flexible-minimum-downtime.md
+
+ Title: "Tutorial: Migrate Azure Database for MySQL ΓÇô Single Server to Azure Database for MySQL ΓÇô Flexible Server with open-source tools"
+description: This article describes how to perform a minimal-downtime migration of Azure Database for MySQL ΓÇô Single Server to Azure Database for MySQL ΓÇô Flexible Server.
+++ Last updated : 09/06/2022+++++
+# Migrate Azure Database for MySQL ΓÇô Single Server to Azure Database for MySQL ΓÇô Flexible Server with open-source tools
+
+You can migrate an instance of Azure Database for MySQL ΓÇô Single Server to Azure Database for MySQL ΓÇô Flexible Server with minimum downtime to your applications by using a combination of open-source tools such as mydumper/myloader and Data-in replication.
+
+> [!NOTE]
+> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+
+Data-in replication is a technique that replicates data changes from the source server to the destination server based on the binary log file position method. In this scenario, the MySQL instance operating as the source (on which the database changes originate) writes updates and changes as "events" to the binary log. The information in the binary log is stored in different logging formats according to the database changes being recorded. Replicas are configured to read the binary log from the source and to execute the events in the binary log on the replica's local database.
+
+If you set up Data-in replication to synchronize data from one instance of Azure Database for MySQL to another, you can do a selective cutover of your applications from the primary (or source database) to the replica (or target database).
+
+In this tutorial, you'll use mydumper/myloader and Data-in replication to migrate a sample database ([classicmodels](https://www.mysqltutorial.org/mysql-sample-database.aspx)) from an instance of Azure Database for MySQL - Single Server to an instance of Azure Database for MySQL - Flexible Server, and then synchronize data.
+
+In this tutorial, you learn how to:
+
+* Configure Network Settings for Data-in replication for different scenarios.
+* Configure Data-in replication between the primary and replica.
+* Test the replication.
+* Cutover to complete the migration.
+
+## Prerequisites
+
+To complete this tutorial, you need:
+
+* An instance of Azure Database for MySQL Single Server running version 5.7 or 8.0.
+ > [!Note]
+ > If you're running Azure Database for MySQL Single Server version 5.6, upgrade your instance to 5.7 and then configure data in replication. To learn more, see [Major version upgrade in Azure Database for MySQL - Single Server](../single-server/how-to-major-version-upgrade.md).
+* An instance of Azure Database for MySQL Flexible Server. For more information, see the article [Create an instance in Azure Database for MySQL Flexible Server](../flexible-server/quickstart-create-server-portal.md).
+ > [!Note]
+ > Configuring Data-in replication for zone redundant high availability servers is not supported. If you would like to have zone redundant HA for your target server, then perform these steps:
+ >
+ > 1. Create the server with Zone redundant HA enabled
+ > 2. Disable HA
+ > 3. Follow the article to setup data-in replication
+ > 4. Post cutover remove the Data-in replication configuration
+ > 5. Enable HA
+ >
+ > *Make sure that **[GTID_Mode](../flexible-server/concepts-read-replicas.md#global-transaction-identifier-gtid)** has the same setting on the source and target servers.*
+
+* To connect and create a database using MySQL Workbench. For more information, see the article [Use MySQL Workbench to connect and query data](../flexible-server/connect-workbench.md).
+* To ensure that you have an Azure VM running Linux in the same region (or on the same VNet, with private access) that hosts your source and target databases.
+* To install mysql client or MySQL Workbench (the client tools) on your Azure VM. Ensure that you can connect to both the primary and replica server. For the purposes of this article, mysql client is installed.
+* To install mydumper/myloader on your Azure VM. For more information, see the article [mydumper/myloader](concepts-migrate-mydumper-myloader.md).
+* To download and run the sample database script for the [classicmodels](https://www.mysqltutorial.org/wp-content/uploads/2018/03/mysqlsampledatabase.zip) database on the source server.
+* Configure [binlog_expire_logs_seconds](../flexible-server/concepts-server-parameters.md#binlog_expire_logs_seconds) on the source server to ensure that binlogs aren't purged before the replica commits the changes. After a successful cutover, you can reset the value. (A quick way to check the current setting is sketched after this list.)
+
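
As a quick, hedged check of the current retention setting on the source (connection values are placeholders in the same style as the rest of this article):

```bash
# Placeholder connection values; adjust to your environment.
# Note: on MySQL 5.7 the equivalent variable is expire_logs_days.
mysql -h <primary_server>.mysql.database.azure.com -u <username>@<primary_server> -p \
  -e "SHOW VARIABLES LIKE 'binlog_expire_logs_seconds';"
```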
+## Configure networking requirements
+
+To configure the Data-in replication, you need to ensure that the target can connect to the source over port 3306. Based on the type of endpoint set up on the source, perform the appropriate following steps.
+
+* If a public endpoint is enabled on the source, then ensure that the target can connect to the source by enabling "Allow access to Azure services" in the firewall rule. To learn more, see [Firewall rules - Azure Database for MySQL](../single-server/concepts-firewall-rules.md#connecting-from-azure).
+* If a private endpoint and *[Deny public access](../single-server/concepts-data-access-security-private-link.md#deny-public-access-for-azure-database-for-mysql)* is enabled on the source, then install the private link in the same VNet that hosts the target. To learn more, see [Private Link - Azure Database for MySQL](../single-server/concepts-data-access-security-private-link.md).
+
+## Configure Data-in replication
+
+To configure Data in replication, perform the following steps:
+
+1. Sign in to the Azure VM on which you installed the mysql client tool.
+
+2. Connect to the source and target using the mysql client tool.
+
+3. Use the mysql client tool to determine whether log_bin is enabled on the source by running the following command:
+
+ ```sql
+ SHOW VARIABLES LIKE 'log_bin';
+ ```
+
+ > [!Note]
+    > With Azure Database for MySQL Single Server with large storage, which supports up to 16 TB, this is enabled by default.
+
+ > [!Tip]
+ > With Azure Database for MySQL Single Server, which supports up to 4TB, this is not enabled by default. However, if you promote a [read replica](../single-server/how-to-read-replicas-portal.md) for the source server and then delete read replica, the parameter will be set to ON.
+
+4. Based on the SSL enforcement for the source server, create a user in the source server with the replication permission by running the appropriate command.
+
+    If you're using SSL, run the following command:
+
+ ```sql
+ CREATE USER 'syncuser'@'%' IDENTIFIED BY 'yourpassword';
+    GRANT REPLICATION SLAVE ON *.* TO 'syncuser'@'%' REQUIRE SSL;
+ ```
+
+    If you're not using SSL, run the following command:
+
+ ```sql
+ CREATE USER 'syncuser'@'%' IDENTIFIED BY 'yourpassword';
+    GRANT REPLICATION SLAVE ON *.* TO 'syncuser'@'%';
+ ```
+
+5. To back up the database using mydumper, run the following command on the Azure VM where we installed the mydumper\myloader:
+
+ ```bash
+ $ mydumper --host=<primary_server>.mysql.database.azure.com --user=<username>@<primary_server> --password=<Password> --outputdir=./backup --rows=100 -G -E -R -z --trx-consistency-only --compress --build-empty-files --threads=16 --compress-protocol --ssl --regex '^(classicmodels\.)' -L mydumper-logs.txt
+ ```
+
+ > [!Tip]
+    > The option **--trx-consistency-only** is required for transactional consistency while taking the backup.
+    >
+    > * The mydumper equivalent of mysqldump's --single-transaction.
+    > * Useful if all your tables are InnoDB.
+    > * The "main" thread only needs to hold the global lock until the "dump" threads can start a transaction.
+    > * Offers the shortest duration of global locking.
+
+
+ The variables in this command are explained below:
+
+ * **--host:** Name of the primary server
+ * **--user:** Name of a user (in the format username@servername since the primary server is running Azure Database for MySQL - Single Server). You can use server admin or a user having SELECT and RELOAD permissions.
+    * **--password:** Password of the user above
+
+ For more information about using mydumper, see [mydumper/myloader](../single-server/concepts-migrate-mydumper-myloader.md)
+
+6. Read the metadata file to determine the binary log file name and offset by running the following command:
+
+ ```bash
+ $ cat ./backup/metadata
+ ```
+
+ In this command, **./backup** refers to the output directory used in the command in the previous step.
+
+ The results should appear as shown in the following image:
+
+ :::image type="content" source="./media/how-to-migrate-single-flexible-minimum-downtime/metadata.png" alt-text="Continuous sync with the Azure Database Migration Service":::
+
+ Make sure to note the binary file name for use in later steps.
+
+7. Restore the database using myloader by running the following command:
+
+ ```bash
+ $ myloader --host=<servername>.mysql.database.azure.com --user=<username> --password=<Password> --directory=./backup --queries-per-transaction=100 --threads=16 --compress-protocol --ssl --verbose=3 -e 2>myloader-logs.txt
+ ```
+
+ The variables in this command are explained below:
+
+ * **--host:** Name of the replica server
+ * **--user:** Name of a user. You can use server admin or a user with read\write permission capable of restoring the schemas and data to the database
+    * **--password:** Password for the user above
+
+8. Depending on the SSL enforcement on the primary server, connect to the replica server using the mysql client tool and perform the following steps.
+
+ * If SSL enforcement is enabled, then:
+
+ i. Download the certificate needed to communicate over SSL with your Azure Database for MySQL server from [here](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem).
+
+ ii. Open the file in notepad and paste the contents to the section ΓÇ£PLACE YOUR PUBLIC KEY CERTIFICATE'S CONTEXT HEREΓÇ£.
+
+ ```sql
+ SET @cert = '--BEGIN CERTIFICATE--
+ PLACE YOUR PUBLIC KEY CERTIFICATE'S CONTEXT HERE
+ --END CERTIFICATE--'
+ ```
+
+ iii. To configure Data in replication, run the following command:
+
+ ```sql
+ CALL mysql.az_replication_change_master('<Primary_server>.mysql.database.azure.com', '<username>@<primary_server>', '<Password>', 3306, '<File_Name>', <Position>, @cert);
+ ```
+
+ > [!Note]
+ > Determine the position and file name from the information obtained in step 6.
+
+ * If SSL enforcement isn't enabled, then run the following command:
+
+ ```sql
+    CALL mysql.az_replication_change_master('<Primary_server>.mysql.database.azure.com', '<username>@<primary_server>', '<Password>', 3306, '<File_Name>', <Position>, '');
+ ```
+
+9. To start replication from the replica server, call the below stored procedure.
+
+ ```sql
+ call mysql.az_replication_start;
+ ```
+
+10. To check the replication status, on the replica server, run the following command:
+
+ ```sql
+ show slave status \G;
+ ```
+
+ > [!Note]
+ > If you're using MySQL Workbench the \G modifier is not required.
+
+    If the state of *Slave_IO_Running* and *Slave_SQL_Running* is Yes and the value of *Seconds_Behind_Master* is 0, then replication is working well. *Seconds_Behind_Master* indicates how late the replica is; if the value is something other than 0, then the replica is still processing updates. (A quick way to watch these values is sketched after this procedure.)
+
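
One hedged way to keep an eye on just these fields from the Azure VM (connection values are placeholders):

```bash
# Placeholder connection values; adjust to your environment.
mysql -h <replica_server>.mysql.database.azure.com -u <username> -p \
  -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'
```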
+## Testing the replication (optional)
+
+To confirm that Data-in replication is working properly, you can verify that the changes to the tables in primary were replicated to the replica.
+
+1. Identify a table to use for testing, for example, the Customers table, and then confirm that the number of entries it contains is the same on the primary and replica servers by running the following command on each:
+
+ ```
+ select count(*) from customers;
+ ```
+
+2. Make a note of the entry count for later comparison.
+
+    To test replication, try adding some data to the customer tables on the primary server and then verify that the new data is replicated. In this case, you'll add two rows to a table on the primary server and then confirm that they're replicated on the replica server.
+
+3. In the Customers table on the primary server, insert rows by running the following command:
+
+ ```sql
+ insert into `customers`(`customerNumber`,`customerName`,`contactLastName`,`contactFirstName`,`phone`,`addressLine1`,`addressLine2`,`city`,`state`,`postalCode`,`country`,`salesRepEmployeeNumber`,`creditLimit`) values
+ (<ID>,'name1','name2','name3 ','11.22.5555','54, Add',NULL,'Add1',NULL,'44000','country',1370,'21000.00');
+ ```
+
+4. To check the replication status, call the *show slave status \G* to confirm that replication is working as expected.
+
+5. To confirm that the count is the same, on the replica server, run the following command:
+
+ ```sql
+ select count(*) from customers;
+ ```
+
+## Ensure a successful cutover
+
+To ensure a successful cutover, perform the following tasks:
+
+1. Configure the appropriate server-level firewall and virtual network rules to connect to target Server. You can compare the firewall rules for the source and [target](../flexible-server/how-to-manage-firewall-portal.md#create-a-firewall-rule-after-server-is-created) from the portal.
+2. Configure appropriate logins and database level permissions in the target server. You can run `SELECT * FROM mysql.user;` on the source and target servers to compare.
+3. Make sure that all the incoming connections to Azure Database for MySQL Single Server are stopped.
+ > [!Tip]
+    > You can set the Azure Database for MySQL Single Server to read only (one way to do this is sketched after this list).
+4. Ensure that the replica has caught up with the primary by running *show slave status \G* and confirming that the value for the *Seconds_Behind_Master* parameter is 0.
+5. Redirect clients and client applications to the target instance of Azure Database for MySQL Flexible Server.
+6. Perform the final cutover by running the mysql.az_replication_stop stored procedure, which will stop replication from the replica server.
+7. *Call mysql.az_replication_remove_master* to remove the Data-in replication configuration.
+
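
As a hedged illustration of the read-only tip in step 3 above, the source Single Server's `read_only` parameter can be switched on with the Azure CLI; the resource group and server names are placeholders, and you should confirm this approach is appropriate for your environment before applying it:

```bash
# Placeholder resource names; adjust to your environment.
az mysql server configuration set \
  --resource-group my-resource-group \
  --server-name mydemoserver \
  --name read_only \
  --value ON
```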
+At this point, your applications are connected to the new Azure Database for MySQL Flexible server and changes in the source will no longer replicate to the target.
+[Create and manage Azure Database for MySQL firewall rules by using the Azure portal](../single-server/how-to-manage-firewall-using-portal.md)
+## Next steps
+
+* Learn more about Data-in replication [Replicate data into Azure Database for MySQL Flexible Server](../flexible-server/concepts-data-in-replication.md) and [Configure Azure Database for MySQL Flexible Server Data-in replication](../flexible-server/how-to-data-in-replication.md)
+* Learn more about [troubleshooting common errors in Azure Database for MySQL](../single-server/how-to-troubleshoot-common-errors.md).
+* Learn more about [migrating MySQL to Azure Database for MySQL offline using Azure Database Migration Service](../../dms/tutorial-mysql-azure-mysql-offline-portal.md).
mysql Quickstart Create Mysql Server Database Using Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/quickstart-create-mysql-server-database-using-bicep.md
Title: 'Quickstart: Create an Azure DB for MySQL - Bicep'
description: In this Quickstart, learn how to create an Azure Database for MySQL server with virtual network integration using Bicep. +
Last updated 05/02/2022
Azure Database for MySQL is a managed service that you use to run, manage, and scale highly available MySQL databases in the cloud. In this quickstart, you use Bicep to create an Azure Database for MySQL server with virtual network integration. You can create the server in the Azure portal, Azure CLI, or Azure PowerShell. ## Prerequisites
mysql App Development Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/app-development-best-practices.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-flexible-server](../includes/applies-to-mysql-single-flexible-server.md)] + Here are some best practices to help you build a cloud-ready application by using Azure Database for MySQL. These best practices can reduce development time for your app. ## Configuration of application and database resources
mysql Concept Monitoring Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concept-monitoring-best-practices.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-flexible-server](../includes/applies-to-mysql-single-flexible-server.md)] + Learn about the best practices that can be used to monitor your database operations and ensure that the performance is not compromised as data size grows. As we add new capabilities to the platform, we will continue to refine the best practices detailed in this section. ## Layout of the current monitoring toolkit
mysql Concept Operation Excellence Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concept-operation-excellence-best-practices.md
Last updated 06/20/2022
-# Best practices for server operations on Azure Database for MySQL -Single server
+# Best practices for server operations on Azure Database for MySQL - Single server
[!INCLUDE[applies-to-mysql-single-flexible-server](../includes/applies-to-mysql-single-flexible-server.md)] + Learn about the best practices for working with Azure Database for MySQL. As we add new capabilities to the platform, we will continue to focus on refining the best practices detailed in this section. ## Azure Database for MySQL Operational Guidelines
mysql Concept Performance Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concept-performance-best-practices.md
Last updated 07/22/2022
[!INCLUDE[applies-to-mysql-single-flexible-server](../includes/applies-to-mysql-single-flexible-server.md)] + Learn how to get best performance while working with your Azure Database for MySQL server. As we add new capabilities to the platform, we'll continue refining our recommendations in this section. ## Physical proximity
mysql Concept Reserved Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concept-reserved-pricing.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-flexible-server](../includes/applies-to-mysql-single-flexible-server.md)] + Azure Database for MySQL now helps you save money by prepaying for compute resources compared to pay-as-you-go prices. With Azure Database for MySQL reserved instances, you make an upfront commitment on MySQL server for a one or three year period to get a significant discount on the compute costs. To purchase Azure Database for MySQL reserved capacity, you need to specify the Azure region, deployment type, performance tier, and term. </br> ## How does the instance reservation work?
mysql Concepts Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-aks.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + Azure Kubernetes Service (AKS) provides a managed Kubernetes cluster you can use in Azure. Below are some options to consider when using AKS and Azure Database for MySQL together to create an application. ## Create Database before creating the AKS cluster
mysql Concepts Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-audit-logs.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + In Azure Database for MySQL, the audit log is available to users. The audit log can be used to track database-level activity and is commonly used for compliance. ## Configure audit logging
mysql Concepts Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-azure-ad-authentication.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + Microsoft Azure Active Directory (Azure AD) authentication is a mechanism of connecting to Azure Database for MySQL using identities defined in Azure AD. With Azure AD authentication, you can manage database user identities and other Microsoft services in a central location, which simplifies permission management.
mysql Concepts Azure Advisor Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-azure-advisor-recommendations.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-flexible-server](../includes/applies-to-mysql-single-flexible-server.md)] + Learn about how Azure Advisor is applied to Azure Database for MySQL and get answers to common questions. ## What is Azure Advisor for MySQL? The Azure Advisor system uses telemetry to issue performance and reliability recommendations for your MySQL database.
mysql Concepts Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-backup.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + Azure Database for MySQL automatically creates server backups and stores them in user configured locally redundant or geo-redundant storage. Backups can be used to restore your server to a point-in-time. Backup and restore are an essential part of any business continuity strategy because they protect your data from accidental corruption or deletion. ## Backups
mysql Concepts Business Continuity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-business-continuity.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + This article describes the capabilities that Azure Database for MySQL provides for business continuity and disaster recovery. Learn about options for recovering from disruptive events that could cause data loss or cause your database and application to become unavailable. Learn what to do when a user or application error affects data integrity, an Azure region has an outage, or your application requires maintenance. ## Features that you can use to provide business continuity
mysql Concepts Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-certificate-rotation.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]
-Azure Database for MySQL Single Server successfully completed the root certificate change on **February 15, 2021 (02/15/2021)** as part of standard maintenance and security best practices. This article gives you more details about the changes, the resources affected, and the steps needed to ensure that your application maintains connectivity to your database server.
+
+As part of standard maintenance and security best practices, Azure Database for MySQL Single Server will complete the root certificate change starting in October 2022. This article gives you more details about the changes, the resources affected, and the steps needed to ensure that your application maintains connectivity to your database server.
> [!NOTE] > This article applies to [Azure Database for MySQL - Single Server](single-server-overview.md) ONLY. For [Azure Database for MySQL - Flexible Server](../flexible-server/overview.md), the certificate needed to communicate over SSL is [DigiCert Global Root CA](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem)
To avoid interruption of your application's availability as a result of certif
In the future, after the new certificate is deployed on the server side, you can change your CA pem file to DigiCertGlobalRootG2.crt.pem. > [!NOTE]
-> Please don't drop or alter **Baltimore certificate** until the cert change is made. We'll send a communication after the change is done, and then it will be safe to drop the **Baltimore certificate**.
+> Please don't drop or alter **Baltimore certificate** until the cert change is made. We'll send a communication after the change is done and then it will be safe to drop the **Baltimore certificate**.
#### What if we removed the BaltimoreCyberTrustRoot certificate?
You'll start to encounter connectivity errors while connecting to your Azure Dat
#### If I'm not using SSL/TLS, do I still need to update the root CA?
- No actions are required if you aren't using SSL/TLS.
+No actions are required if you aren't using SSL/TLS.
+
+#### When will my single server instance undergo root certificate change?
+
+The migration from **BaltimoreCyberTrustRoot** to **DigiCertGlobalRootG2** will be carried out in phases across all Azure regions, starting in **October 2022**.
+To make sure that you don't lose connectivity to your server, follow the steps under [Create a combined CA certificate](#create-a-combined-ca-certificate).
+The combined CA certificate allows SSL connectivity to your single server instance with either of the two certificates.
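The following is a minimal sketch of building that combined CA file on a Linux client. It assumes the two root certificates published by DigiCert; the DigiCertGlobalRootG2 download URL is an assumption, so confirm it against the linked steps.

```bash
# Download both root CA certificates. The Baltimore URL is the one referenced
# in this article; the DigiCertGlobalRootG2 URL is assumed from DigiCert's
# published certificate locations.
wget https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem
wget https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem

# Concatenate them into a single combined CA file so the client trusts the
# server regardless of which root certificate is in use during the rotation.
cat BaltimoreCyberTrustRoot.crt.pem DigiCertGlobalRootG2.crt.pem > combined-ca.crt.pem
```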
+#### When can I remove the BaltimoreCyberTrustRoot certificate completely?
+
+Once the migration is completed successfully across all Azure regions, we'll send a communication, after which it's safe to switch to the single CA **DigiCertGlobalRootG2** certificate.
+#### If I don't specify a CA certificate when connecting to my single server instance over SSL, do I still need to perform [the steps](#create-a-combined-ca-certificate) mentioned above?
+
+If you have both CA root certificates in your [trusted root store](/windows-hardware/drivers/install/trusted-root-certification-authorities-certificate-store), no further action is required. This also applies to client drivers that use the local store to access the root CA certificate.
+ #### If I'm using SSL/TLS, do I need to restart my database server to update the root CA?
For a connector using Self-hosted Integration Runtime where you explicitly inclu
No. Since the change is only on the client side to connect to the database server, there's no maintenance downtime needed for the database server for this change.
-#### If I create a new server after February 15, 2021 (02/15/2021), will I be impacted?
-
-For servers created after February 15, 2021 (02/15/2021), you will continue to use the [BaltimoreCyberTrustRoot](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) for your applications to connect using SSL.
- #### How often does Microsoft update its certificates, and what is the expiry policy? The certificates used by Azure Database for MySQL are provided by trusted Certificate Authorities (CAs), so support for these certificates is tied to their support by the CA. The [BaltimoreCyberTrustRoot](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) certificate is scheduled to expire in 2025, so Microsoft will need to perform a certificate change before the expiry. Also, if there are unforeseen bugs in these predefined certificates, Microsoft will need to rotate the certificate as soon as possible, similar to the change performed on February 15, 2021, to ensure the service is secure and compliant at all times.
mysql Concepts Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-compatibility.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + This article describes the drivers and management tools that are compatible with Azure Database for MySQL Single Server. > [!NOTE]
mysql Concepts Connect To A Gateway Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-connect-to-a-gateway-node.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + In the Single Server deployment option, a gateway is used to redirect the connections to server instances. After the connection is established, the MySQL client displays the version of MySQL set in the gateway, not the actual version running on your MySQL server instance. To determine the version of your MySQL server instance, use the `SELECT VERSION();` command at the MySQL prompt. Review [Connectivity architecture](./concepts-connectivity-architecture.md#connectivity-architecture) to learn more about gateways in the Azure Database for MySQL service architecture. Because Azure Database for MySQL supports major versions v5.7 and v8.0, the default port 3306 runs MySQL client version 5.6 (the least common denominator) to support connections to servers of both supported major versions. However, if your application needs to connect to a specific major version, such as v5.7 or v8.0, you can do so by changing the port in your server connection string.
mysql Concepts Connection Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-connection-libraries.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + This article lists each library or driver that client programs can use when connecting to Azure Database for MySQL. ## Client interfaces
mysql Concepts Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-connectivity-architecture.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + This article explains the Azure Database for MySQL connectivity architecture and how the traffic is directed to your Azure Database for MySQL instance from clients both within and outside Azure. ## Connectivity architecture
mysql Concepts Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-connectivity.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + This article describes how to handle transient errors and connect efficiently to Azure Database for MySQL. ## Transient errors
mysql Concepts Data Access And Security Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-data-access-and-security-vnet.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + *Virtual network rules* are one firewall security feature that controls whether your Azure Database for MySQL server accepts communications that are sent from particular subnets in virtual networks. This article explains why the virtual network rule feature is sometimes your best option for securely allowing communication to your Azure Database for MySQL server. To create a virtual network rule, there must first be a [virtual network][vm-virtual-network-overview] (VNet) and a [virtual network service endpoint][vm-virtual-network-service-endpoints-overview-649d] for the rule to reference. The following picture illustrates how a Virtual Network service endpoint works with Azure Database for MySQL:
mysql Concepts Data Access Security Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-data-access-security-private-link.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + Private Link allows you to connect to various PaaS services in Azure via a private endpoint. Azure Private Link essentially brings Azure services inside your private Virtual Network (VNet). The PaaS resources can be accessed using the private IP address just like any other resource in the VNet. For a list to PaaS services that support Private Link functionality, review the Private Link [documentation](../../private-link/index.yml). A private endpoint is a private IP address within a specific [VNet](../../virtual-network/virtual-networks-overview.md) and Subnet.
mysql Concepts Data Encryption Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-data-encryption-mysql.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + Data encryption with customer-managed keys for Azure Database for MySQL enables you to bring your own key (BYOK) for data protection at rest. It also allows organizations to implement separation of duties in the management of keys and data. With customer-managed encryption, you're responsible for, and in full control of, a key's lifecycle, key usage permissions, and auditing of operations on keys. Data encryption with customer-managed keys for Azure Database for MySQL is set at the server level. For a given server, a customer-managed key, called the key encryption key (KEK), is used to encrypt the data encryption key (DEK) used by the service. The KEK is an asymmetric key stored in a customer-owned and customer-managed [Azure Key Vault](../../key-vault/general/security-features.md) instance. The key encryption key (KEK) and data encryption key (DEK) are described in more detail later in this article.
mysql Concepts Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-data-in-replication.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + Data-in Replication allows you to synchronize data from an external MySQL server into the Azure Database for MySQL service. The external server can be on-premises, in virtual machines, or a database service hosted by other cloud providers. Data-in Replication is based on the binary log (binlog) file position-based or GTID-based replication native to MySQL. To learn more about binlog replication, see the [MySQL binlog replication overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html). ## When to use Data-in Replication
mysql Concepts Database Application Development https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-database-application-development.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + This article discusses design considerations that a developer should follow when writing application code to connect to Azure Database for MySQL. > [!TIP]
mysql Concepts Firewall Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-firewall-rules.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + Firewalls prevent all access to your database server until you specify which computers have permission. The firewall grants access to the server based on the originating IP address of each request. To configure a firewall, create firewall rules that specify ranges of acceptable IP addresses. You can create firewall rules at the server level.
mysql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-high-availability.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + The Azure Database for MySQL service provides a guaranteed high level of availability with the financially backed service level agreement (SLA) of [99.99%](https://azure.microsoft.com/support/legal/sla/mysql) uptime. Azure Database for MySQL provides high availability during planned events such as user-initiated scale compute operation, and also when unplanned events such as underlying hardware, software, or network failures occur. Azure Database for MySQL can quickly recover from most critical circumstances, ensuring virtually no application down time when using this service. Azure Database for MySQL is suitable for running mission critical databases that require high uptime. Built on Azure architecture, the service has inherent high availability, redundancy, and resiliency capabilities to mitigate database downtime from planned and unplanned outages, without requiring you to configure any additional components.
mysql Concepts Infrastructure Double Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-infrastructure-double-encryption.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + Azure Database for MySQL uses storage [encryption of data at-rest](concepts-security.md#at-rest) for data using Microsoft's managed keys. Data, including backups, are encrypted on disk and this encryption is always on and can't be disabled. The encryption uses FIPS 140-2 validated cryptographic module and an AES 256-bit cipher for the Azure storage encryption. Infrastructure double encryption adds a second layer of encryption using service-managed keys. It uses FIPS 140-2 validated cryptographic module, but with a different encryption algorithm. This provides an additional layer of protection for your data at rest. The key used in Infrastructure double encryption is also managed by the Azure Database for MySQL service. Infrastructure double encryption is not enabled by default since the additional layer of encryption can have a performance impact.
mysql Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-limits.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + The following sections describe capacity, storage engine support, privilege support, data manipulation statement support, and functional limits in the database service. Also see [general limitations](https://dev.mysql.com/doc/mysql-reslimits-excerpt/5.6/en/limits.html) applicable to the MySQL database engine. ## Server parameters
mysql Concepts Migrate Dbforge Studio For Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-migrate-dbforge-studio-for-mysql.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + Looking to move your MySQL databases to Azure Database for MySQL? Consider using the migration tools in dbForge Studio for MySQL. With it, database transfer can be configured, saved, edited, automated, and scheduled. To complete the examples in this article, you'll need to download and install [dbForge Studio for MySQL](https://www.devart.com/dbforge/mysql/studio/).
mysql Concepts Migrate Dump Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-migrate-dump-restore.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-flexible-server](../includes/applies-to-mysql-single-flexible-server.md)] + This article explains two common ways to back up and restore databases in your Azure Database for MySQL - Dump and restore from the command-line (using mysqldump) - Dump and restore using PHPMyAdmin
mysql Concepts Migrate Import Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-migrate-import-export.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + This article explains two common approaches to importing and exporting data to an Azure Database for MySQL server by using MySQL Workbench. For detailed and comprehensive migration guidance, see the [migration guide resources](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide).
mysql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-monitoring.md
Last updated 06/20/2022
# Monitoring in Azure Database for MySQL [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]++ Monitoring data about your servers helps you troubleshoot and optimize for your workload. Azure Database for MySQL provides various metrics that give insight into the behavior of your server. ## Metrics
mysql Concepts Performance Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-performance-recommendations.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + **Applies to:** Azure Database for MySQL 5.7, 8.0 The Performance Recommendations feature analyzes your databases to create customized suggestions for improved performance. To produce the recommendations, the analysis looks at various database characteristics including schema. Enable [Query Store](concepts-query-store.md) on your server to fully utilize the Performance Recommendations feature. If performance schema is OFF, turning on Query Store enables performance_schema and a subset of performance schema instruments required for the feature. After implementing any performance recommendation, you should test performance to evaluate the impact of those changes.
mysql Concepts Planned Maintenance Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-planned-maintenance-notification.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + Learn how to prepare for planned maintenance events on your Azure Database for MySQL. ## What is a planned maintenance?
mysql Concepts Pricing Tiers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-pricing-tiers.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + You can create an Azure Database for MySQL server in one of three different service tiers: Basic, General Purpose, and Memory Optimized. The service tiers are differentiated by the amount of compute in vCores that can be provisioned, memory per vCore, and the storage technology used to store the data. All resources are provisioned at the MySQL server level. A server can have one or many databases. | Attribute | **Basic** | **General Purpose** | **Memory Optimized** |
mysql Concepts Query Performance Insight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-query-performance-insight.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + **Applies to:** Azure Database for MySQL 5.7, 8.0 Query Performance Insight helps you to quickly identify what your longest running queries are, how they change over time, and what waits are affecting them.
mysql Concepts Query Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-query-store.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + **Applies to:** Azure Database for MySQL 5.7, 8.0 The Query Store feature in Azure Database for MySQL provides a way to track query performance over time. Query Store simplifies performance troubleshooting by helping you quickly find the longest running and most resource-intensive queries. Query Store automatically captures a history of queries and runtime statistics, and it retains them for your review. It separates data by time windows so that you can see database usage patterns. Data for all users, databases, and queries is stored in the **mysql** schema database in the Azure Database for MySQL instance.
mysql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-read-replicas.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + The read replica feature allows you to replicate data from an Azure Database for MySQL server to a read-only server. You can replicate from the source server to up to five replicas. Replicas are updated asynchronously using the MySQL engine's native binary log (binlog) file position-based replication technology. To learn more about binlog replication, see the [MySQL binlog replication overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html). Replicas are new servers that you manage similarly to regular Azure Database for MySQL servers. For each read replica, you're billed for the provisioned compute in vCores and storage in GB/month.
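As an illustration of creating a replica with the Azure CLI (the server and resource group names below are placeholders, not values from this article):

```bash
# Create a read replica of an existing Azure Database for MySQL single server.
az mysql server replica create \
  --name mydemoserver-replica1 \
  --source-server mydemoserver \
  --resource-group myresourcegroup
```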
mysql Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-security.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + There are multiple layers of security that are available to protect the data on your Azure Database for MySQL server. This article outlines those security options. ## Information protection and encryption
mysql Concepts Server Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-server-logs.md
Last updated 06/20/2022
# Slow query logs in Azure Database for MySQL [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]++ In Azure Database for MySQL, the slow query log is available to users. Access to the transaction log is not supported. The slow query log can be used to identify performance bottlenecks for troubleshooting. For more information about the MySQL slow query log, see the MySQL reference manual's [slow query log section](https://dev.mysql.com/doc/refman/5.7/en/slow-query-log.html).
mysql Concepts Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-server-parameters.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + This article provides considerations and guidelines for configuring server parameters in Azure Database for MySQL. ## What are server parameters?
mysql Concepts Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-servers.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + This article provides considerations and guidelines for working with Azure Database for MySQL servers. ## What is an Azure Database for MySQL server?
mysql Concepts Ssl Connection Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-ssl-connection-security.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + Azure Database for MySQL supports connecting your database server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against "man in the middle" attacks by encrypting the data stream between the server and your application. > [!NOTE]
mysql Concepts Troubleshooting Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-troubleshooting-best-practices.md
Last updated 07/22/2022
[!INCLUDE[applies-to-mysql-single-flexible-server](../includes/applies-to-mysql-single-flexible-server.md)] + Use the sections below to keep your MySQL databases running smoothly and use this information as guiding principles for ensuring that the schemas are designed optimally and provide the best performance for your applications. ## Check the number of indexes
mysql Connect Cpp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-cpp.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + This quickstart demonstrates how to connect to an Azure Database for MySQL by using a C++ application. It shows how to use SQL statements to query, insert, update, and delete data in the database. This topic assumes you're familiar with developing using C++ and you're new to working with Azure Database for MySQL. ## Prerequisites
mysql Connect Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-csharp.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + This quickstart demonstrates how to connect to an Azure Database for MySQL by using a C# application. It shows how to use SQL statements to query, insert, update, and delete data in the database. ## Prerequisites
mysql Connect Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-go.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + This quickstart demonstrates how to connect to an Azure Database for MySQL from Windows, Ubuntu Linux, and Apple macOS platforms by using code written in the [Go](https://go.dev/) language. It shows how to use SQL statements to query, insert, update, and delete data in the database. This topic assumes that you are familiar with development using Go and that you are new to working with Azure Database for MySQL. ## Prerequisites
mysql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-java.md
Last updated 08/15/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + This article demonstrates creating a sample application that uses Java and [JDBC](https://en.wikipedia.org/wiki/Java_Database_Connectivity) to store and retrieve information in [Azure Database for MySQL](./index.yml). JDBC is the standard Java API to connect to traditional relational databases.
mysql Connect Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-nodejs.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + In this quickstart, you connect to an Azure Database for MySQL by using Node.js. You then use SQL statements to query, insert, update, and delete data in the database from Mac, Ubuntu Linux, and Windows platforms. This topic assumes that you're familiar with developing using Node.js, but you're new to working with Azure Database for MySQL.
mysql Connect Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-php.md
Last updated 06/20/2022
# Quickstart: Use PHP to connect and query data in Azure Database for MySQL [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]++ This quickstart demonstrates how to connect to an Azure Database for MySQL using a [PHP](https://secure.php.net/manual/intro-whatis.php) application. It shows how to use SQL statements to query, insert, update, and delete data in the database. ## Prerequisites
mysql Connect Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-python.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + In this quickstart, you connect to an Azure Database for MySQL by using Python. You then use SQL statements to query, insert, update, and delete data in the database from Mac, Ubuntu Linux, and Windows platforms. ## Prerequisites
mysql Connect Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-ruby.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + This quickstart demonstrates how to connect to an Azure Database for MySQL using a [Ruby](https://www.ruby-lang.org) application and the [mysql2](https://rubygems.org/gems/mysql2) gem from Windows, Ubuntu Linux, and Mac platforms. It shows how to use SQL statements to query, insert, update, and delete data in the database. This topic assumes that you are familiar with development using Ruby and that you are new to working with Azure Database for MySQL. ## Prerequisites
mysql Connect Workbench https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-workbench.md
Title: 'Quickstart: Connect - MySQL Workbench - Azure Database for MySQL'
description: This Quickstart provides the steps to use MySQL Workbench to connect and query data from Azure Database for MySQL. + - Last updated 06/20/2022
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + This quickstart demonstrates how to connect to an Azure Database for MySQL using the MySQL Workbench application. ## Prerequisites
To connect to Azure MySQL Server by using the GUI tool MySQL Workbench:
| **Setting** | **Suggested value** | **Field description** |
|---|---|---|
-| Connection Name | Demo Connection | Specify a label for this connection. |
+| Connection Name | Demo Connection | Specify a label for this connection. |
| Connection Method | Standard (TCP/IP) | Standard (TCP/IP) is sufficient. |
| Hostname | *server name* | Specify the server name value that was used when you created the Azure Database for MySQL earlier. Our example server shown is mydemoserver.mysql.database.azure.com. Use the fully qualified domain name (\*.mysql.database.azure.com) as shown in the example. Follow the steps in the previous section to get the connection information if you do not remember your server name. |
| Port | 3306 | Always use port 3306 when connecting to Azure Database for MySQL. |
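For reference, an equivalent connection from the mysql command-line client would look like the following sketch, which assumes an admin login named myadmin on the example server from the table (single server logins use the user@servername format):

```bash
# Connect to the example server over TCP/IP on the default port 3306.
mysql --host mydemoserver.mysql.database.azure.com \
      --port 3306 \
      --user myadmin@mydemoserver \
      -p
```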
mysql How To Alert On Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-alert-on-metric.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + This article shows you how to set up Azure Database for MySQL alerts using the Azure portal. You can receive an alert based on monitoring metrics for your Azure services. The alert triggers when the value of a specified metric crosses a threshold you assign. The alert triggers both when the condition is first met, and then afterwards when that condition is no longer being met.
mysql How To Auto Grow Storage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-auto-grow-storage-cli.md
Last updated 06/20/2022
# Auto-grow Azure Database for MySQL storage using the Azure CLI [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]++ This article describes how you can configure an Azure Database for MySQL server storage to grow without impacting the workload. A server that has [reached the storage limit](./concepts-pricing-tiers.md#reaching-the-storage-limit) is set to read-only. If storage auto grow is enabled, then for servers with less than 100 GB of provisioned storage, the provisioned storage size is increased by 5 GB as soon as the free storage is below the greater of 1 GB or 10% of the provisioned storage. For servers with more than 100 GB of provisioned storage, the provisioned storage size is increased by 5% when the free storage space is below 10 GB of the provisioned storage size. The maximum storage limits specified [here](./concepts-pricing-tiers.md#storage) apply.
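A minimal sketch of enabling auto-grow with the Azure CLI follows; the server and resource group names are placeholders, and the exact parameter name should be confirmed against the linked article:

```bash
# Enable storage auto-grow on an existing Azure Database for MySQL server.
az mysql server update \
  --resource-group myresourcegroup \
  --name mydemoserver \
  --auto-grow Enabled
```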
mysql How To Auto Grow Storage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-auto-grow-storage-portal.md
Last updated 06/20/2022
# Auto grow storage in Azure Database for MySQL using the Azure portal [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]++ This article describes how you can configure an Azure Database for MySQL server storage to grow without impacting the workload. When a server reaches the allocated storage limit, the server is marked as read-only. However, if you enable storage auto grow, the server storage increases to accommodate the growing data. For servers with less than 100 GB provisioned storage, the provisioned storage size is increased by 5 GB as soon as the free storage is below the greater of 1 GB or 10% of the provisioned storage. For servers with more than 100 GB of provisioned storage, the provisioned storage size is increased by 5% when the free storage space is below 10GB of the provisioned storage size. Maximum storage limits as specified [here](./concepts-pricing-tiers.md#storage) apply.
mysql How To Auto Grow Storage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-auto-grow-storage-powershell.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + This article describes how you can configure an Azure Database for MySQL server storage to grow without impacting the workload.
mysql How To Configure Audit Logs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-audit-logs-cli.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + You can configure the [Azure Database for MySQL audit logs](concepts-audit-logs.md) from the Azure CLI. ## Prerequisites
mysql How To Configure Audit Logs Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-audit-logs-portal.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + You can configure the [Azure Database for MySQL audit logs](concepts-audit-logs.md) and diagnostic settings from the Azure portal. ## Prerequisites
mysql How To Configure Private Link Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-private-link-cli.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + A Private Endpoint is the fundamental building block for private link in Azure. It enables Azure resources, like Virtual Machines (VMs), to communicate privately with private link resources. In this article, you will learn how to use the Azure CLI to create a VM in an Azure Virtual Network and an Azure Database for MySQL server with an Azure private endpoint. > [!NOTE]
mysql How To Configure Private Link Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-private-link-portal.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + A Private Endpoint is the fundamental building block for private link in Azure. It enables Azure resources, like Virtual Machines (VMs), to communicate privately with private link resources. In this article, you will learn how to use the Azure portal to create a VM in an Azure Virtual Network and an Azure Database for MySQL server with an Azure private endpoint. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
mysql How To Configure Server Logs In Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-server-logs-in-cli.md
Last updated 06/20/2022
# Configure and access slow query logs by using Azure CLI [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]++ You can download the Azure Database for MySQL slow query logs by using Azure CLI, the Azure command-line utility. ## Prerequisites
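Once the prerequisites are in place, the workflow is roughly as follows; the resource names are placeholders, and the log file name is a hypothetical value taken from the list output:

```bash
# List the slow query log files available on the server.
az mysql server-logs list \
  --resource-group myresourcegroup \
  -s mydemoserver   # -s = server name

# Download one of the listed log files by name.
az mysql server-logs download \
  --resource-group myresourcegroup \
  -s mydemoserver \
  --name mysql-slow-mydemoserver-2022062012.log
```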
mysql How To Configure Server Logs In Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-server-logs-in-portal.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + You can configure, list, and download the [Azure Database for MySQL slow query logs](concepts-server-logs.md) from the Azure portal. ## Prerequisites
mysql How To Configure Server Parameters Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-server-parameters-using-cli.md
# Configure server parameters in Azure Database for MySQL using the Azure CLI [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]++ You can list, show, and update configuration parameters for an Azure Database for MySQL server by using Azure CLI, the Azure command-line utility. A subset of engine configurations is exposed at the server-level and can be modified. >[!Note]
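The list/show/set workflow looks roughly like this with the Azure CLI (placeholder names; `-s` is the server name argument):

```bash
# List all server parameters that can be configured.
az mysql server configuration list \
  --resource-group myresourcegroup \
  -s mydemoserver

# Show the current value of a single parameter.
az mysql server configuration show \
  --resource-group myresourcegroup \
  -s mydemoserver \
  --name slow_query_log

# Update the parameter to a new value.
az mysql server configuration set \
  --resource-group myresourcegroup \
  -s mydemoserver \
  --name slow_query_log \
  --value ON
```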
mysql How To Configure Server Parameters Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-server-parameters-using-powershell.md
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + You can list, show, and update configuration parameters for an Azure Database for MySQL server using PowerShell. A subset of engine configurations is exposed at the server-level and can be modified.
mysql How To Configure Sign In Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-sign-in-azure-ad-authentication.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + This article walks you through the steps to configure Azure Active Directory access with Azure Database for MySQL, and how to connect using an Azure AD token. > [!IMPORTANT]
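A rough sketch of the token-based sign-in from a Linux client follows; the server name and Azure AD user are placeholders, and the token is requested for the oss-rdbms resource type:

```bash
# Sign in to Azure and request an access token for Azure Database for MySQL.
az login
TOKEN=$(az account get-access-token --resource-type oss-rdbms --query accessToken --output tsv)

# Connect as the Azure AD user; the token is used as the password and the
# client must allow the cleartext authentication plugin.
mysql --host mydemoserver.mysql.database.azure.com \
      --user myaaduser@mydemoserver \
      --enable-cleartext-plugin \
      --password=$TOKEN
```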
mysql How To Configure Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-ssl.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + Azure Database for MySQL supports connecting your Azure Database for MySQL server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against "man in the middle" attacks by encrypting the data stream between the server and your application. ## Step 1: Obtain SSL certificate
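As an example of the end result (server and admin names are placeholders), a client that has obtained the root certificate can require an encrypted connection like this:

```bash
# Download the root CA certificate referenced in this article.
wget https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem

# Connect with SSL required, validating the server against the downloaded CA.
mysql --host mydemoserver.mysql.database.azure.com \
      --user myadmin@mydemoserver \
      -p \
      --ssl-mode=REQUIRED \
      --ssl-ca=BaltimoreCyberTrustRoot.crt.pem
```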
mysql How To Connect Overview Single Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-connect-overview-single-server.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + The following document includes links to examples showing how to connect and query with Azure Database for MySQL Single Server. This guide also includes TLS recommendations and libraries that you can use to connect to the server in supported languages below. ## Quickstarts
mysql How To Connect Webapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-connect-webapp.md
Last updated 06/20/2022
# Connect an existing Azure App Service to Azure Database for MySQL server [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]++ This topic explains how to connect an existing Azure App Service to your Azure Database for MySQL server. ## Before you begin
mysql How To Connect With Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-connect-with-managed-identity.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + This article shows you how to use a user-assigned identity for an Azure Virtual Machine (VM) to access an Azure Database for MySQL server. Managed Service Identities are automatically managed by Azure and enable you to authenticate to services that support Azure AD authentication, without needing to insert credentials into your code. You learn how to:
mysql How To Connection String Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-connection-string-powershell.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + This article demonstrates how to generate a connection string for an Azure Database for MySQL server. You can use a connection string to connect to an Azure Database for MySQL from many different applications.
mysql How To Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-connection-string.md
Last updated 06/20/2022
# How to connect applications to Azure Database for MySQL [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]++ This topic lists the connection string types that are supported by Azure Database for MySQL, together with templates and examples. You might have different parameters and settings in your connection string. - To obtain the certificate, see [How to configure SSL](./how-to-configure-ssl.md).
mysql How To Create Manage Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-create-manage-server-portal.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + This article shows you how to manage your Azure Database for MySQL servers. Management tasks include compute and storage scaling, admin password reset, and viewing server details. > [!NOTE]
mysql How To Create Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-create-users.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-flexible-server](../includes/applies-to-mysql-single-flexible-server.md)] + This article describes how to create users for Azure Database for MySQL. > [!NOTE]
mysql How To Data Encryption Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-data-encryption-cli.md
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + Learn how to use the Azure CLI to set up and manage data encryption for your Azure Database for MySQL. ## Prerequisites for Azure CLI
mysql How To Data Encryption Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-data-encryption-portal.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + Learn how to use the Azure portal to set up and manage data encryption for your Azure Database for MySQL. ## Prerequisites for Azure CLI
mysql How To Data Encryption Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-data-encryption-troubleshoot.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + This article describes how to identify and resolve common issues that can occur in Azure Database for MySQL when configured with data encryption using a customer-managed key. ## Introduction
mysql How To Data Encryption Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-data-encryption-validation.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + This article helps you validate that data encryption using customer managed key for Azure Database for MySQL is working as expected. ## Check the encryption status
mysql How To Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-data-in-replication.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + This article describes how to set up [Data-in Replication](concepts-data-in-replication.md) in Azure Database for MySQL by configuring the source and replica servers. This article assumes that you have some prior experience with MySQL servers and databases. > [!NOTE]
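As a rough sketch only: the `mysql.az_replication_change_master` and `mysql.az_replication_start` stored procedure names and their parameter order are assumptions based on the linked Data-in Replication article, so verify them there before use. The replica-side setup then looks like this:

```bash
# On the Azure Database for MySQL replica, point replication at the external
# source and start it. Host, user, password, binary log file and position are
# placeholders taken from SHOW MASTER STATUS on the source server; the final
# argument is the CA certificate content (empty here, i.e. no SSL).
mysql --host mydemoserver.mysql.database.azure.com \
      --user myadmin@mydemoserver -p \
      --execute "CALL mysql.az_replication_change_master('source.example.com', 'repl_user', 'repl_password', 3306, 'mysql-bin.000002', 120, ''); CALL mysql.az_replication_start;"
```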
mysql How To Decide On Right Migration Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-decide-on-right-migration-tools.md
- Title: "Select the right tools for migration to Azure Database for MySQL"
-description: "This topic provides a decision table which helps customers in picking the right tools for migrating into Azure Database for MySQL"
------- Previously updated : 06/20/2022--
-# Select the right tools for migration to Azure Database for MySQL
--
-## Overview
-
-Migrations are multi-step projects that are tough to pull off. Migrating database servers across platforms involves more than data and schema migration. There are also several other components, such as server configuration parameters, networking, access control rules, etc., to move. These are required to ensure that the functionality of the database server in the new target platform mimics the source.
-
-For detailed information and use cases about migrating databases to Azure Database for MySQL, you can refer to the [Database Migration Guide](../migrate/mysql-on-premises-azure-db/01-mysql-migration-guide-intro.md). This document provides pointers that will help you successfully plan and execute a MySQL migration to Azure.
-
-In general, migrations can be categorized as either offline or online.
--- With an offline migration, the source server is taken offline and a dump and restore of the databases is performed on the target server. --- With an online migration (migration with minimal downtime), the source server allows updates, and the migration solution will take care of replicating the ongoing changes between the source and target server along with the initial dump and restore on the target. -
-If your application can afford some downtime, offline migrations are always the preferred choice, as they are simple and easy to execute. However, if your application can only afford minimal downtime, an online migration is the best choice. Migrations of the majority of OLTP systems, such as payment processing and e-commerce, fall into this category.
-
-## Decision table
-
-To help you with selecting the right tools for migrating to Azure Database for MySQL, consider the detail in the following table.
-
-| Scenarios | Recommended Tools | Links |
-|-|||
-| Offline Migrations to move databases >= 1 TB | Dump and Restore using **MyDumper/MyLoader** + High Compute VM | [Migrate large databases to Azure Database for MySQL using mydumper/myloader](concepts-migrate-mydumper-myloader.md) <br><br> [Best Practices for migrating large databases to Azure Database for MySQL](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/best-practices-for-migrating-large-databases-to-azure-database/ba-p/1362699)|
-| Offline Migrations to move databases < 1TB | If network bandwidth between source and target is good (e.g: Highspeed express route), use **Azure DMS** (database migration service) <br><br> **-OR-** <br><br> If you have low network bandwidth between source and Azure, use **Mydumper/Myloader + High compute VM** to take advantage of compression settings to efficiently move data over low speed networks <br><br> **-OR-** <br><br> Use **mysqldump** and **MySQL Workbench Export/Import** utility to perform offline migrations for smaller databases. | [Tutorial: Migrate MySQL to Azure Database for MySQL offline using DMS - Azure Database Migration Service](../../dms/tutorial-mysql-azure-mysql-offline-portal.md)<br><br> [Migrate Amazon RDS for MySQL to Azure Database for MySQL using MySQL Workbench](how-to-migrate-rds-mysql-workbench.md)<br><br> [Import and export - Azure Database for MySQL](concepts-migrate-import-export.md)|
-| Online Migration | **Mydumper/Myloader with Data-in replication** <br><br> **Mysqldump with data-in replication** can be considered for small databases (less than 100 GB). These methods are applicable to both external and intra-platform migrations. | [Configure Data-in replication - Azure Database for MySQL Flexible Server](../flexible-server/how-to-data-in-replication.md) <br><br> [Tutorial: Migrate Azure Database for MySQL – Single Server to Azure Database for MySQL – Flexible Server with minimal downtime](how-to-migrate-single-flexible-minimum-downtime.md) |
-|Single to Flexible Server Migrations | **Offline**: Custom shell script hosted in [GitHub](https://github.com/Azure/azure-mysql/tree/master/azuremysqltomysqlmigrate) This script also moves other server components such as security settings and server parameter configurations. <br><br>**Online**: **Mydumper/Myloader with Data-in replication** | [Migrate from Azure Database for MySQL - Single Server to Flexible Server in 5 easy steps!](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/migrate-from-azure-database-for-mysql-single-server-to-flexible/ba-p/2674057)<br><br> [Tutorial: Migrate Azure Database for MySQL – Single Server to Azure Database for MySQL – Flexible Server with minimal downtime](how-to-migrate-single-flexible-minimum-downtime.md)|
-
-## Next steps
-* [Migrate MySQL on-premises to Azure Database for MySQL](../migrate/mysql-on-premises-azure-db/01-mysql-migration-guide-intro.md)
-
-<br><br>
mysql How To Deny Public Network Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-deny-public-network-access.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + This article describes how you can configure an Azure Database for MySQL server to deny all public configurations and allow only connections through private endpoints to further enhance the network security. ## Prerequisites
mysql How To Double Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-double-encryption.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + Learn how to set up and manage Infrastructure double encryption for your Azure Database for MySQL. ## Prerequisites
mysql How To Fix Corrupt Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-fix-corrupt-database.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + Database corruption can cause downtime for your application. It's also critical to resolve corruption problems in time to avoid data loss. When database corruption occurs, you'll see this error in your server logs: `InnoDB: Database page corruption on disk or a failed.` In this article, you'll learn how to resolve database or table corruption problems. Azure Database for MySQL uses the InnoDB engine. It features automated corruption checking and repair operations. InnoDB checks for corrupt pages by running checksums on every page it reads. If it finds a checksum discrepancy, it will automatically stop the MySQL server.
mysql How To Major Version Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-major-version-upgrade.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + > [!NOTE] > This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we will remove it from this article.
mysql How To Manage Firewall Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-manage-firewall-using-cli.md
# Create and manage Azure Database for MySQL firewall rules by using the Azure CLI [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]++ Server-level firewall rules can be used to manage access to an Azure Database for MySQL Server from a specific IP address or a range of IP addresses. Using convenient Azure CLI commands, you can create, update, delete, list, and show firewall rules to manage your server. For an overview of Azure Database for MySQL firewalls, see [Azure Database for MySQL server firewall rules](./concepts-firewall-rules.md). Virtual Network (VNet) rules can also be used to secure access to your server. Learn more about [creating and managing Virtual Network service endpoints and rules using the Azure CLI](how-to-manage-vnet-using-cli.md).
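For example, a minimal sketch of the core firewall-rule commands (the server name, rule name, and IP range are placeholders):

```azurecli
# Allow a single client IP address through the server-level firewall (placeholder names).
az mysql server firewall-rule create \
  --resource-group myresourcegroup \
  --server-name mydemoserver \
  --name AllowMyClientIP \
  --start-ip-address 203.0.113.10 \
  --end-ip-address 203.0.113.10

# List the rules, then delete the rule when it's no longer needed.
az mysql server firewall-rule list --resource-group myresourcegroup --server-name mydemoserver
az mysql server firewall-rule delete --resource-group myresourcegroup --server-name mydemoserver --name AllowMyClientIP
```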
mysql How To Manage Firewall Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-manage-firewall-using-portal.md
Last updated 06/20/2022
# Create and manage Azure Database for MySQL firewall rules by using the Azure portal [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]++ Server-level firewall rules can be used to manage access to an Azure Database for MySQL Server from a specified IP address or a range of IP addresses. Virtual Network (VNet) rules can also be used to secure access to your server. Learn more about [creating and managing Virtual Network service endpoints and rules using the Azure portal](how-to-manage-vnet-using-portal.md).
mysql How To Manage Single Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-manage-single-server-cli.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + This article shows you how to manage your Single servers deployed in Azure. Management tasks include compute and storage scaling, admin password reset, and viewing server details. ## Prerequisites
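As an illustrative sketch of typical management tasks (server and resource group names are placeholders; the SKU and storage values are examples only):

```azurecli
# View server details (placeholder names).
az mysql server show --resource-group myresourcegroup --name mydemoserver

# Scale compute to 4 vCores (General Purpose, Gen 5) and grow storage to 100 GiB (value is in MB).
az mysql server update --resource-group myresourcegroup --name mydemoserver --sku-name GP_Gen5_4
az mysql server update --resource-group myresourcegroup --name mydemoserver --storage-size 102400

# Reset the admin password.
az mysql server update --resource-group myresourcegroup --name mydemoserver --admin-password '<new-secure-password>'
```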
mysql How To Manage Vnet Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-manage-vnet-using-cli.md
# Create and manage Azure Database for MySQL VNet service endpoints using Azure CLI [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]++ Virtual Network (VNet) services endpoints and rules extend the private address space of a Virtual Network to your Azure Database for MySQL server. Using convenient Azure CLI commands, you can create, update, delete, list, and show VNet service endpoints and rules to manage your server. For an overview of Azure Database for MySQL VNet service endpoints, including limitations, see [Azure Database for MySQL Server VNet service endpoints](concepts-data-access-and-security-vnet.md). VNet service endpoints are available in all supported regions for Azure Database for MySQL. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
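A condensed sketch of the two key steps, assuming an existing virtual network and subnet (all names are placeholders):

```azurecli
# Enable the Microsoft.Sql service endpoint on the subnet (placeholder names).
az network vnet subnet update \
  --resource-group myresourcegroup \
  --vnet-name myvnet \
  --name mysubnet \
  --service-endpoints Microsoft.Sql

# Create a VNet rule on the MySQL server that trusts that subnet.
az mysql server vnet-rule create \
  --resource-group myresourcegroup \
  --server-name mydemoserver \
  --name myvnetrule \
  --vnet-name myvnet \
  --subnet mysubnet
```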
mysql How To Manage Vnet Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-manage-vnet-using-portal.md
Last updated 06/20/2022
# Create and manage Azure Database for MySQL VNet service endpoints and VNet rules by using the Azure portal [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]++ Virtual Network (VNet) services endpoints and rules extend the private address space of a Virtual Network to your Azure Database for MySQL server. For an overview of Azure Database for MySQL VNet service endpoints, including limitations, see [Azure Database for MySQL Server VNet service endpoints](concepts-data-access-and-security-vnet.md). VNet service endpoints are available in all supported regions for Azure Database for MySQL. > [!NOTE]
mysql How To Migrate Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-migrate-online.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-flexible-server](../includes/applies-to-mysql-single-flexible-server.md)] + You can perform MySQL migrations to Azure Database for MySQL with minimal downtime by using Data-in replication, which limits the amount of downtime that is incurred by the application. You can also refer to [Database Migration Guide](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide) for detailed information and use cases about migrating databases to Azure Database for MySQL. This guide provides guidance that will lead the successful planning and execution of a MySQL migration to Azure.
mysql How To Migrate Rds Mysql Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-migrate-rds-mysql-data-in-replication.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-flexible-server](../includes/applies-to-mysql-single-flexible-server.md)] + > [!NOTE] > This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
mysql How To Migrate Rds Mysql Workbench https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-migrate-rds-mysql-workbench.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + You can use various utilities, such as MySQL Workbench Export/Import, Azure Database Migration Service (DMS), and MySQL dump and restore, to migrate Amazon RDS for MySQL to Azure Database for MySQL. However, using the MySQL Workbench Migration Wizard provides an easy and convenient way to move your Amazon RDS for MySQL databases to Azure Database for MySQL. With the Migration Wizard, you can conveniently select which schemas and objects to migrate. It also allows you to view server logs to identify errors and bottlenecks in real time. As a result, you can edit and modify tables or database structures and objects during the migration process when an error is detected, and then resume migration without having to restart from scratch.
mysql How To Move Regions Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-move-regions-portal.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + There are various scenarios for moving an existing Azure Database for MySQL server from one region to another. For example, you might want to move a production server to another region as part of your disaster recovery planning. You can use an Azure Database for MySQL [cross-region read replica](concepts-read-replicas.md#cross-region-replication) to complete the move to another region. To do so, first create a read replica in the target region. Next, stop replication to the read replica server to make it a standalone server that accepts both read and write traffic.
mysql How To Read Replicas Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-read-replicas-cli.md
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + In this article, you will learn how to create and manage read replicas in the Azure Database for MySQL service using the Azure CLI and REST API. To learn more about read replicas, see the [overview](concepts-read-replicas.md). ## Azure CLI
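For orientation, a minimal sketch of the replica lifecycle (server names are placeholders):

```azurecli
# Create a read replica of an existing source server (placeholder names).
az mysql server replica create \
  --resource-group myresourcegroup \
  --name mydemoreplica \
  --source-server mydemoserver

# List replicas of the source server.
az mysql server replica list --resource-group myresourcegroup --server-name mydemoserver

# Stop replication, turning the replica into a standalone read-write server.
az mysql server replica stop --resource-group myresourcegroup --name mydemoreplica
```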
mysql How To Read Replicas Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-read-replicas-portal.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + In this article, you will learn how to create and manage read replicas in the Azure Database for MySQL service using the Azure portal. ## Prerequisites
mysql How To Read Replicas Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-read-replicas-powershell.md
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + In this article, you learn how to create and manage read replicas in the Azure Database for MySQL service using PowerShell. To learn more about read replicas, see the [overview](concepts-read-replicas.md).
mysql How To Redirection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-redirection.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + This topic explains how to connect an application to your Azure Database for MySQL server with redirection mode. Redirection aims to reduce network latency between client applications and MySQL servers by allowing applications to connect directly to backend server nodes. ## Before you begin
mysql How To Restart Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restart-server-cli.md
# Restart Azure Database for MySQL server using the Azure CLI [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]++ This topic describes how you can restart an Azure Database for MySQL server. You may need to restart your server for maintenance reasons, which causes a short outage as the server performs the operation. The server restart will be blocked if the service is busy. For example, the service may be processing a previously requested operation such as scaling vCores.
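The restart itself is a single command; a sketch with placeholder names:

```azurecli
# Restart the server; this causes a short outage while the operation completes (placeholder names).
az mysql server restart --resource-group myresourcegroup --name mydemoserver
```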
mysql How To Restart Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restart-server-portal.md
Last updated 06/20/2022
# Restart Azure Database for MySQL server using Azure portal [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]++ This topic describes how you can restart an Azure Database for MySQL server. You may need to restart your server for maintenance reasons, which causes a short outage as the server performs the operation. The server restart will be blocked if the service is busy. For example, the service may be processing a previously requested operation such as scaling vCores.
mysql How To Restart Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restart-server-powershell.md
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + This topic describes how you can restart an Azure Database for MySQL server. You may need to restart your server for maintenance reasons, which causes a short outage during the operation.
mysql How To Restore Dropped Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restore-dropped-server.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + When a server is deleted, the database server backup is retained for up to five days in the service. The database backup can be accessed and restored only from the Azure subscription where the server originally resided. Follow the recommended steps below to recover a deleted MySQL server resource within five days of server deletion. These steps work only if the backup for the server is still available and hasn't been deleted from the system. ## Pre-requisites
mysql How To Restore Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restore-server-cli.md
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + Azure Database for MySQL servers are backed up periodically to enable Restore features. Using this feature you may restore the server and all its databases to an earlier point-in-time, on a new server. ## Prerequisites
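For orientation, a point-in-time restore sketch (server names and the timestamp are placeholders; the restore always lands on a new server):

```azurecli
# Restore the source server to a new server at a specific UTC point in time (placeholder values).
az mysql server restore \
  --resource-group myresourcegroup \
  --name mydemoserver-restored \
  --source-server mydemoserver \
  --restore-point-in-time "2022-06-15T13:10:00Z"
```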
mysql How To Restore Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restore-server-portal.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + ## Backup happens automatically Azure Database for MySQL servers are backed up periodically to enable Restore features. Using this feature you may restore the server and all its databases to an earlier point-in-time, on a new server.
mysql How To Restore Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restore-server-powershell.md
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + Azure Database for MySQL servers are backed up periodically to enable restore features. Using this feature, you may restore the server and all its databases to an earlier point-in-time, on a new server.
mysql How To Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-server-parameters.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + Azure Database for MySQL supports configuration of some server parameters. This article describes how to configure these parameters by using the Azure portal. Not all server parameters can be adjusted. >[!Note]
mysql How To Stop Start Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-stop-start-server.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + > [!IMPORTANT] > When you **Stop** the server it remains in that state for the next 7 days in a stretch. If you do not manually **Start** it during this time, the server will automatically be started at the end of 7 days. You can choose to **Stop** it again if you are not using the server.
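A minimal sketch of stopping and starting a server from the Azure CLI (server and resource group names are placeholders):

```azurecli
# Stop the server (placeholder names).
az mysql server stop --resource-group myresourcegroup --name mydemoserver

# Start it again before the 7-day automatic restart.
az mysql server start --resource-group myresourcegroup --name mydemoserver
```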
mysql How To Tls Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-tls-configurations.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + This article describes how you can configure an Azure Database for MySQL server to enforce a minimum TLS version for connections and deny all connections that use a TLS version lower than the configured minimum, thereby enhancing network security. You can enforce the TLS version used to connect to your Azure Database for MySQL. Customers now have a choice to set the minimum TLS version for their database server. For example, setting the minimum TLS version to 1.0 means you allow clients to connect using TLS 1.0, 1.1, and 1.2. Alternatively, setting it to 1.2 means that you only allow clients connecting using TLS 1.2+, and all incoming connections with TLS 1.0 and TLS 1.1 will be rejected.
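A minimal sketch of enforcing TLS 1.2 as the minimum version (placeholder names; the `--minimal-tls-version` parameter is assumed to be available in your Azure CLI version):

```azurecli
# Require TLS 1.2 or later for all client connections (placeholder names).
az mysql server update \
  --resource-group myresourcegroup \
  --name mydemoserver \
  --minimal-tls-version TLS1_2

# Check the effective setting.
az mysql server show --resource-group myresourcegroup --name mydemoserver --query minimalTlsVersion
```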
mysql How To Troubleshoot Common Connection Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-troubleshoot-common-connection-issues.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-flexible-server](../includes/applies-to-mysql-single-flexible-server.md)] + Connection problems may be caused by a variety of things, including: * Firewall settings
mysql How To Troubleshoot Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-troubleshoot-common-errors.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-flexible-server](../includes/applies-to-mysql-single-flexible-server.md)] + Azure Database for MySQL is a fully managed service powered by the community version of MySQL. The MySQL experience in a managed service environment may differ from running MySQL in your own environment. In this article, you'll see some of the common errors users may encounter while migrating to or developing on Azure Database for MySQL for the first time. ## Common Connection Errors
mysql How To Troubleshoot Connectivity Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-troubleshoot-connectivity-issues.md
Last updated 07/22/2022
[!INCLUDE[applies-to-mysql-single-flexible-server](../includes/applies-to-mysql-single-flexible-server.md)] + The MySQL Community Edition manages connections using one thread per connection. As a result, each user connection gets a dedicated operating system thread in the mysqld process. There are potential issues associated with this type of connection handling. For example, memory use is relatively high if there's a large number of user connections, even if they're idle connections. In addition, there's a higher level of internal server contention and context switching overhead when working with thousands of user connections.
mysql How To Troubleshoot High Cpu Utilization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-troubleshoot-high-cpu-utilization.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-flexible-server](../includes/applies-to-mysql-single-flexible-server.md)] + Azure Database for MySQL provides a range of metrics that you can use to identify resource bottlenecks and performance issues on the server. To determine whether your server is experiencing high CPU utilization, monitor metrics such as "Host CPU percent", "Total Connections", "Host Memory Percent", and "IO Percent". At times, viewing a combination of these metrics will provide insights into what might be causing the increased CPU utilization on your Azure Database for MySQL server. For example, consider a sudden surge in connections that initiates a surge of database queries that causes CPU utilization to shoot up.
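These platform metrics can also be pulled programmatically; a rough sketch using Azure Monitor (the metric names `cpu_percent` and `active_connections` are assumptions based on the single server metric set, and the server names are placeholders):

```azurecli
# Look up the server's resource ID, then query recent CPU and connection metrics (placeholder names).
serverId=$(az mysql server show --resource-group myresourcegroup --name mydemoserver --query id --output tsv)

az monitor metrics list \
  --resource "$serverId" \
  --metric cpu_percent active_connections \
  --interval PT5M \
  --output table
```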
mysql How To Troubleshoot Low Memory Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-troubleshoot-low-memory-issues.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-flexible-server](../includes/applies-to-mysql-single-flexible-server.md)] + To help ensure that a MySQL database server performs optimally, it's very important to have the appropriate memory allocation and utilization. By default, when you create an instance of Azure Database for MySQL, the available physical memory is dependent on the tier and size you select for your workload. In addition, memory is allocated for buffers and caches to improve database operations. For more information, see [How MySQL Uses Memory](https://dev.mysql.com/doc/refman/5.7/en/memory-use.html). Note that the Azure Database for MySQL service consumes memory to achieve as high a cache hit ratio as possible. As a result, memory utilization can often hover between 80-90% of the available physical memory of an instance. Unless there's an issue with the progress of the query workload, it isn't a concern. However, you may run into out-of-memory issues if, for example, you have:
mysql How To Troubleshoot Query Performance New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-troubleshoot-query-performance-new.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-flexible-server](../includes/applies-to-mysql-single-flexible-server.md)] + Query performance can be impacted by multiple factors, so it's first important to look at the scope of the symptoms you're experiencing in your Azure Database for MySQL server. For example, is query performance slow for:
mysql How To Troubleshoot Query Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-troubleshoot-query-performance.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-flexible-server](../includes/applies-to-mysql-single-flexible-server.md)] + **EXPLAIN** is a handy tool that can help you optimize queries. You can use an EXPLAIN statement to get information about how SQL statements are run. The following shows example output from running an EXPLAIN statement. ```sql
mysql How To Troubleshoot Replication Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-troubleshoot-replication-latency.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-flexible-server](../includes/applies-to-mysql-single-flexible-server.md)] + The [read replica](concepts-read-replicas.md) feature allows you to replicate data from an Azure Database for MySQL server to a read-only replica server. You can scale out workloads by routing read and reporting queries from the application to replica servers. This setup reduces the pressure on the source server. It also improves overall performance and latency of the application as it scales. Replicas are updated asynchronously by using the MySQL engine's native binary log (binlog) file position-based replication technology. For more information, see [MySQL binlog file position-based replication configuration overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html).
mysql How To Troubleshoot Sys Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-troubleshoot-sys-schema.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-flexible-server](../includes/applies-to-mysql-single-flexible-server.md)] + The MySQL performance_schema, first available in MySQL 5.5, provides instrumentation for many vital server resources such as memory allocation, stored programs, metadata locking, etc. However, the performance_schema contains more than 80 tables, and getting the necessary information often requires joining tables within the performance_schema, and tables from the information_schema. Building on both performance_schema and information_schema, the sys_schema provides a powerful collection of [user-friendly views](https://dev.mysql.com/doc/refman/5.7/en/sys-schema-views.html) in a read-only database and is fully enabled in Azure Database for MySQL version 5.7. :::image type="content" source="./media/how-to-troubleshoot-sys-schema/sys-schema-views.png" alt-text="views of sys_schema":::
mysql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/overview.md
Previously updated : 06/20/2022 Last updated : 06/20/2022 # What is Azure Database for MySQL?
Flexible servers are best suited for
For a detailed overview of the flexible server deployment mode, refer to [flexible server overview](../flexible-server/overview.md). For the latest updates on Flexible Server, refer to [What's new in Azure Database for MySQL - Flexible Server](../flexible-server/whats-new.md).
-### Azure Database for MySQL - Single Server
+### Azure Database for MySQL - Single Server
Azure Database for MySQL Single Server is a fully managed database service designed for minimal customization. The single server platform is designed to handle most of the database management functions such as patching, backups, high availability, and security with minimal user configuration and control. The architecture is optimized for built-in high availability with 99.99% availability on a single availability zone. It supports the community versions of MySQL 5.6 (retired), 5.7, and 8.0. The service is generally available today in a wide variety of [Azure regions](https://azure.microsoft.com/global-infrastructure/services/).
-Single servers are best suited **only for existing applications already leveraging single server**. For all new developments or migrations, Flexible Server would be the recommended deployment option. To learn about the differences between Flexible Server and Single Server deployment options, refer [select the right deployment option for you](select-right-deployment-type.md) documentation.
+Single servers are best suited **only for existing applications already leveraging single server**. For all new developments or migrations, Flexible Server would be the recommended deployment option. To learn about the differences between Flexible Server and Single Server deployment options, refer to the [select the right deployment option for you](select-right-deployment-type.md) documentation.
For a detailed overview of the single server deployment mode, refer to [single server overview](single-server-overview.md). For the latest updates on Single Server, refer to [What's new in Azure Database for MySQL - Single Server](single-server-whats-new.md).
mysql Partners Migration Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/partners-migration-mysql.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-flexible-server](../includes/applies-to-mysql-single-flexible-server.md)] + To broadly support your Azure Database for MySQL solution, you can choose from a wide variety of industry-leading partners and tools. This article highlights Microsoft partners with migration solutions that support Azure Database for MySQL. ## Migration partners
mysql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/policy-reference.md
Last updated 09/12/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + This page is an index of [Azure Policy](../../governance/policy/overview.md) built-in policy definitions for Azure Database for MySQL. For additional Azure Policy built-ins for other services, see [Azure Policy built-in definitions](../../governance/policy/samples/built-in-policies.md). The name of each built-in policy definition links to the policy definition in the Azure portal. Use the link in the **Version** column to view the source on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
mysql Quickstart Create Mysql Server Database Using Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/quickstart-create-mysql-server-database-using-arm-template.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + Azure Database for MySQL is a managed service that you use to run, manage, and scale highly available MySQL databases in the cloud. In this quickstart, you use an Azure Resource Manager template (ARM template) to create an Azure Database for MySQL server with virtual network integration. You can create the server in the Azure portal, Azure CLI, or Azure PowerShell. [!INCLUDE [About Azure Resource Manager](../../../includes/resource-manager-quickstart-introduction.md)]
mysql Quickstart Create Mysql Server Database Using Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/quickstart-create-mysql-server-database-using-azure-cli.md
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + > [!TIP] > Consider using the simpler [az mysql up](/cli/azure/mysql#az-mysql-up) Azure CLI command (currently in preview). Try out the [quickstart](./quickstart-create-server-up-azure-cli.md).
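As a compact sketch of the classic flow without `az mysql up` (the names, location, and credentials below are placeholders):

```azurecli
# Create a resource group, then a General Purpose Gen 5 single server running MySQL 5.7 (placeholder values).
az group create --name myresourcegroup --location westus2

az mysql server create \
  --resource-group myresourcegroup \
  --name mydemoserver \
  --location westus2 \
  --admin-user myadmin \
  --admin-password '<secure-password>' \
  --sku-name GP_Gen5_2 \
  --version 5.7
```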
mysql Quickstart Create Mysql Server Database Using Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/quickstart-create-mysql-server-database-using-azure-portal.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + Azure Database for MySQL is a managed service that you use to run, manage, and scale highly available MySQL databases in the cloud. This quickstart shows you how to use the Azure portal to create an Azure Database for MySQL single server. It also shows you how to connect to the server. ## Prerequisites
mysql Quickstart Create Mysql Server Database Using Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/quickstart-create-mysql-server-database-using-azure-powershell.md
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + This quickstart describes how to use PowerShell to create an Azure Database for MySQL server in an Azure resource group. You can use PowerShell to create and manage Azure resources interactively or in scripts.
mysql Quickstart Create Server Up Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/quickstart-create-server-up-azure-cli.md
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + > [!IMPORTANT] > The [az mysql up](/cli/azure/mysql#az-mysql-up) Azure CLI command is in preview.
mysql Quickstart Mysql Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/quickstart-mysql-github-actions.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-flexible-server](../includes/applies-to-mysql-single-flexible-server.md)] + Get started with [GitHub Actions](https://docs.github.com/en/actions) by using a workflow to deploy database updates to [Azure Database for MySQL](https://azure.microsoft.com/services/mysql/). ## Prerequisites
mysql Reference Stored Procedures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/reference-stored-procedures.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + Stored procedures are available on Azure Database for MySQL servers to help manage your MySQL server. This includes managing your server's connections, queries, and setting up Data-in Replication. ## Data-in Replication stored procedures
mysql Sample Scripts Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/sample-scripts-azure-cli.md
Last updated 06/20/2022
# Azure CLI samples for Azure Database for MySQL [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]++ The following table includes links to sample Azure CLI scripts for Azure Database for MySQL. | Sample link | Description |
mysql Sample Scripts Java Connection Pooling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/sample-scripts-java-connection-pooling.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + The below sample code illustrates connection pooling in Java. ```java
mysql Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/security-controls-policy.md
Last updated 09/12/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + [Regulatory Compliance in Azure Policy](../../governance/policy/concepts/regulatory-compliance.md) provides Microsoft created and managed initiative definitions, known as _built-ins_, for the **compliance domains** and **security controls** related to different compliance standards. This page lists the **compliance domains** and **security controls** for Azure Database for MySQL. You can assign the built-ins for a **security control** individually to help make your Azure resources compliant with the specific standard.
mysql Select Right Deployment Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/select-right-deployment-type.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-flexible-server](../includes/applies-to-mysql-single-flexible-server.md)] + With Azure, your MySQL server workloads can run in a hosted virtual machine infrastructure as a service (IaaS) or as a hosted platform as a service (PaaS). PaaS has two deployment options, and there are service tiers within each deployment option. When you choose between IaaS and PaaS, you must decide if you want to manage your database, apply patches, backups, security, monitoring, scaling or if you want to delegate these operations to Azure. When making your decision, consider the following two options:
mysql Single Server Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/single-server-overview.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + [Azure Database for MySQL](overview.md) powered by the MySQL community edition is available in two deployment modes: - Flexible Server
mysql Single Server Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/single-server-whats-new.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + Azure Database for MySQL is a relational database service in the Microsoft cloud. The service is based on the [MySQL Community Edition](https://www.mysql.com/products/community/) (available under the GPLv2 license) database engine and supports versions 5.6 (retired), 5.7, and 8.0. [Azure Database for MySQL - Single Server](./overview.md#azure-database-for-mysqlsingle-server) is a deployment mode that provides a fully managed database service with minimal requirements for database customization. The Single Server platform is designed to handle most database management functions such as patching, backups, high availability, and security, all with minimal user configuration and control. This article summarizes new releases and features in Azure Database for MySQL - Single Server beginning in January 2021. Listings appear in reverse chronological order, with the most recent updates first.
mysql Tutorial Design Database Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/tutorial-design-database-using-cli.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + Azure Database for MySQL is a relational database service in the Microsoft cloud based on MySQL Community Edition database engine. In this tutorial, you use Azure CLI (command-line interface) and other utilities to learn how to: > [!div class="checklist"]
mysql Tutorial Design Database Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/tutorial-design-database-using-portal.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + Azure Database for MySQL is a managed service that enables you to run, manage, and scale highly available MySQL databases in the cloud. Using the Azure portal, you can easily manage your server and design a database. In this tutorial, you use the Azure portal to learn how to:
mysql Tutorial Design Database Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/tutorial-design-database-using-powershell.md
Last updated 06/20/2022
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + Azure Database for MySQL is a relational database service in the Microsoft cloud based on MySQL Community Edition database engine. In this tutorial, you use PowerShell and other utilities to learn how to:
mysql Tutorial Provision Mysql Server Using Azure Resource Manager Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/tutorial-provision-mysql-server-using-azure-resource-manager-templates.md
[!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] + The [Azure Database for MySQL REST API](/rest/api/mysql/) enables DevOps engineers to automate and integrate provisioning, configuration, and operations of managed MySQL servers and databases in Azure. The API allows the creation, enumeration, management, and deletion of MySQL servers and databases on the Azure Database for MySQL service. Azure Resource Manager leverages the underlying REST API to declare and program the Azure resources required for deployments at scale, aligning with the infrastructure-as-code concept. The template parameterizes the Azure resource name, SKU, network, firewall configuration, and settings, allowing it to be created one time and used multiple times. Azure Resource Manager templates can be easily created using the [Azure portal](../../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md) or [Visual Studio Code](../../azure-resource-manager/templates/quickstart-create-templates-use-visual-studio-code.md?tabs=CLI). They enable application packaging, standardization, and deployment automation, which can be integrated into the DevOps CI/CD pipeline. For instance, if you are looking to quickly deploy a Web App with an Azure Database for MySQL backend, you can perform the end-to-end deployment using this [QuickStart template](https://azure.microsoft.com/resources/templates/webapp-managed-mysql/) from the GitHub gallery.
mysql Whats Happening To Mysql Single Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/whats-happening-to-mysql-single-server.md
+
+ Title: What's happening to Azure Database for MySQL Single Server?
+description: The Azure Database for MySQL Single Server service is being deprecated.
+++++++ Last updated : 09/30/2022++
+# What's happening to Azure Database for MySQL - Single Server?
++
+Hello! We have news to share - **Azure Database for MySQL - Single Server is on the retirement path**.
+
+After years of evolving the Azure Database for MySQL - Single Server service, it can no longer handle all the new features, functions, and security needs. We recommend upgrading to Azure Database for MySQL - Flexible Server.
+
+Azure Database for MySQL - Flexible Server is a fully managed production-ready database service designed for more granular control and flexibility over database management functions and configuration settings. For more information about Flexible Server, visit **[Azure Database for MySQL - Flexible Server](../flexible-server/overview.md)**.
+
+If you currently have an Azure Database for MySQL - Single Server service hosting production servers, we're glad to let you know that you can migrate your Azure Database for MySQL - Single Server servers to the Azure Database for MySQL - Flexible Server service.
+
+However, we know change can be disruptive to any environment, so we want to help you with this transition. Review the different ways of using the Azure Database Migration Service to [migrate from Azure Database for MySQL - Single Server to MySQL - Flexible Server](#migrate-from-single-server-to-flexible-server).
+
+## Migrate from Single Server to Flexible Server
+
+Learn how to migrate from Azure Database for MySQL - Single Server to Azure Database for MySQL - Flexible Server using the Azure Database Migration Service (DMS).
+
+| Scenario | Tool(s) | Details |
+|---|---|---|
+| Offline | Database Migration Service (DMS) and the Azure portal | [Tutorial: DMS with the Azure portal (offline)](../../dms/tutorial-mysql-azure-single-to-flex-offline-portal.md) |
+| Online | Database Migration Service (DMS) and the Azure portal | [Tutorial: DMS with the Azure portal (online)](../../dms/tutorial-mysql-Azure-single-to-flex-online-portal.md) |
+
+For more information on migrating from Single Server to Flexible Server, visit [Select the right tools for migration to Azure Database for MySQL](../migrate/how-to-decide-on-right-migration-tools.md).
+
+## Migration Eligibility
+
+To upgrade to Azure Database for MySQL Flexible Server, it's important to know when you're eligible to migrate your single server. Find the migration eligibility criteria in the table below.
+
+| Single Server configuration not supported for migration | How and when to migrate? |
+||--|
+| Single servers with Private Link enabled | Private Link is on the road map for next year. You can also choose to migrate now and perform VNet injection via a point-in-time restore operation to move to the private access network connectivity method. |
+| Single servers with Cross-Region Read Replicas enabled | Cross-Region Read Replicas for flexible servers are on the road map for later this year (for paired region) and next year (for any cross-region), after which you can migrate your single server. |
+| Single server deployed in regions where flexible server isn't supported (Learn more about regions [here](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?regions=all&products=mysql)). | Azure Database Migration Service (DMS) supports cross-region migration. Deploy your target flexible server in a suitable region and migrate using DMS. |
+
+## Frequently Asked Questions (FAQs)
+
+**Q. Why is Azure Database for MySQL - Single Server being retired?**
+
+A. Azure Database for MySQL - Single Server became Generally Available (GA) in 2018. However, given customer feedback and new advancements in the computation, availability, scalability and performance capabilities in the Azure database landscape, the Single Server offering needs to be retired and upgraded with a new architecture, Azure Database for MySQL Flexible Server, to bring you the best of Azure's open-source database platform.
+
+**Q. Why am I being asked to migrate to Azure Database for MySQL - Flexible Server?**
+
+A. [Azure Database for MySQL - Flexible Server](https://azure.microsoft.com/pricing/details/mysql/flexible-server/#overview) is the best platform for running all your MySQL workloads on Azure. Azure Database for MySQL - Flexible Server is economical, provides better performance across all service tiers, and offers more ways to control your costs, along with cheaper and faster disaster recovery:
+
+- More ways to optimize costs, including support for burstable tier compute options.
+- Improved performance for business-critical production workloads that require low latency, high concurrency, fast failover, and high scalability.
+- Improved uptime with the ability to configure a hot standby on the same or a different zone, and a one-hour time window for planned server maintenance.
+
+**Q. How soon do I need to migrate my single server to a flexible server?**
+
+A. Azure Database for MySQL - Single Server is scheduled for retirement by **September 16, 2024**, so we strongly recommend migrating your single server to a flexible server at your earliest opportunity to ensure ample time to run through the migration lifecycle, apply the benefits offered by Flexible Server, and ensure the continuity of your business.
+
+**Q. What happens to my existing Azure Database for MySQL Single Server instances?**
+
+A. Your existing Azure Database for MySQL Single Server workloads will continue to function as before and will be officially supported until the sunset date. However, no new updates will be released for Single Server, and we strongly advise you to start migrating to Azure Database for MySQL Flexible Server as soon as possible.
+
+**Q. Can I choose to continue running Single Server beyond the sunset date?**
+
+A. Unfortunately, we don't plan to support Single Server beyond the sunset date of **September 16, 2024**, and hence we strongly advise that you start planning your migration as soon as possible.
+
+**Q. After the Single Server retirement announcement, what if I still need to create a new single server to meet my business needs?**
+
+A. We aren't stopping new single server creations immediately, so you can provision new single servers to meet your business needs. However, we strongly recommend that you migrate to Flexible Server as soon as possible so that you can start managing your Flexible Server fleet instead.
+
+**Q. Are there additional costs associated with performing the migration?**
+
+A. When running the migration, you pay for the target flexible server and the source single server. The configuration and compute of the target flexible server determine the additional costs incurred. For more information, see [Pricing](https://azure.microsoft.com/pricing/details/mysql/flexible-server/). Once you've decommissioned the source single server after a successful migration, you only pay for your running flexible server. There are no additional costs for running the migration through the migration tooling.
+
+**Q. Will my billing be affected by running Flexible Server as compared to Single Server?**
+
+A. If you select same zone or zone redundant high availability for the target flexible server, your bill will be higher than it was on single server. Same zone or zone redundant high availability requires a hot standby server to be spun up along with storing redundant backup and hence the added cost. This architecture enables reduced downtime during unplanned outages and planned maintenance. In addition, depending on your workload, flexible servers can provide much better performance over single servers, whereby you may be able to run your workload with a lower SKU on flexible servers, and hence your overall cost may be similar to that of a single server.
+
+**Q. Do I need to incur downtime for migrating Single Server to Flexible Server?**
+
+A. To limit any downtime you might incur, perform an online migration to Flexible Server, which provides minimal downtime.
+
+**Q. Will there be future updates to Single Server to support latest MySQL versions?**
+
+A. The last minor version upgrade to Single Server version 8.0 will be 8.0.28. Consider migrating to Flexible Server to use the benefits of the latest version upgrades.
+
+**Q. How does the flexible server's 99.99% availability SLA differ from that of single server?**
+
+A. Flexible server's zone-redundant deployment provides 99.99% availability with zonal-level resiliency whereas the single server provides resiliency in a single availability zone. Flexible Server's High Availability (HA) architecture deploys a warm standby with redundant compute and storage (with each site's data stored in 3x copies) as compared to single server's HA architecture, which doesn't have a passive hot standby to help recover from zonal failures. The flexible server's HA architecture enables reduced downtime during unplanned outages and planned maintenance.
+
+**Q. What migration options are available to help me migrate my single server to a flexible server?**
+
+A. You can use Azure Database Migration Service (DMS) to run [online](../../dms/tutorial-mysql-Azure-single-to-flex-online-portal.md) or [offline](../../dms/tutorial-mysql-azure-single-to-flex-offline-portal.md) migrations (recommended). In addition, you can use community tools such as [mydumper/myloader together with Data-in replication](../migrate/how-to-migrate-single-flexible-minimum-downtime.md) to perform migrations.
+
+**Q. My single server is deployed in a region that doesn't support flexible server. How should I proceed with migration?**
+
+A. Azure Database Migration Service supports cross-region migration, so you can select a suitable region for your target flexible server and then proceed with DMS migration.
+
+**Q. I have private link configured for my single server, and this feature is not currently supported in Flexible Server. How do I migrate?**
+
+A. Flexible Server support for private link is on our road map as our highest priority. Launch of the feature is planned in Q2 2023, and you have ample time to initiate your Single Server to Flexible Server migrations with private link configured. You can also choose to migrate now and perform VNet injection via a point-in-time restore operation to move to the private access network connectivity method.
+
+**Q. I have cross-region read replicas configured for my single server, and this feature is not currently supported in Flexible Server. How do I migrate?**
+
+A. Flexible Server support for cross-region read replicas is on our roadmap as our highest priority. Launch of the feature is planned in Q4 2022 (for paired region) and Q1 2023 (for any cross-region), and you have ample time to initiate your Single Server to Flexible Server migrations with cross-region read replicas configured.
+
+**Q. Is there an option to roll back a Single Server to Flexible Server migration?**
+
+A. You can perform any number of test migrations, and after gaining confidence through testing, perform the final migration. A test migration doesn't affect the source single server, which remains operational and continues replicating until you perform the actual migration. If there are any errors during the test migration, you can choose to postpone the final migration and keep your source server running. You can then reattempt the final migration after you resolve the errors. After you've performed a final migration to Flexible Server and the source single server has been shut down, you can't perform a rollback from Flexible Server to Single Server.
+
+**Q. The size of my database is greater than 1 TB, so how should I proceed with an Azure Database Migration Service initiated migration?**
+
+A. To support Azure Database Migration Service (DMS) migrations of databases that are 1 TB+, raise a support ticket with Azure Database Migration Service to scale up the migration agent for your 1 TB+ database migrations.
+
+**Q. Is cross-region migration supported?**
+
+A. Azure Database Migration Service supports cross-region migrations, so you can migrate your single server to a flexible server that is deployed in a different region using DMS.
+
+**Q. Is cross-subscription migration supported?**
+
+A. Azure Database Migration Service supports cross-subscription migrations, so you can migrate your single server to a flexible server that deployed on a different subscription using DMS.
+
+**Q. Is cross-resource group migration supported?**
+
+A. Azure Database Migration Service supports cross-resource group migrations, so you can migrate your single server to a flexible server that is deployed in a different resource group using DMS.
+
+**Q. Is there cross-version support?**
+
+A. Yes, migration from lower version MySQL servers (v5.6 and above) to higher versions is supported through Azure Database Migration Service migrations.
+
+**Q. I have further questions on retirement. How can I get assistance around it?**
+
+**A.** If you have questions, get answers from community experts in [Microsoft Q&A](https://aka.ms/microsoft-azure-mysql-qa). If you have a support plan and you need technical help, create a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest):
+
+1. For *Summary*, type a description of your issue.
+2. For *Issue type*, select **Technical**.
+3. For *Subscription*, select your subscription.
+4. For *Service*, select **My services**.
+5. For *Service type*, select **Azure Database for MySQL Single Server**.
+6. For *Resource*, select your resource.
+7. For *Problem type*, select **Migration**.
+8. For *Problem subtype*, select **Migrating from single to flexible server**.
+
+You can also reach out to the Azure Database for MySQL product team at <AskAzureDBforMySQL@service.microsoft.com>.
+
+> [!Warning]
+> This article is not for Azure Database for MySQL - Flexible Server users. It is for Azure Database for MySQL - Single Server customers who need to upgrade to MySQL - Flexible Server.
+
+Visit the **[FAQ](../../dms/faq-mysql-single-to-flex.md)** for information about using the Azure Database Migration Service (DMS) for Azure Database for MySQL - Single Server to Flexible Server migrations.
+
+We know migrating services can be a frustrating experience, and we apologize in advance for any inconvenience this might cause you. You can choose what scenario best works for you and your environment.
+
+## Next steps
+
+- [Frequently Asked Questions about DMS migrations](../../dms/faq-mysql-single-to-flex.md)
+- [Select the right tools for migration to Azure Database for MySQL](../migrate/how-to-decide-on-right-migration-tools.md)
+- [What is Flexible Server](../flexible-server/overview.md)
network-watcher Network Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-insights-overview.md
Diagnostic Toolkit provides access to all the diagnostic features available for
By default, all networking resources are visible in Network Insights. Customers can select a resource type to view resource health and metrics (if available), subscription details, location, and so on. A subset of networking resources has been _Onboarded_. For Onboarded resources, customers have access to a resource-specific topology view and a built-in metrics workbook. These out-of-the-box experiences make it easier to explore resource metrics and troubleshoot issues. Resources that have been onboarded are:
-* Virtual WAN
-* Application Gateway
-* Load Balancer
-* ExpressRoute
-* Private Link
-* NAT Gateway
-* Public IP
-* NIC
+- Application Gateway
+- Azure ExpressRoute
+- Azure Firewall
+- Azure Private Link
+- Load Balancer
+- Local Network Gateway
+- Network Interface
+- Network Security Groups
+- Public IP addresses
+- Route Table / UDR
+- Traffic Manager
+- Virtual Network
+- Virtual Network NAT
+- Virtual WAN
+- ER/VPN Gateway
+- Virtual Hub
## Troubleshooting For general troubleshooting guidance, see the dedicated workbook-based insights [troubleshooting article](../azure-monitor/insights/troubleshoot-workbooks.md).
postgresql Quickstart Create Server Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/quickstart-create-server-bicep.md
Previously updated : 05/12/2022 Last updated : 09/21/2022
-# Quickstart: Use a Bicep to create an Azure Database for PostgreSQL - Flexible Server
+# Quickstart: Use a Bicep file to create an Azure Database for PostgreSQL - Flexible Server
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-Flexible server is a managed service that you use to run, manage, and scale highly available PostgreSQL databases in the cloud. You can use [Bicep](../../azure-resource-manager/bicep/overview.md) to provision a PostgreSQL Flexible Server to deploy multiple servers or multiple databases on a server.
+In this quickstart, you'll learn how to use a Bicep file to create an Azure Database for PostgreSQL - Flexible Server.
+
+Flexible server is a managed service that you use to run, manage, and scale highly available PostgreSQL databases in the cloud. You can use Bicep to provision a PostgreSQL Flexible Server to deploy multiple servers or multiple databases on a server.
[!INCLUDE [About Bicep](../../../includes/resource-manager-quickstart-bicep-introduction.md)]
An Azure account with an active subscription. [Create one for free](https://azur
An Azure Database for PostgreSQL Server is the parent resource for one or more databases within a region. It provides the scope for management policies that apply to its databases: login, firewall, users, roles, and configurations.
-Create a _postgres-flexible-server.bicep_ file and copy the following Bicep into it.
+Create a _main.bicep_ file and copy the following Bicep into it.
```bicep param administratorLogin string
These resources are defined in the Bicep file:
## Deploy the Bicep file
-Select **Try it** from the following PowerShell code block to open Azure Cloud Shell.
-
-```azurepowershell-interactive
-$serverName = Read-Host -Prompt "Enter a name for the new Azure Database for PostgreSQL server"
-$resourceGroupName = Read-Host -Prompt "Enter a name for the new resource group where the server will exist"
-$location = Read-Host -Prompt "Enter an Azure region (for example, centralus) for the resource group"
-$adminUser = Read-Host -Prompt "Enter the Azure Database for PostgreSQL server's administrator account name"
-$adminPassword = Read-Host -Prompt "Enter the administrator password" -AsSecureString
+Use Azure CLI or Azure PowerShell to deploy the Bicep file.
-New-AzResourceGroup -Name $resourceGroupName -Location $location # Use this command when you need to create a new resource group for your deployment
-New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName `
- -TemplateFile "postgres-flexible-server.bicep" `
- -serverName $serverName `
- -administratorLogin $adminUser `
- -administratorLoginPassword $adminPassword
+# [CLI](#tab/CLI)
-Read-Host -Prompt "Press [ENTER] to continue ..."
+```azurecli
+az group create --name exampleRG --location centralus
+az deployment group create --resource-group exampleRG --template-file main.bicep
```
-## Review deployed resources
-
-Follow these steps to verify if your server was created in Azure.
-
-# [Azure portal](#tab/portal)
+# [PowerShell](#tab/PowerShell)
+```azurepowershell
+New-AzResourceGroup -Name "exampleRG" -Location "centralus"
+New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile "./main.bicep"
+```
-1. In the [Azure portal](https://portal.azure.com), search for and select **Azure Database for PostgreSQL Flexible Servers**.
-1. In the database list, select your new server to view the **Overview** page to manage the server.
+
-# [PowerShell](#tab/PowerShell)
+You'll be prompted to enter these values:
+- **serverName**: enter a name for the PostgreSQL server.
+- **administratorLogin**: enter the Azure Database for PostgreSQL server's administrator account name.
+- **administratorLoginPassword**: enter the administrator password.
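If you prefer not to respond to prompts, you can pass the parameter values inline instead. For example, with Azure CLI (the server name, login, and password shown are placeholders):

```azurecli
az deployment group create \
  --resource-group exampleRG \
  --template-file main.bicep \
  --parameters serverName=example-pg-server administratorLogin=pgadmin administratorLoginPassword='<your-password>'
```

With Azure PowerShell, you can similarly append `-serverName`, `-administratorLogin`, and `-administratorLoginPassword` parameters to the `New-AzResourceGroupDeployment` command.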
-You'll have to enter the name of the new server to view the details of your Azure Database for PostgreSQL Flexible server.
+## Review deployed resources
-```azurepowershell-interactive
-$serverName = Read-Host -Prompt "Enter the name of your Azure Database for PostgreSQL server"
-Get-AzResource -ResourceType "Microsoft.DBforPostgreSQL/flexibleServers" -Name $serverName | ft
-Write-Host "Press [ENTER] to continue..."
-```
+Use the Azure portal, Azure CLI, or Azure PowerShell to validate the deployment and review the deployed resources.
# [CLI](#tab/CLI)
+```azurecli
+az resource list --resource-group exampleRG
+```
-You'll have to enter the name and the resource group of the new server to view details about your Azure Database for PostgreSQL Flexible Server.
+# [PowerShell](#tab/PowerShell)
-```azurecli-interactive
-echo "Enter your Azure Database for PostgreSQL Flexible Server name:" &&
-read serverName &&
-echo "Enter the resource group where the Azure Database for PostgreSQL Flexible Server exists:" &&
-read resourcegroupName &&
-az resource show --resource-group $resourcegroupName --name $serverName --resource-type "Microsoft.DBforPostgreSQL/flexibleServers"
+```azurepowershell
+Get-AzResource -ResourceGroupName exampleRG
```
Keep this resource group, server, and single database if you want to go to the [
To delete the resource group:
-# [Portal](#tab/azure-portal)
--
-In the [portal](https://portal.azure.com), select the resource group you want to delete.
-
-1. Select **Delete resource group**.
-1. To confirm the deletion, type the name of the resource group
-
-# [PowerShell](#tab/azure-powershell)
-
+# [CLI](#tab/CLI)
-```azurepowershell-interactive
-Remove-AzResourceGroup -Name ExampleResourceGroup
+```azurecli
+az group delete --name exampleRG
```
-# [Azure CLI](#tab/azure-cli)
-
+# [PowerShell](#tab/PowerShell)
-```azurecli-interactive
-az group delete --name ExampleResourceGroup
+```azurepowershell
+Remove-AzResourceGroup -Name exampleRG
```
purview Manage Kafka Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/manage-kafka-dotnet.md
ms.devlang: csharp Previously updated : 06/12/2022 Last updated : 09/29/2022
We'll use Azure Storage as the checkpoint store. Use the following steps to crea
} ```
-> [!IMPORTANT]
-> Atlas currently supports the following operation types: **ENTITY_CREATE_V2**, **ENTITY_PARTIAL_UPDATE_V2**, **ENTITY_FULL_UPDATE_V2**, **ENTITY_DELETE_V2**. Pushing messages to Purview is currently enabled by default. If your scenario involves reading from Purview contact us, as it needs to be allow-listed. You'll need to provide your subscription id and the name of Purview account.
- ## Next steps Check out more examples in GitHub.
purview Tutorial Purview Audit Logs Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-purview-audit-logs-diagnostics.md
Previously updated : 02/10/2022 Last updated : 09/28/2022 # Audit logs, diagnostics, and activity history
More types and categories of activity audit events will be added.
| Category | Activity | Operation |
|---|---|---|
+| Management | Collections | Create |
+| Management | Collections | Update |
+| Management | Collections | Delete |
+| Management | Role assignments | Create |
+| Management | Role assignments | Update |
+| Management | Role assignments | Delete |
| Management | Scan rule set | Create |
| Management | Scan rule set | Update |
| Management | Scan rule set | Delete |
sentinel Aws S3 Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/aws-s3-troubleshoot.md
+
+ Title: Troubleshoot AWS S3 connector issues - Microsoft Sentinel
+description: Troubleshoot AWS S3 connector issues in Microsoft Sentinel.
+++ Last updated : 09/08/2022
+#Customer intent: As a security operator, I want to quickly identify the cause of the problem occurring with the AWS S3 connector so I can find the steps needed to resolve the problem.
++
+# Troubleshoot AWS S3 connector issues
+
+The Amazon Web Services (AWS) S3 connector allows you to ingest AWS service logs, collected in AWS S3 buckets, to Microsoft Sentinel. The types of logs we currently support are AWS CloudTrail, VPC Flow Logs, and AWS GuardDuty.
+
+This article describes how to quickly identify the cause of issues occurring with the AWS S3 connector so you can find the steps needed to resolve the issues.
+
+Learn how to [connect Microsoft Sentinel to Amazon Web Services to ingest AWS service log data](connect-aws.md?tabs=s3).
+
+## Microsoft Sentinel doesn't receive data from the Amazon Web Services S3 connector or one of its data types
+
+The logs for the AWS S3 connector (or one of its data types) aren't visible in the Microsoft Sentinel workspace for more than 30 minutes after the connector was connected.
+
+Before you search for a cause and solution, review these considerations:
+
+- It can take around 20-30 minutes from the moment the connector is connected until data is ingested into the workspace.
+- The connector's connection status indicates that a collection rule exists; it doesn't indicate that data was ingested. If the status of the Amazon Web Services S3 connector is green, a collection rule exists for at least one of the data types, but data might still not have been ingested.
+
+### Determine the cause of your problem
+
+In this section, we cover these causes:
+
+1. The AWS S3 connector permissions policies aren't set properly.
+1. The data isn't ingested to the S3 bucket in AWS.
+1. The Amazon Simple Queue Service (SQS) in the AWS cloud doesn't receive notifications from the S3 bucket.
+1. The data can't be read from the SQS/S3 in the AWS cloud. For GuardDuty logs, the issue is caused by incorrect KMS permissions.
+
+### Cause 1: The AWS S3 connector permissions policies aren't set properly
+
+This issue is caused by incorrect permissions in the AWS environment.
+
+### Solution: Create permissions policies
+
+You need permissions policies to deploy the AWS S3 data connector. Review the [required permissions](https://github.com/Azure/Azure-Sentinel/blob/master/DataConnectors/AWS-S3/AwsRequiredPolicies.md) and set the relevant permissions.
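As a rough illustration only, the IAM role that Microsoft Sentinel assumes needs S3 read access and SQS message access along the lines of the following policy fragment. The bucket, queue, region, and account values are placeholders, and the linked AwsRequiredPolicies document remains the authoritative reference.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadLogObjectsFromS3",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-sentinel-logs/*"
    },
    {
      "Sid": "ConsumeNotificationsFromSqs",
      "Effect": "Allow",
      "Action": [
        "sqs:ReceiveMessage",
        "sqs:DeleteMessage",
        "sqs:GetQueueAttributes",
        "sqs:GetQueueUrl"
      ],
      "Resource": "arn:aws:sqs:us-east-1:111122223333:example-sentinel-queue"
    }
  ]
}
```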
+
+### Cause 2: The relevant data doesn't exist in the S3 bucket
+
+The relevant logs don't exist in the S3 bucket.
+
+### Solution: Search for logs and export logs if needed
+
+1. In AWS, open the S3 bucket, search for the relevant folder according to the required logs, and check if there are any log files inside the folder.
+1. If the data doesn't exist, there's an issue with the AWS configuration. In this case, you need to [configure an AWS service to export logs to an S3 bucket](connect-aws.md?tabs=s3#configure-an-aws-service-to-export-logs-to-an-s3-bucket).
+
+### Cause 3: The S3 data didn't arrive at the SQS
+
+The data wasn't successfully transferred from S3 to the SQS.
+
+### Solution: Verify that the data arrived and configure event notifications
+
+1. In AWS, open the relevant SQS.
+1. In the **Monitoring** tab, you should see traffic in the **Number Of Messages Sent** widget. If there's no traffic in the SQS, there's an AWS configuration problem.
+1. Make sure that the event notifications definition for the SQS includes the correct data filters (prefix and suffix).
+ 1. To see the event notifications, in the S3 bucket, select the **Properties** tab, and locate the **Event notifications** section.
+    1. If you can't see this section, create it.
+    1. Make sure that the SQS has the relevant policies to get the data from the S3 bucket. The SQS must contain a policy like the following example in the **Access policy** tab.
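For illustration, an access policy statement that lets the S3 bucket send notifications to the queue typically looks like the following sketch; the account ID, queue name, and bucket name are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowS3BucketToSendMessages",
      "Effect": "Allow",
      "Principal": { "Service": "s3.amazonaws.com" },
      "Action": "SQS:SendMessage",
      "Resource": "arn:aws:sqs:us-east-1:111122223333:example-sentinel-queue",
      "Condition": {
        "ArnLike": { "aws:SourceArn": "arn:aws:s3:::example-sentinel-logs" }
      }
    }
  ]
}
```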
+
+### Cause 4: The SQS didn't read the data
+
+The SQS didn't successfully read the S3 data.
+
+### Solution: Verify that the SQS reads the data
+
+1. In AWS, open the relevant SQS.
+1. In the **Monitoring** tab, you should see traffic in the **Number Of Messages Deleted** and **Number Of Messages Received** widgets.
+1. One spike of data isn't enough. Wait until there's enough data (several spikes), and then check for issues.
+1. If at least one of the widgets is empty, check the health logs by running this query:
+
+ ```kusto
+ SentinelHealth
+ | where TimeGenerated > ago(1d)
+ | where SentinelResourceKind in ('AmazonWebServicesCloudTrail', 'AmazonWebServicesS3')
+ | where OperationName == 'Data fetch failure summary'
+ | mv-expand TypeOfFailureDuringHour = ExtendedProperties["FailureSummary"]
+ | extend StatusCode = TypeOfFailureDuringHour["StatusCode"]
+ | extend StatusMessage = TypeOfFailureDuringHour["StatusMessage"]
+ | project SentinelResourceKind, SentinelResourceName, StatusCode, StatusMessage, SentinelResourceId, TypeOfFailureDuringHour, ExtendedProperties
+ ```
+1. Make sure that the health feature is enabled:
+ ```kusto
+ SentinelHealth
+ | take 20
+ ```
+1. If the health feature isn't enabled, [enable it](monitor-sentinel-health.md).
+
+## Data from the AWS S3 connector (or one of its data types) is seen in Microsoft Sentinel with a delay of more than 30 minutes
+
+This issue usually happens when Microsoft Sentinel can't read the files in the S3 folder because they're either encrypted or in the wrong format. In these cases, repeated retries eventually cause ingestion delay.
+
+### Determine the cause of your problem
+
+In this section, we cover these causes:
+- Log encryption isn't set up correctly
+- Event notifications aren't defined correctly
+- Health errors or health disabled
+
+### Cause 1: Log encryption isn't set up correctly
+
+If the logs are fully or partially encrypted by the Key Management Service (KMS), Microsoft Sentinel might not have permission for this KMS to decrypt the files.
+
+### Solution: Check log encryption
+
+Make sure that Microsoft Sentinel has permission on this KMS key to decrypt the files. Review the [required KMS permissions](https://github.com/Azure/Azure-Sentinel/blob/master/DataConnectors/AWS-S3/AwsRequiredPolicies.md#sqs-policy) for the GuardDuty and CloudTrail logs.
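Conceptually, the KMS key policy needs a statement that allows the IAM role assumed by Microsoft Sentinel to decrypt the log objects, roughly like the sketch below. The account ID and role name are placeholders; the linked permissions document is authoritative.

```json
{
  "Sid": "AllowSentinelRoleToDecryptLogs",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::111122223333:role/ExampleMicrosoftSentinelRole"
  },
  "Action": "kms:Decrypt",
  "Resource": "*"
}
```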
+
+### Cause 2: Event notifications aren't configured correctly
+
+When you configure an Amazon S3 event notification, you must specify the supported event types for which Amazon S3 should send notifications. If an event type that you didn't specify occurs in your Amazon S3 bucket, Amazon S3 doesn't send the notification.
+
+### Solution: Verify that event notifications are defined properly
+
+To verify that the event notifications from S3 to the SQS are defined properly, check that:
+
+- The notification is defined for the specific folder that contains the logs, and not for the root of the bucket.
+- The notification is defined with the *.gz* suffix. For example:
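An illustrative notification configuration (in the form accepted by `aws s3api put-bucket-notification-configuration`) that targets the SQS queue and filters on the log folder prefix and the *.gz* suffix might look like this; the queue ARN and prefix values are placeholders:

```json
{
  "QueueConfigurations": [
    {
      "QueueArn": "arn:aws:sqs:us-east-1:111122223333:example-sentinel-queue",
      "Events": ["s3:ObjectCreated:*"],
      "Filter": {
        "Key": {
          "FilterRules": [
            { "Name": "prefix", "Value": "AWSLogs/111122223333/CloudTrail/" },
            { "Name": "suffix", "Value": ".gz" }
          ]
        }
      }
    }
  ]
}
```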
+
+### Cause 3: Health errors or health disabled
+
+There might be errors in the health logs, or the health feature might not be enabled.
+
+### Solution: Verify that there are no errors in the health logs and enable health
+
+1. Verify that there are no errors in the health logs by running this query:
+
+ ```kusto
+    let startTime = 1d;  // how far back to look; adjust as needed
+    let endTime = 0d;    // 0d means now
+    SentinelHealth
+    | where TimeGenerated between (ago(startTime) .. ago(endTime))
+ | where SentinelResourceKind == "AmazonWebServicesS3"
+ | where Status != "Success"
+ | distinct TimeGenerated, OperationName, SentinelResourceName, Status, Description
+ ```
+1. Make sure that the health feature is enabled:
+
+ ```kusto
+ SentinelHealth
+ | take 20
+ ```
+
+1. If the health feature isn't enabled, [enable it](monitor-sentinel-health.md).
+
+## Next steps
+
+In this article, you learned how to quickly identify causes and resolve common issues with the AWS S3 connector.
+
+We welcome feedback: suggestions, feature requests, bug reports, and other improvements or additions. Go to the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel) to create an issue, or fork the repo and submit a contribution.
sentinel Connect Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-aws.md
Permissions policies that must be applied to the [Microsoft Sentinel role you cr
- Similarly, a single SQS queue can serve only one path in an S3 bucket, so if for any reason you are storing logs in multiple paths, each path requires its own dedicated SQS queue.
-### Troubleshooting steps
+### Troubleshooting
-1. **Verify that log data exists in your S3 bucket.**
-
- View the S3 bucket dashboard and verify that data is flowing to it. If not, check that you have set up the AWS service correctly.
-
-1. **Verify that messages are arriving in the SQS queue.**
-
- View the AWS SQS queue dashboard - under the Monitoring tab, you should see traffic in the "Number Of Messages Sent" graph widget. If you see no traffic, check that S3 PUT object notification is configured correctly.
-
-1. **Verify that messages are being read from the SQS queue.**
-
- Check the "Number of Messages Received" and "Number of Messages Deleted" widgets in the queue dashboard. If there are no notifications under messages deleted," then check health messages. It's possible that some permissions are missing. Check your IAM configurations.
-
-For more information, see [Monitor the health of your data connectors](monitor-data-connector-health.md).
-
-Learn how to [troubleshoot Amazon Web Services S3 connector issues](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/troubleshoot-amazon-web-services-s3-connector-issues/ba-p/3608072).
+Learn how to [troubleshoot Amazon Web Services S3 connector issues](aws-s3-troubleshoot.md).
# [CloudTrail connector (legacy)](#tab/ct)
sentinel Sap Solution Security Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-solution-security-content.md
Last updated 04/27/2022
This article details the security content available for the Microsoft Sentinel Solution for SAP. > [!IMPORTANT]
-> Some components of the Microsoft Sentinel Solution for SAP are currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> While the Microsoft Sentinel Solution for SAP is in GA, some specific components remain in PREVIEW. This article indicates the components that are in preview in the relevant sections below. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
> Available security content includes built-in workbooks and analytics rules. You can also add SAP-related [watchlists](../watchlists.md) to use in your search, detection rules, threat hunting, and response playbooks.
site-recovery Azure To Azure Common Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-common-questions.md
Yes. Site Recovery supports disaster recovery of VMs that have Azure Disk Encryp
- Site Recovery supports ADE for Azure VMs running Windows. - Site Recovery supports: - ADE version 0.1, which has a schema that requires Azure Active Directory (Azure AD).
- - ADE version 1.1, which doesn't require Azure AD. For version 1.1, Windows Azure VMs must have managed disks.
+ - ADE version 1.1, which doesn't require Azure AD. For version 1.1, Microsoft Azure VMs must have managed disks.
- [Learn more](../virtual-machines/extensions/azure-disk-enc-windows.md#extension-schema) about the extension schemas. [Learn more](azure-to-azure-how-to-enable-replication-ade-vms.md) about enabling replication for encrypted VMs.
See the [support matrix](azure-to-azure-support-matrix.md#replicated-machines
### Can I select an automation account from a different resource group?
-When you allow Site Recovery to manage updates for the Mobility service extension running on replicated Azure VMs, it deploys a global runbook (used by Azure services), via an Azure automation account. You can use the automation account that Site Recovery creates, or select to use an existing automation account.
+When you allow Site Recovery to manage updates for the Mobility service extension running on replicated Azure VMs, it deploys a global runbook (used by Azure services), via an Azure Automation account. You can use the automation account that Site Recovery creates, or select to use an existing automation account.
Currently, in the portal, you can only select an automation account in the same resource group as the vault. You can select an automation account from a different resource group using PowerShell. [Learn more](azure-to-azure-autoupdate.md#enable-automatic-updates) about enabling automatic updates.
No, this is unsupported. If you accidentally move storage accounts to a differen
A replication policy defines the retention history of recovery points, and the frequency of app-consistent snapshots. Site Recovery creates a default replication policy as follows: -- Retain recovery points for 1 day.
+- Retain recovery points for one day.
- App-consistent snapshots are disabled and are not created by default.
-[Learn more](azure-to-azure-how-to-enable-replication.md#customize-target-resources) about replication settings.
+[Learn more](azure-to-azure-how-to-enable-replication.md) about replication settings.
### What's a crash-consistent recovery point?
So, for the recent two hours, you can choose from 24 crash-consistent points, an
### How far back can I recover?
-The oldest recovery point that you can use is 15 days with Managed disk and 3 days with Unmanaged disk.
+The oldest recovery point that you can use is 15 days with Managed disk and three days with Unmanaged disk.
### How does the pruning of recovery points happen?
The first recovery point that's generated has the complete copy. Successive reco
### Do increases in recovery point retention increase storage costs?
-Yes. For example, if you increase retention from 1 day to 3 days, Site Recovery saves recovery points for an additional two days.The added time incurs storage changes. Earlier, it was saving recovery points per hour for 1 day. Now, it is saving recovery points per two hours for 3 days. Refer [pruning of recovery points](#how-does-the-pruning-of-recovery-points-happen). So additional 12 recovery points are saved. As an example only, if a single recovery point had delta changes of 10 GB, with a per-GB cost of $0.16 per month, then additional charges would be $1.60 × 12 per month.
+Yes. For example, if you increase retention from one day to three days, Site Recovery saves recovery points for an additional two days. The added time incurs storage charges. Previously, a recovery point was saved every hour for one day; now, a recovery point is saved every two hours for three days (see [pruning of recovery points](#how-does-the-pruning-of-recovery-points-happen)), so 12 additional recovery points are saved. As an example only, if a single recovery point had delta changes of 10 GB, with a per-GB cost of $0.16 per month, the additional charges would be $1.60 × 12, or $19.20, per month.
## Multi-VM consistency
Yes. Site Recovery processes all pending data before failing over, so this optio
The *Latest processed* option does the following:
-1. It fails over all VMs to the latest recovery point processed by Site Recovery . This option provides a low RTO, because no time is spent processing unprocessed data.
+1. It fails over all VMs to the latest recovery point processed by Site Recovery. This option provides a low RTO, because no time is spent processing unprocessed data.
### What if there's an unexpected outage in the primary region?
No. When you fail over VMs from one region to another, the VMs start up in the t
### When I reprotect, is all data replicated from the secondary region to primary?
-It depends.If the source region VM exists, then only changes between the source disk and the target disk are synchronized. Site Recovery compares the disks to what's different, and then it transfers the data. This process usually takes a few hours. [Learn more](azure-to-azure-how-to-reprotect.md#what-happens-during-reprotection).
+It depends. If the source region VM exists, then only changes between the source disk and the target disk are synchronized. Site Recovery compares the disks to what's different, and then it transfers the data. This process usually takes a few hours. [Learn more](azure-to-azure-how-to-reprotect.md#what-happens-during-reprotection).
### How long does it take fail back?
site-recovery Azure To Azure How To Enable Replication Ade Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-replication-ade-vms.md
Previously updated : 08/08/2019 Last updated : 09/05/2022
This article describes how to replicate Azure VMs with Azure Disk Encryption (ADE) enabled, from one Azure region to another. >[!NOTE]
-> Site Recovery currently supports ADE, with and without Azure Active Directory (AAD) for VMs running Windows operating systems. For Linux operating systems, we only support ADE without AAD. Moreover, for machines running ADE 1.1 (without AAD), the VMs must be using managed disks. VMs with unmanaged disks aren't supported. If you switch from ADE 0.1 (with AAD) to 1.1 , you need to disable replication and enable replication for a VM after enabling 1.1.
+> Site Recovery currently supports ADE, with and without Azure Active Directory (Azure AD) for VMs running Windows operating systems. For Linux operating systems, we only support ADE without Azure AD. Moreover, for machines running ADE 1.1 (without Azure AD), the VMs must be using managed disks. VMs with unmanaged disks aren't supported. If you switch from ADE 0.1 (with Azure AD) to 1.1, you need to disable replication and enable replication for a VM after enabling 1.1.
## <a id="required-user-permissions"></a> Required user permissions
To troubleshoot permissions, refer to [key vault permission issues](#trusted-roo
## Enable replication
-For this example, the primary Azure region is East Asia, and the secondary region is South East Asia.
+Use the following procedure to replicate Azure Disk Encryption-enabled VMs to another Azure region. As an example, the primary Azure region is East Asia, and the secondary region is Southeast Asia.
-1. In the vault, select **+Replicate**.
-2. Note the following fields.
- - **Source**: The point of origin of the VMs, which in this case is **Azure**.
- - **Source location**: The Azure region where you want to protect your virtual machines. For this example, the source location is "East Asia."
- - **Deployment model**: The Azure deployment model of the source machines.
- - **Source subscription**: The subscription to which your source virtual machines belong. It can be any subscription that's in the same Azure Active Directory tenant as your recovery services vault.
- - **Resource Group**: The resource group to which your source virtual machines belong. All the VMs in the selected resource group are listed for protection in the next step.
-
-3. In **Virtual Machines** > **Select virtual machines**, select each VM that you want to replicate. You can only select machines for which replication can be enabled. Then, select **OK**.
-
-4. In **Settings**, you can configure the following target-site settings.
-
- - **Target location**: The location where your source virtual machine data will be replicated. Site Recovery provides a list of suitable target regions based on the selected machine's location. We recommend that you use the same location as the Recovery Services vault's location.
- - **Target subscription**: The target subscription that's used for disaster recovery. By default, the target subscription is the same as the source subscription.
- - **Target resource group**: The resource group to which all your replicated virtual machines belong. By default, Site Recovery creates a new resource group in the target region. The name gets the "asr" suffix. If a resource group already exists that was created by Azure Site Recovery, it's reused. You can also choose to customize it, as shown in the following section. The location of the target resource group can be any Azure region except the region where the source virtual machines are hosted.
- - **Target virtual network**: By default, Site Recovery creates a new virtual network in the target region. The name gets the "asr" suffix. It's mapped to your source network and used for any future protection. [Learn more](./azure-to-azure-network-mapping.md) about network mapping.
- - **Target storage accounts (if your source VM doesn't use managed disks)**: By default, Site Recovery creates a new target storage account by mimicking your source VM storage configuration. If a storage account already exists, it's reused.
- - **Replica managed disks (if your source VM uses managed disks)**: Site Recovery creates new replica managed disks in the target region to mirror the source VM's managed disks of the same storage type (standard or premium) as the source VM's managed disks.
- - **Cache storage accounts**: Site Recovery needs an extra storage account called *cache storage* in the source region. All the changes on the source VMs are tracked and sent to the cache storage account. They're then replicated to the target location.
- - **Availability set**: By default, Site Recovery creates a new availability set in the target region. The name has the "asr" suffix. If an availability set that was created by Site Recovery already exists, it's reused.
- - **Disk encryption key vaults**: By default, Site Recovery creates a new key vault in the target region. It has an "asr" suffix that's based on the source VM disk encryption keys. If a key vault that was created by Azure Site Recovery already exists, it's reused.
- - **Key encryption key vaults**: By default, Site Recovery creates a new key vault in the target region. The name has an "asr" suffix that's based on the source VM key encryption keys. If a key vault created by Azure Site Recovery already exists, it's reused.
- - **Replication policy**: Defines the settings for recovery point retention history and app-consistent snapshot frequency. By default, Site Recovery creates a new replication policy with default settings of *24 hours* for recovery point retention and *60 minutes* for app-consistent snapshot frequency.
-
-## Customize target resources
-
-Follow these steps to modify the Site Recovery default target settings.
+1. In the vault > **Site Recovery** page, under **Azure virtual machines**, select **Enable replication**.
+1. In the **Enable replication** page, under **Source**, do the following:
+ - **Region**: Select the Azure region where you want to protect your virtual machines.
+ For example, the source location is *East Asia*.
+ - **Subscription**: Select the subscription to which your source virtual machines belong. This can be any subscription that's in the same Azure Active Directory tenant as your recovery services vault.
+ - **Resource group**: Select the resource group to which your source virtual machines belong. All the VMs in the selected resource group are listed for protection in the next step.
+ - **Virtual machine deployment model**: Select the Azure deployment model of the source machines.
+ - **Disaster recovery between availability zones**: Select **Yes** if you want to perform zonal disaster recovery on virtual machines.
+
+ :::image type="fields needed to configure replication" source="./media/azure-to-azure-how-to-enable-replication-ade-vms/source.png" alt-text="Screenshot that highlights the fields needed to configure replication.":::
+
+1. Select **Next**.
+1. In **Virtual machines**, select each VM that you want to replicate. You can only select machines for which replication can be enabled. You can select up to ten VMs. Then, select **Next**.
+
+ :::image type="Virtual machine selection" source="./media/azure-to-azure-how-to-enable-replication-ade-vms/virtual-machine-selection.png" alt-text="Screenshot that highlights where you select virtual machines.":::
+
+1. In **Replication settings**, you can configure the following settings:
+ 1. Under **Location and Resource group**,
+      - **Target location**: Select the location where your source virtual machine data must be replicated. Depending on the location of the selected machines, Site Recovery provides a list of suitable target regions. We recommend that you keep the target location the same as the Recovery Services vault location.
+      - **Target subscription**: Select the target subscription used for disaster recovery. By default, the target subscription will be the same as the source subscription.
+ - **Target resource group**: Select the resource group to which all your replicated virtual machines belong.
+ - By default, Site Recovery creates a new resource group in the target region with an *asr* suffix in the name.
+ - If the resource group created by Site Recovery already exists, it's reused.
+ - You can customize the resource group settings.
+ - The location of the target resource group can be any Azure region, except the region in which the source VMs are hosted.
+
+ >[!Note]
+ > You can also create a new target resource group by selecting **Create new**.
+
+ :::image type="Location and resource group" source="./media/azure-to-azure-how-to-enable-replication-ade-vms/resource-group.png" alt-text="Screenshot of Location and resource group.":::
+
+ 1. Under **Network**,
+ - **Failover virtual network**: Select the failover virtual network.
+ >[!Note]
+ > You can also create a new failover virtual network by selecting **Create new**.
+ - **Failover subnet**: Select the failover subnet.
+
+ :::image type="Network" source="./media/azure-to-azure-how-to-enable-replication-ade-vms/network.png" alt-text="Screenshot of Network.":::
+
+ 1. **Storage**: Select **View/edit storage configuration**. **Customize target settings** page opens.
+
+ :::image type="Storage" source="./media/azure-to-azure-how-to-enable-replication-ade-vms/storage.png" alt-text="Screenshot of Storage.":::
+
+ - **Replica-managed disk**: Site Recovery creates new replica-managed disks in the target region to mirror the source VM's managed disks with the same storage type (Standard or premium) as the source VM's managed disk.
+    - **Cache storage**: Site Recovery needs an extra storage account, called *cache storage*, in the source region. All the changes happening on the source VMs are tracked and sent to the cache storage account before being replicated to the target location. This storage account should be Standard.
+
+ 1. **Availability options**: Select appropriate availability option for your VM in the target region. If an availability set that was created by Site Recovery already exists, it's reused. Select **View/edit availability options** to view or edit the availability options.
+ >[!NOTE]
+ >- While configuring the target availability sets, configure different availability sets for differently sized VMs.
+ >- You cannot change the availability type - single instance, availability set or availability zone, after you enable replication. You must disable and enable replication to change the availability type.
+
+ :::image type="Availability option" source="./media/azure-to-azure-how-to-enable-replication-ade-vms/availability-option.png" alt-text="Screenshot of availability option.":::
+
+    1. **Capacity reservation**: Capacity reservation lets you purchase capacity in the recovery region, and then fail over to that capacity. You can either create a new Capacity Reservation Group or use an existing one. For more information, see [how capacity reservation works](https://learn.microsoft.com/azure/virtual-machines/capacity-reservation-overview).
+    Select **View or Edit Capacity Reservation group assignment** to modify the capacity reservation settings. When failover is triggered, the new VM is created in the assigned Capacity Reservation Group.
+
+ :::image type="Capacity reservation" source="./media/azure-to-azure-how-to-enable-replication-ade-vms/capacity-reservation.png" alt-text="Screenshot of capacity reservation.":::
+
+ 1. **Encryption settings**: Select **View/edit configuration** to configure the Disk Encryption and Key Encryption key Vaults.
+ - **Disk encryption key vaults**: By default, Site Recovery creates a new key vault in the target region. It has an *asr* suffix that's based on the source VM disk encryption keys. If a key vault that was created by Azure Site Recovery already exists, it's reused.
+ - **Key encryption key vaults**: By default, Site Recovery creates a new key vault in the target region. The name has an *asr* suffix that's based on the source VM key encryption keys. If a key vault created by Azure Site Recovery already exists, it's reused.
+
+ :::image type="Encryption settings" source="./media/azure-to-azure-how-to-enable-replication-ade-vms/encryption-settings.png" alt-text="Screenshot of encryption settings.":::
-1. Select **Customize** next to "Target subscription" to modify the default target subscription. Select the subscription from the list of subscriptions that are available in the Azure AD tenant.
+1. Select **Next**.
+1. In **Manage**, do the following:
+ 1. Under **Replication policy**,
+ - **Replication policy**: Select the replication policy. Defines the settings for recovery point retention history and app-consistent snapshot frequency. By default, Site Recovery creates a new replication policy with default settings of 24 hours for recovery point retention.
+ - **Replication group**: Create replication group to replicate VMs together to generate Multi-VM consistent recovery points. Note that enabling multi-VM consistency can impact workload performance and should only be used if machines are running the same workload and you need consistency across multiple machines.
+ 1. Under **Extension settings**,
+ - Select **Update settings** and **Automation account**.
+
+ :::image type="manage" source="./media/azure-to-azure-how-to-enable-replication-ade-vms/manage.png" alt-text="Screenshot that displays the manage tab.":::
-2. Select **Customize** next to "Resource group, Network, Storage, and Availability sets" to modify the following default settings:
- - For **Target resource group**, select the resource group from the list of resource groups in the target location of the subscription.
- - For **Target virtual network**, select the network from a list of virtual networks in the target location.
- - For **Availability set**, you can add availability set settings to the VM, if they're part of an availability set in the source region.
- - For **Target Storage accounts**, select the account to use.
-2. Select **Customize** next to "Encryption settings" to modify the following default settings:
- - For **Target disk encryption key vault**, select the target disk encryption key vault from the list of key vaults in the target location of the subscription.
- - For **Target key encryption key vault**, select the target key encryption key vault from the list of key vaults in the target location of the subscription.
+1. Select **Next**.
+1. In **Review**, review the VM settings and select **Enable replication**.
-3. Select **Create target resource** > **Enable Replication**.
-4. After the VMs are enabled for replication, you can check the VMs' health status under **Replicated items**.
+ :::image type="review" source="./media/azure-to-azure-how-to-enable-replication-ade-vms/review.png" alt-text="Screenshot that displays the review tab.":::
>[!NOTE] >During initial replication, the status might take some time to refresh, without apparent progress. Click **Refresh** to get the latest status.
site-recovery Azure To Azure How To Enable Replication Cmk Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-replication-cmk-disks.md
You must create the Disk Encryption set(s) in the target region for the target s
## Enable replication
-For this example, the primary Azure region is East Asia, and the secondary region is South East Asia.
-
-1. In the vault, select **+Replicate**.
-2. Note the following fields.
- - **Source**: The point of origin of the VMs, which in this case is **Azure**.
- - **Source location**: The Azure region where you want to protect your virtual machines. For this example, the source location is "East Asia."
- - **Deployment model**: The Azure deployment model of the source machines.
- - **Source subscription**: The subscription to which your source virtual machines belong. It can be any subscription that's in the same Azure Active Directory tenant as your recovery services vault.
- - **Resource Group**: The resource group to which your source virtual machines belong. All the VMs in the selected resource group are listed for protection in the next step.
-
-3. In **Virtual Machines** > **Select virtual machines**, select each VM that you want to replicate. You can only select machines for which replication can be enabled. Then, select **OK**.
-
-4. In **Settings**, you can configure the following target-site settings.
-
- - **Target location**: The location where your source virtual machine data will be replicated to. We recommend that you use the same location as the Recovery Services vault's location.
- - **Target subscription**: The target subscription that's used for disaster recovery. By default, the target subscription is the same as the source subscription.
- - **Target resource group**: The resource group to which all your replicated virtual machines belong. By default, Site Recovery creates a new resource group in the target region. The name gets the `asr` suffix. If a resource group already exists that was created by Azure Site Recovery, it's reused. You can also choose to customize it, as shown in the following section. The location of the target resource group can be any Azure region except the region where the source virtual machines are hosted.
- - **Target virtual network**: By default, Site Recovery creates a new virtual network in the target region. The name gets the `asr` suffix. It's mapped to your source network and used for any future protection. [Learn more](./azure-to-azure-network-mapping.md) about network mapping.
- - **Target storage accounts (if your source VM doesn't use managed disks)**: By default, Site Recovery creates a new target storage account by mimicking your source VM storage configuration. If a storage account already exists, it's reused.
- - **Replica managed disks (if your source VM uses managed disks)**: Site Recovery creates new replica managed disks in the target region to mirror the source VM's managed disks of the same storage type (standard or premium) as the source VM's managed disks.
- - **Cache storage accounts**: Site Recovery needs an extra storage account called *cache storage* in the source region. All the changes on the source VMs are tracked and sent to the cache storage account. They're then replicated to the target location.
- - **Availability set**: By default, Site Recovery creates a new availability set in the target region. The name has the `asr` suffix. If an availability set that was created by Site Recovery already exists, it's reused.
- - **Disk encryption sets (DES)**: Site Recovery needs the disk encryption set(s) to be used for replica and target managed disks. You must pre-create DES in the target subscription and the target region before enabling the replication. By default, a DES is not selected. You must click on 'Customize' to choose a DES per source disk.
- - **Replication policy**: Defines the settings for recovery point retention history and app-consistent snapshot frequency. By default, Site Recovery creates a new replication policy with default settings of *24 hours* for recovery point retention and *60 minutes* for app-consistent snapshot frequency.
-
- ![Enable Replication for machine with CMK enabled disks](./media/azure-to-azure-how-to-enable-replication-cmk-disks/cmk-enable-dr.png)
-
-## Customize target resources
-
-Follow these steps to modify the Site Recovery default target settings.
-
-1. Select **Customize** next to "Target subscription" to modify the default target subscription. Select the subscription from the list of subscriptions that are available in the Azure AD tenant.
-
-2. Select **Customize** next to "Resource group, Network, Storage, and Availability sets" to modify the following default settings:
- - For **Target resource group**, select the resource group from the list of resource groups in the target location of the subscription.
- - For **Target virtual network**, select the network from a list of virtual networks in the target location.
- - For **Availability set**, you can add availability set settings to the VM, if they're part of an availability set in the source region.
- - For **Target Storage accounts**, select the account to use.
-
-3. Select **Customize** next to "Storage encryption settings" to select the target DES for every customer-managed key (CMK) enabled source managed disk. At the time of selection, you will also be able to see which target key vault the DES is associated with.
-
-4. Select **Create target resource** > **Enable Replication**.
-5. After the VMs are enabled for replication, you can check the VMs' health status under **Replicated items**.
-
-![Screenshot that shows where to check the VMs' health status.](./media/azure-to-azure-how-to-enable-replication-cmk-disks/cmk-customize-target-disk-properties.png)
+Use the following procedure to replicate machines with customer-managed key (CMK) enabled disks.
+As an example, the primary Azure region is East Asia, and the secondary region is Southeast Asia.
+
+1. In the vault > **Site Recovery** page, under **Azure virtual machines**, select **Enable replication**.
+1. In the **Enable replication** page, under **Source**, do the following:
+ - **Region**: Select the Azure region from where you want to protect your VMs.
+ For example, the source location is *East Asia*.
+ - **Subscription**: Select the subscription to which your source VMs belong. This can be any subscription within the same Azure Active Directory tenant where your recovery services vault exists.
+ - **Resource group**: Select the resource group to which your source virtual machines belong. All the VMs in the selected resource group are listed for protection in the next step.
+ - **Virtual machine deployment model**: Select Azure deployment model of the source machines.
+ - **Disaster recovery between availability zones**: Select **Yes** if you want to perform zonal disaster recovery on virtual machines.
+
+ :::image type="fields needed to configure replication" source="./media/azure-to-azure-how-to-enable-replication-cmk-disks/source.png" alt-text="Screenshot that highlights the fields needed to configure replication.":::
+1. Select **Next**.
+1. In **Virtual machines**, select each VM that you want to replicate. You can only select machines for which replication can be enabled. You can select up to ten VMs. Then select **Next**.
+
+ :::image type="Virtual machine selection" source="./media/azure-to-azure-how-to-enable-replication-cmk-disks/virtual-machine-selection.png" alt-text="Screenshot that highlights where you select virtual machines.":::
+
+1. In **Replication settings**, you can configure the following settings:
+ 1. Under **Location and Resource group**,
+      - **Target location**: Select the location where your source virtual machine data must be replicated. Depending on the location of the selected machines, Site Recovery provides a list of suitable target regions. We recommend that you keep the target location the same as the Recovery Services vault location.
+      - **Target subscription**: Select the target subscription used for disaster recovery. By default, the target subscription will be the same as the source subscription.
+ - **Target resource group**: Select the resource group to which all your replicated virtual machines belong.
+ - By default, Site Recovery creates a new resource group in the target region with an *asr* suffix in the name.
+ - If the resource group created by Site Recovery already exists, it's reused.
+ - You can customize the resource group settings.
+ - The location of the target resource group can be any Azure region, except the region in which the source VMs are hosted.
+
+ >[!Note]
+ > You can also create a new target resource group by selecting **Create new**.
+
+ :::image type="Location and resource group" source="./media/azure-to-azure-how-to-enable-replication-cmk-disks/resource-group.png" alt-text="Screenshot of Location and resource group.":::
+
+ 1. Under **Network**,
+ - **Failover virtual network**: Select the failover virtual network.
+ >[!Note]
+ > You can also create a new failover virtual network by selecting **Create new**.
+ - **Failover subnet**: Select the failover subnet.
+
+ :::image type="Network" source="./media/azure-to-azure-how-to-enable-replication-cmk-disks/network.png" alt-text="Screenshot of Network.":::
+
+ 1. **Storage**: Select **View/edit storage configuration**. **Customize target settings** page opens.
+
+ :::image type="Storage" source="./media/azure-to-azure-how-to-enable-replication-cmk-disks/storage.png" alt-text="Screenshot of Storage.":::
+
+ - **Replica-managed disk**: Site Recovery creates new replica-managed disks in the target region to mirror the source VM's managed disks with the same storage type (Standard or premium) as the source VM's managed disk.
+    - **Cache storage**: Site Recovery needs an extra storage account, called *cache storage*, in the source region. All the changes happening on the source VMs are tracked and sent to the cache storage account before being replicated to the target location. This storage account should be Standard.
+
+ 1. **Availability options**: Select appropriate availability option for your VM in the target region. If an availability set that was created by Site Recovery already exists, it's reused. Select **View/edit availability options** to view or edit the availability options.
+ >[!NOTE]
+ >- While configuring the target availability sets, configure different availability sets for differently sized VMs.
+ >- You cannot change the availability type - single instance, availability set or availability zone, after you enable replication. You must disable and enable replication to change the availability type.
+
+ :::image type="Availability option" source="./media/azure-to-azure-how-to-enable-replication-cmk-disks/availability-option.png" alt-text="Screenshot of availability option.":::
+
+    1. **Capacity reservation**: Capacity reservation lets you purchase capacity in the recovery region, and then fail over to that capacity. You can either create a new Capacity Reservation Group or use an existing one. For more information, see [how capacity reservation works](https://learn.microsoft.com/azure/virtual-machines/capacity-reservation-overview).
+    Select **View or Edit Capacity Reservation group assignment** to modify the capacity reservation settings. When failover is triggered, the new VM is created in the assigned Capacity Reservation Group.
+
+ :::image type="Capacity reservation" source="./media/azure-to-azure-how-to-enable-replication-cmk-disks/capacity-reservation.png" alt-text="Screenshot of capacity reservation.":::
+
+    1. **Storage encryption settings**: Site Recovery needs the disk encryption set(s) (DES) to be used for the replica and target managed disks. You must pre-create the disk encryption sets in the target subscription and the target region before enabling replication. By default, a disk encryption set isn't selected. You must select **View/edit configuration** to choose a disk encryption set per source disk.
+
+ >[!Note]
+ >Ensure that the Target DES is present in the Target Resource Group, and that the Target DES has Get, Wrap Key, Unwrap Key access to a Key Vault in the same region.
+
+ :::image type="Storage encryption settings" source="./media/azure-to-azure-how-to-enable-replication-cmk-disks/storage-encryption-settings.png" alt-text="Screenshot of storage encryption settings.":::
+
+1. Select **Next**.
+1. In **Manage**, do the following:
+ 1. Under **Replication policy**,
+ - **Replication policy**: Select the replication policy. Defines the settings for recovery point retention history and app-consistent snapshot frequency. By default, Site Recovery creates a new replication policy with default settings of 24 hours for recovery point retention.
+ - **Replication group**: Create replication group to replicate VMs together to generate Multi-VM consistent recovery points. Note that enabling multi-VM consistency can impact workload performance and should only be used if machines are running the same workload and you need consistency across multiple machines.
+ 1. Under **Extension settings**,
+ - Select **Update settings** and **Automation account**.
+
+ :::image type="manage" source="./media/azure-to-azure-how-to-enable-replication-cmk-disks/manage.png" alt-text="Screenshot that displays the manage tab.":::
+
+1. Select **Next**.
+1. In **Review**, review the VM settings and select **Enable replication**.
+
+ :::image type="review" source="./media/azure-to-azure-how-to-enable-replication-cmk-disks/review.png" alt-text="Screenshot that displays the review tab.":::
>[!NOTE] >During initial replication, the status might take some time to refresh, without apparent progress. Click **Refresh** to get the latest status.
Follow these steps to modify the Site Recovery default target settings.
* I have enabled both platform and customer-managed keys. How can I protect my disks?
- Enabling double encryption with both platform and customer managed keys is suppprted by Site Recovery. Follow the instructions in this article to protect your machine. You need to create a double encryption enabled DES in the target region in advance. At the time of enabling the replication for such a VM, you can provide this DES to Site Recovery.
+ Enabling double encryption with both platform and customer managed keys is supported by Site Recovery. Follow the instructions in this article to protect your machine. You need to create a double encryption enabled DES in the target region in advance. At the time of enabling the replication for such a VM, you can provide this DES to Site Recovery.
+
site-recovery Azure To Azure How To Enable Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-replication.md
description: Learn how to configure replication to another region for Azure VMs,
Previously updated : 04/29/2018- Last updated : 09/16/2022
Prerequisites should be in place, and you should have created a Recovery Service
## Enable replication
-Enable replication. This procedure assumes that the primary Azure region is East Asia, and the secondary region is South East Asia.
-
-1. In the vault, click **+Replicate**.
-2. Note the following fields:
- - **Source**: The point of origin of the VMs, which in this case is **Azure**.
- - **Source location**: The Azure region from where you want to protect your VMs. For this illustration, the source location is 'East Asia'
- >[!NOTE]
- >For cross-regional disaster recovery, the source location should be different from the Recovery Services Vault and it's Resource Group's location. However, it can be same as any of them for zonal disaster recovery.
- >
- - **Deployment model**: Azure deployment model of the source machines.
- - **Source subscription**: The subscription to which your source VMs belong. This can be any subscription within the same Azure Active Directory tenant where your recovery services vault exists.
- - **Resource Group**: The resource group to which your source virtual machines belong. All the VMs under the selected resource group are listed for protection in the next step.
- - **Disaster Recovery between Availability Zones**: Select yes if you want to perform zonal disaster recovery on virtual machines.
- - **Availability Zones**: Select the availability zone where the source virtual machines are pinned.
-
- ![Screenshot that highlights the fields needed to configure replication.](./media/azure-to-azure-how-to-enable-replication/enabled-rwizard-1.png)
-
-3. In **Virtual Machines > Select virtual machines**, click and select each VM that you want to replicate. You can only select machines for which replication can be enabled. Then click **OK**.
- ![Screenshot that highlights where you select virtual machines.](./media/azure-to-azure-how-to-enable-replication/virtual-machine-selection.png)
-
-4. In **Settings**, you can optionally configure target site settings:
-
- - **Target Location**: The location where your source virtual machine data will be replicated. Depending upon your selected machines location, Site Recovery will provide you the list of suitable target regions. We recommend that you keep the target location the same as the Recovery Services vault location.
- - **Target subscription**: The target subscription used for disaster recovery. By default, the target subscription will be same as the source subscription.
- - **Target resource group**: The resource group to which all your replicated virtual machines belong.
- - By default Site Recovery creates a new resource group in the target region with an "asr" suffix in the name.
- - If the resource group created by Site Recovery already exists, it's reused.
- - You can customize the resource group settings.
- - The location of the target resource group can be any Azure region, except the region in which the source VMs are hosted.
- - **Target virtual network**: By default, Site Recovery creates a new virtual network in the target region with an "asr" suffix in the name. This is mapped to your source network, and used for any future protection. [Learn more](./azure-to-azure-network-mapping.md) about network mapping.
- - **Replica-managed disks (source VM uses managed disks)**: Site Recovery creates new replica-managed disks in the target region to mirror the source VM's managed disks with the same storage type (Standard or premium) as the source VM's managed disk.
- - **Cache Storage accounts**: Site Recovery needs extra storage account called cache storage in the source region. All the changes happening on the source VMs are tracked and sent to cache storage account before replicating them to the target location. This storage account should be Standard.
- - **Target availability sets**: By default, Site Recovery creates a new availability set in the target region with the "asr" suffix in the name, for VMs that are part of an availability set in the source region. If the availability set created by Site Recovery already exists, it's reused.
- >[!NOTE]
- >While configuring the target availability sets, please configure different availability sets for differently sized VMs.
- >
- - **Target availability zones**: By default, Site Recovery assigns the same zone number as the source region in target region if the target region supports availability zones.
-
- If the target region does not support availability zones, the target VMs are configured as single instances by default. If necessary, you can configure such VMs to be part of availability sets in target region by clicking 'Customize'.
+Use the following procedure to replicate Azure VMs to another Azure region. As an example, the primary Azure region is East Asia, and the secondary region is Southeast Asia.
+1. In the vault > **Site Recovery** page, under **Azure virtual machines**, select **Enable replication**.
+1. In the **Enable replication** page, under **Source**, do the following:
+ - **Region**: Select the Azure region from where you want to protect your VMs.
+ For example, the source location is *East Asia*.
>[!NOTE]
- >You cannot change the availability type - single instance, availability set or availability zone, after you enable replication. You need to disable and enable replication to change the availability type.
- >
-
- - **Replication Policy**: It defines the settings for retention period of recovery points and app-consistent snapshot frequency. By default, Azure Site Recovery creates a default replication policy with the following settings:
- - One day of retention for recovery points.
- - No app-consistent snapshots.
-
- ![Screenshot that displays the enable replication parameters.](./media/azure-to-azure-how-to-enable-replication/enabled-rwizard-3.PNG)
-
-5. After the VMs are enabled for replication, you can check the status of VM health under **Replicated items**. The time taken for initial replication depends on various factors such as the disk size, used storage on the disks, etc. Data transfer happens at ~23% of the disk throughput. Initial replication creates snapshot of disk and transfer that snapshot.
+   >For cross-regional disaster recovery, the source location must be different from the location of the Recovery Services vault and its resource group. For zonal disaster recovery, however, it can be the same as either of them.
+   - **Subscription**: Select the subscription to which your source VMs belong. This can be any subscription in the same Azure Active Directory tenant as your Recovery Services vault.
+   - **Resource group**: Select the resource group to which your source virtual machines belong. All the VMs in the selected resource group are listed for protection in the next step.
+   - **Virtual machine deployment model**: Select the Azure deployment model of the source machines.
+ - **Disaster recovery between availability zones**: Select **Yes** if you want to perform zonal disaster recovery on virtual machines.
+
+ :::image type="fields needed to configure replication" source="./media/azure-to-azure-how-to-enable-replication/source.png" alt-text="Screenshot that highlights the fields needed to configure replication.":::
+1. Select **Next**.
+1. In **Virtual machines**, select each VM that you want to replicate. You can only select machines for which replication can be enabled. You can select up to ten VMs. Then select **Next**.
+
+ :::image type="Virtual machine selection" source="./media/azure-to-azure-how-to-enable-replication/virtual-machine-selection.png" alt-text="Screenshot that highlights where you select virtual machines.":::
+
+1. In **Replication settings**, you can configure the following settings:
+ 1. Under **Location and Resource group**,
+    - **Target location**: Select the location where your source virtual machine data must be replicated. Depending on the location of the selected machines, Site Recovery provides a list of suitable target regions. We recommend that you keep the target location the same as the Recovery Services vault location.
+    - **Target subscription**: Select the target subscription used for disaster recovery. By default, the target subscription is the same as the source subscription.
+ - **Target resource group**: Select the resource group to which all your replicated virtual machines belong.
+ - By default, Site Recovery creates a new resource group in the target region with an *asr* suffix in the name.
+ - If the resource group created by Site Recovery already exists, it's reused.
+ - You can customize the resource group settings.
+ - The location of the target resource group can be any Azure region, except the region in which the source VMs are hosted.
+
+ >[!Note]
+ > You can also create a new target resource group by selecting **Create new**.
+
+ :::image type="Location and resource group" source="./media/azure-to-azure-how-to-enable-replication/resource-group.png" alt-text="Screenshot of Location and resource group.":::
+
+ 1. Under **Network**,
+ - **Failover virtual network**: Select the failover virtual network.
+ >[!Note]
+ > You can also create a new failover virtual network by selecting **Create new**.
+ - **Failover subnet**: Select the failover subnet.
+
+ :::image type="Network" source="./media/azure-to-azure-how-to-enable-replication/network.png" alt-text="Screenshot of Network.":::
+
+    1. **Storage**: Select **View/edit storage configuration**. The **Customize target settings** page opens.
+
+ :::image type="Storage" source="./media/azure-to-azure-how-to-enable-replication/storage.png" alt-text="Screenshot of Storage.":::
+
+      - **Replica-managed disk**: Site Recovery creates new replica-managed disks in the target region to mirror the source VM's managed disks, with the same storage type (Standard or Premium) as the source VM's managed disks.
+      - **Cache storage**: Site Recovery needs an extra storage account, called the cache storage account, in the source region. All changes on the source VMs are tracked and sent to the cache storage account before they're replicated to the target location. This storage account must be a Standard storage account.
+        >[!Note]
+        >Azure Site Recovery supports High Churn (Public Preview), where you can choose the **High Churn** option for the VM. With this option, you can use a *Premium Block Blob* type of storage account. By default, **Normal Churn** is selected. For more information, see [Azure VM Disaster Recovery - High Churn Support](https://learn.microsoft.com/azure/site-recovery/concepts-azure-to-azure-high-churn-support).
+
+ :::image type="Cache storage" source="./media/azure-to-azure-how-to-enable-replication/cache-storage.png" alt-text="Screenshot of customize target settings.":::
+
+    1. **Availability options**: Select the appropriate availability option for your VM in the target region. If an availability set that was created by Site Recovery already exists, it's reused. Select **View/edit availability options** to view or edit the availability options.
+ >[!NOTE]
+ >- While configuring the target availability sets, configure different availability sets for differently sized VMs.
+ >- You cannot change the availability type - single instance, availability set or availability zone, after you enable replication. You must disable and enable replication to change the availability type.
+
+ :::image type="Availability option" source="./media/azure-to-azure-how-to-enable-replication/availability-option.png" alt-text="Screenshot of availability option.":::
+
+    1. **Capacity reservation**: Capacity reservation lets you purchase capacity in the recovery region, and then fail over to that capacity. You can either create a new capacity reservation group or use an existing one. For more information, see [how capacity reservation works](https://learn.microsoft.com/azure/virtual-machines/capacity-reservation-overview).
+    Select **View or Edit Capacity Reservation group assignment** to modify the capacity reservation settings. When failover is triggered, the new VM is created in the assigned capacity reservation group.
+
+ :::image type="Capacity reservation" source="./media/azure-to-azure-how-to-enable-replication/capacity-reservation.png" alt-text="Screenshot of capacity reservation.":::
+
+1. Select **Next**.
+1. In **Manage**, do the following:
+ 1. Under **Replication policy**,
+    - **Replication policy**: Select the replication policy. It defines the settings for recovery point retention history and app-consistent snapshot frequency. By default, Site Recovery creates a new replication policy with default settings of 24 hours for recovery point retention.
+    - **Replication group**: Create a replication group to replicate VMs together and generate multi-VM consistent recovery points. Enabling multi-VM consistency can impact workload performance, so use it only if the machines run the same workload and you need consistency across multiple machines.
+ 1. Under **Extension settings**,
+ - Select **Update settings** and **Automation account**.
+
+ :::image type="manage" source="./media/azure-to-azure-how-to-enable-replication/manage.png" alt-text="Screenshot that displays the manage tab.":::
+
+1. Select **Next**
+1. In **Review**, review the VM settings and select **Enable replication**.
+
+ :::image type="review" source="./media/azure-to-azure-how-to-enable-replication/review.png" alt-text="Screenshot that displays the review tab.":::
+
+1. After the VMs are enabled for replication, you can check the status of VM health under **Replicated items**. The time taken for initial replication depends on various factors such as the disk size, used storage on the disks, etc. Data transfer happens at ~23% of the disk throughput. Initial replication creates a snapshot of the disk and transfers that snapshot.
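
If you prefer to verify replication from a script, the following is a minimal PowerShell sketch using the Az.RecoveryServices module. The vault and resource group names are placeholders, and it assumes the module is installed and you're already signed in with `Connect-AzAccount`.

```powershell
# A minimal sketch, assuming the Az.RecoveryServices module and placeholder names.
$vault = Get-AzRecoveryServicesVault -ResourceGroupName "ContosoRG" -Name "ContosoVault"
Set-AzRecoveryServicesAsrVaultContext -Vault $vault

# Enumerate the fabrics and protection containers, then list each replicated item's health.
foreach ($fabric in Get-AzRecoveryServicesAsrFabric) {
    foreach ($container in Get-AzRecoveryServicesAsrProtectionContainer -Fabric $fabric) {
        Get-AzRecoveryServicesAsrReplicationProtectedItem -ProtectionContainer $container |
            Select-Object FriendlyName, ProtectionState, ReplicationHealth
    }
}
```

The same cmdlets work for any replicated item in the vault, not only Azure VMs.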
### Enable replication for added disks
If you add disks to an Azure VM for which replication is enabled, the following
To enable replication for an added disk, do the following: 1. In the vault > **Replicated Items**, click the VM to which you added the disk.
-2. Click **Disks**, and then select the data disk for which you want to enable replication (these disks have a **Not protected** status).
-3. In **Disk Details**, click **Enable replication**.
+1. Click **Disks**, and then select the data disk for which you want to enable replication (these disks have a **Not protected** status).
+1. In **Disk Details**, click **Enable replication**.
![Screenshot that displays replication enabled for a newly added disk.](./media/azure-to-azure-how-to-enable-replication/enabled-added.png) After the enable replication job runs, and the initial replication finishes, the replication health warning for the disk issue is removed. -
-## Customize target resources
-
-You can modify the default target settings used by Site Recovery.
-
-1. Click **Customize:** next to 'Target subscription' to modify the default target subscription. Select the subscription from the list of all the subscriptions available in the same Azure Active Directory (AAD) tenant.
-
-2. Click **Customize:** to modify default settings:
- - In **Target resource group**, select the resource group from the list of all the resource groups in the target location of the subscription.
- - In **Target virtual network**, select the network from a list of all the virtual network in the target location.
- - In **Cache storage**, select the storage you want from the list of available cache storage.
- - In **Target availability type**, select the availability type from a list of all the availability type in the target location.
- - In **Target proximity placement group**, select the proximity placement group from a list of all the proximity placement group in the target location.
-
- ![Screenshot that shows how to customize target subscription settings.](./media/azure-to-azure-how-to-enable-replication/customize.PNG)
-3. Click **Customize:** to modify replication settings.
-4. In **Multi-VM consistency**, select the VMs that you want to replicate together.
- - All the machines in a replication group will have shared crash consistent and app-consistent recovery points when failed over.
- - Enabling multi-VM consistency can impact workload performance (as it is CPU intensive). It should only be enabled if machines are running the same workload, and you need consistency across multiple machines.
- - For example, if an application has 2 SQL Server virtual machines and two web servers, then you should add only the SQL Server VMs to a replication group.
- - You can choose to have a maximum of 16 VMs in a replication group.
- - If you enable multi-VM consistency, machines in the replication group communicate with each other over port 20004.
- - Ensure there's no firewall appliance blocking the internal communication between the VMs over port 20004.
- - If you want Linux VMs to be part of a replication group, ensure the outbound traffic on port 20004 is manually opened according to guidance for the specific Linux version.
-![Screenshot that shows the Multi-VM consistency settings.](./media/azure-to-azure-how-to-enable-replication/multi-vm-settings.PNG)
-
-5. Click **View or Edit Capacity Reservation group assignment** to modify the capacity reservation settings. On triggering Failover, the new VM will be created in the assigned Capacity Reservation Group.
-
- Capacity Reservation lets you purchase capacity in the recovery region, and then failover to that capacity. You can either create a new Capacity Reservation Group, or use an existing one. For more information on how capacity reservation works, [read here](../virtual-machines/capacity-reservation-overview.md).
-
- ![Screenshot that shows the Capacity Reservation settings.](./media/azure-to-azure-how-to-enable-replication/capacity-reservation-edit-button.png)
-1. Click **Create target resource** > **Enable Replication**.
-1. After the VMs are enabled for replication, you can check the status of VM health under **Replicated items**
- >[!NOTE] > > - During initial replication the status might take some time to refresh, without progress. Click the **Refresh** button, to get the latest status.
site-recovery Azure To Azure Tutorial Enable Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-tutorial-enable-replication.md
Title: Tutorial to set up Azure VM disaster recovery with Azure Site Recovery description: In this tutorial, set up disaster recovery for Azure VMs to another Azure region, using the Site Recovery service. Previously updated : 04/29/2022 Last updated : 08/22/2022 #Customer intent: As an Azure admin, I want to set up disaster recovery for my Azure VMs, so that they're available in a secondary region if the primary region becomes unavailable.
In the vault settings, select **Enable Site Recovery**.
## Enable replication
-Select the source settings, and enable VM replication.
+Select the source settings and enable VM replication.
### Select source settings 1. In the vault > **Site Recovery** page, under **Azure virtual machines**, select **Enable replication**.
- ![Selection to enable replication for Azure VMs](./media/azure-to-azure-tutorial-enable-replication/enable-replication.png)
+ ![Screenshot showing selection to enable replication for Azure VMs.](./media/azure-to-azure-tutorial-enable-replication/enable-replication.png)
-2. In **Source**> **Source location**, select the source Azure region in which VMs are currently running.
-3. In **Azure virtual machine deployment model**, leave the default **Resource Manager** setting.
-4. In **Source subscription**, select the subscription in which VMs are running. You can select any subscription that's in the same Azure Active Directory (AD) tenant as the vault.
-5. In **Source resource group**, select the resource group containing the VMs.
-6. In **Disaster recovery between availability zones**, leave the default **No** setting.
+2. In the **Enable replication** page, under the **Source** tab, do the following:
+ - **Region**: Select the source Azure region in which VMs are currently running.
+ - **Subscription**: Select the subscription in which VMs are running. You can select any subscription that's in the same Azure Active Directory (Azure AD) tenant as the vault.
+ - **Resource group**: Select the desired resource group from the drop-down.
+ - **Virtual machine deployment model**: Retain the default **Resource Manager** setting.
+ - **Disaster recovery between availability zones**: Retain the default **No** setting.
+
+ :::image type="Set up source" source="./media/azure-to-azure-tutorial-enable-replication/source.png" alt-text="Screenshot showing how to set up source.":::
- ![Set up source](./media/azure-to-azure-tutorial-enable-replication/source.png)
-
-7. Select **Next**.
+3. Select **Next**.
### Select the VMs Site Recovery retrieves the VMs associated with the selected subscription/resource group.
-1. In **Virtual Machines**, select the VMs you want to enable for disaster recovery.
+1. In **Virtual machines**, select the VMs you want to enable for disaster recovery. You can select up to 10 VMs.
- ![Page to select VMs for replication](./media/azure-to-azure-tutorial-enable-replication/select-vm.png)
+ :::image type="Virtual machine selection" source="./media/azure-to-azure-tutorial-enable-replication/virtual-machine-selection.png" alt-text="Screenshot that highlights where you select virtual machines.":::
2. Select **Next**. ### Review replication settings 1. In **Replication settings**, review the settings. Site Recovery creates default settings/policy for the target region. For the purposes of this tutorial, we use the default settings.
-2. Select **Enable replication**.
- ![Page to customize settings and enable replication](./media/azure-to-azure-tutorial-enable-replication/enable-vm-replication.png)
+2. Select **Next**.
+
+ :::image type="enable replication" source="./media/azure-to-azure-tutorial-enable-replication/enable-vm-replication.png" alt-text="Screenshot to customize settings and enable replication.":::
+
+### Manage
+
+1. In **Manage**, do the following:
+ 1. Under **Replication policy**,
+    - **Replication policy**: Select the replication policy. It defines the settings for recovery point retention history and app-consistent snapshot frequency. By default, Site Recovery creates a new replication policy with default settings of 24 hours for recovery point retention (see the PowerShell sketch at the end of this section).
+    - **Replication group**: Create a replication group to replicate VMs together and generate multi-VM consistent recovery points. Enabling multi-VM consistency can impact workload performance, so use it only if the machines run the same workload and you need consistency across multiple machines.
+ 1. Under **Extension settings**,
+ - Select **Update settings** and **Automation account**.
+
+ :::image type="manage" source="./media/azure-to-azure-tutorial-enable-replication/manage.png" alt-text="Screenshot showing manage tab.":::
+
+1. Select **Next**.
+
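If you want a custom policy instead of the default described above, here's a hedged PowerShell sketch using the Az.RecoveryServices module. The policy name is a placeholder, and the vault context is assumed to be set already with `Set-AzRecoveryServicesAsrVaultContext`.

```powershell
# A hedged sketch; the policy name "A2APolicy-24h" is a placeholder.
$job = New-AzRecoveryServicesAsrPolicy -AzureToAzure `
    -Name "A2APolicy-24h" `
    -RecoveryPointRetentionInHours 24 `
    -ApplicationConsistentSnapshotFrequencyInHours 4

# The cmdlet returns an ASR job; once it completes, retrieve the policy object.
$policy = Get-AzRecoveryServicesAsrPolicy -Name "A2APolicy-24h"
```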
+### Review
-3. Track replication progress in the notifications.
+In **Review**, review the VM settings and select **Enable replication**.
- ![Track progress in notifications](./media/azure-to-azure-tutorial-enable-replication/notification.png)
- ![Track successful replication notification](./media/azure-to-azure-tutorial-enable-replication/notification-success.png)
-4. The VMs you enable appear on the vault > **Replicated items** page.
+The VMs you enable appear on the vault > **Replicated items** page.
- ![VM on the Replicated Items page](./media/azure-to-azure-tutorial-enable-replication/replicated-items.png)
+![Screenshot of VM on the Replicated Items page](./media/azure-to-azure-tutorial-enable-replication/replicated-items.png)
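
You can also track the enable-replication job from PowerShell. A minimal sketch, assuming the Az.RecoveryServices module and that the vault context has been set with `Set-AzRecoveryServicesAsrVaultContext`:

```powershell
# List the most recent Site Recovery jobs and their state.
Get-AzRecoveryServicesAsrJob |
    Sort-Object StartTime -Descending |
    Select-Object -Property Name, DisplayName, State, StartTime -First 5
```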
## Next steps
site-recovery Deploy Vmware Azure Replication Appliance Modernized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/deploy-vmware-azure-replication-appliance-modernized.md
+
+ Title: Deploy Azure Site Recovery replication appliance - Modernized
+description: This article describes support and requirements when deploying the replication appliance for VMware disaster recovery to Azure with Azure Site Recovery - Modernized
++ Last updated : 09/21/2022++
+# Deploy Azure Site Recovery replication appliance - Modernized
+
+>[!NOTE]
+> The information in this article applies to Azure Site Recovery - Modernized. For information about configuration server requirements in Classic releases, [see this article](vmware-azure-configuration-server-requirements.md).
+
+>[!NOTE]
+> Ensure you create a new and exclusive Recovery Services vault for setting up the ASR replication appliance. Don't use an existing vault.
+
+You deploy an on-premises replication appliance when you use [Azure Site Recovery](site-recovery-overview.md) for disaster recovery of VMware VMs or physical servers to Azure.
+
+- The replication appliance coordinates communications between on-premises VMware and Azure. It also manages data replication.
+- [Learn more](vmware-azure-architecture-modernized.md) about the Azure Site Recovery replication appliance components and processes.
+
+## Pre-requisites
+
+### Hardware requirements
+
+**Component** | **Requirement**
+--- | ---
+CPU cores | 8
+RAM | 32 GB
+Number of disks | 3, including the OS disk - 80 GB, data disk 1 - 620 GB, data disk 2 - 620 GB
+
+### Software requirements
+
+**Component** | **Requirement**
+--- | ---
+Operating system | Windows Server 2016
+Operating system locale | English (en-*)
+Windows Server roles | Don't enable these roles: <br> - Active Directory Domain Services <br>- Internet Information Services <br> - Hyper-V
+Group policies | Don't enable these group policies: <br> - Prevent access to the command prompt. <br> - Prevent access to registry editing tools. <br> - Trust logic for file attachments. <br> - Turn on Script Execution. <br> [Learn more](/previous-versions/windows/it-pro/windows-7/gg176671(v=ws.10))
+IIS | - No pre-existing default website <br> - No pre-existing website/application listening on port 443 <br>- Enable [anonymous authentication](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc731244(v=ws.10)) <br> - Enable [FastCGI](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc753077(v=ws.10)) setting
+FIPS (Federal Information Processing Standards) | Do not enable FIPS mode|
+
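A hedged PowerShell sketch for spot-checking some of these software requirements on the appliance VM. It only reads settings; the registry path is the standard Windows FIPS policy location.

```powershell
# Read-only checks; run on the appliance VM.
# FIPS mode must not be enabled (0 or a missing value means it's off).
Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\FipsAlgorithmPolicy' -ErrorAction SilentlyContinue |
    Select-Object Enabled

# The PowerShell execution policy must not be AllSigned or Restricted.
Get-ExecutionPolicy -List

# Nothing should already be listening on port 443.
Get-NetTCPConnection -LocalPort 443 -State Listen -ErrorAction SilentlyContinue
```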
+### Network requirements
+
+|**Component** | **Requirement**|
+| --- | --- |
+|Fully qualified domain name (FQDN) | Static|
+|Ports | 443 (Control channel orchestration)<br>9443 (Data transport)|
+|NIC type | VMXNET3 (if the appliance is a VMware VM)|
++
+#### Allow URLs
+
+Ensure the following URLs are allowed and reachable from the Azure Site Recovery replication appliance for continuous connectivity:
+
+ | **URL** | **Details** |
+ | - | -|
+ | portal.azure.com | Navigate to the Azure portal. |
+ | `*.windows.net `<br>`*.msftauth.net`<br>`*.msauth.net`<br>`*.microsoft.com`<br>`*.live.com `<br>`*.office.com ` | To sign in to your Azure subscription. |
+ |`*.microsoftonline.com `|Create Azure Active Directory (AD) apps for the appliance to communicate with Azure Site Recovery. |
+ |management.azure.com |Create Azure AD apps for the appliance to communicate with the Azure Site Recovery service. |
+ |`*.services.visualstudio.com `|Upload app logs used for internal monitoring. |
+ |`*.vault.azure.net `|Manage secrets in the Azure Key Vault. Note: Ensure machines to replicate have access to this. |
+ |aka.ms |Allow access to also known as links. Used for Azure Site Recovery appliance updates. |
+ |download.microsoft.com/download |Allow downloads from Microsoft download. |
+ |`*.servicebus.windows.net `|Communication between the appliance and the Azure Site Recovery service. |
+ |`*.discoverysrv.windowsazure.com `<br><br>`*.hypervrecoverymanager.windowsazure.com `<br><br> `*.backup.windowsazure.com ` |Connect to Azure Site Recovery micro-service URLs.
+ |`*.blob.core.windows.net `|Upload data to Azure storage which is used to create target disks. |
++
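To confirm connectivity, you can test a representative subset of these endpoints from the appliance. The host names below stand in for the wildcard entries and are assumptions; adjust the list for your environment.

```powershell
# Example hosts standing in for the wildcard entries above.
$endpoints = @(
    'portal.azure.com',
    'login.microsoftonline.com',
    'management.azure.com',
    'download.microsoft.com',
    'aka.ms'
)
foreach ($endpoint in $endpoints) {
    Test-NetConnection -ComputerName $endpoint -Port 443 |
        Select-Object ComputerName, TcpTestSucceeded
}
```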
+### Folder exclusions from Antivirus program
+
+#### If antivirus software is active on the appliance
+
+Exclude the following folders from the antivirus software to ensure smooth replication and to avoid connectivity issues.
+
+C:\ProgramData\Microsoft Azure <br>
+C:\ProgramData\ASRLogs <br>
+C:\Windows\Temp\MicrosoftAzure <br>
+C:\Program Files\Microsoft Azure Appliance Auto Update <br>
+C:\Program Files\Microsoft Azure Appliance Configuration Manager <br>
+C:\Program Files\Microsoft Azure Push Install Agent <br>
+C:\Program Files\Microsoft Azure RCM Proxy Agent <br>
+C:\Program Files\Microsoft Azure Recovery Services Agent <br>
+C:\Program Files\Microsoft Azure Server Discovery Service <br>
+C:\Program Files\Microsoft Azure Site Recovery Process Server <br>
+C:\Program Files\Microsoft Azure Site Recovery Provider <br>
+C:\Program Files\Microsoft Azure to On-Premises Reprotect agent <br>
+C:\Program Files\Microsoft Azure VMware Discovery Service <br>
+C:\Program Files\Microsoft On-Premise to Azure Replication agent <br>
+E:\ <br>
+
+#### If antivirus software is active on the source machine
+
+If the source machine has antivirus software active, the installation folder should be excluded. Exclude the folder C:\ProgramData\ASR\agent to ensure smooth replication.
+
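If the appliance runs Microsoft Defender Antivirus, the exclusions can be added with `Add-MpPreference`. This is a sketch covering a subset of the folders above; add the remaining folders the same way, and use the vendor's own mechanism for third-party antivirus products.

```powershell
# A subset of the folders listed above; add the remaining paths the same way.
$paths = @(
    'C:\ProgramData\Microsoft Azure',
    'C:\ProgramData\ASRLogs',
    'C:\Windows\Temp\MicrosoftAzure',
    'C:\Program Files\Microsoft Azure Appliance Configuration Manager',
    'C:\Program Files\Microsoft Azure Site Recovery Process Server',
    'E:\'
)
foreach ($path in $paths) { Add-MpPreference -ExclusionPath $path }

# On each source machine, exclude the mobility agent installation folder.
Add-MpPreference -ExclusionPath 'C:\ProgramData\ASR\agent'
```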
+## Sizing and capacity
+An appliance that uses an in-built process server to protect the workload can handle up to 200 virtual machines, based on the following configurations:
+
+ |CPU | Memory | Cache disk size | Data change rate | Protected machines |
+ |---|---|---|---|---|
+ |16 vCPUs (2 sockets * 8 cores @ 2.5 GHz) | 32 GB | 1 TB | >1 TB to 2 TB | Use to replicate 151 to 200 machines.|
+
+- You can perform discovery of all the machines in a vCenter server, using any of the replication appliances in the vault.
+
+- You can [switch a protected machine](switch-replication-appliance-modernized.md) between different appliances in the same vault, provided the selected appliance is healthy.
+
+For detailed information about how to use multiple appliances and how to fail over a replication appliance, see [this article](switch-replication-appliance-modernized.md).
++
+## Prepare Azure account
+
+To create and register the Azure Site Recovery replication appliance, you need an Azure account with:
+
+- Contributor or Owner permissions on the Azure subscription.
+- Permissions to register Azure Active Directory apps.
+- Owner or Contributor plus User Access Administrator permissions on the Azure subscription to create a Key Vault, used during registration of the Azure Site Recovery replication appliance with Azure.
+
+If you just created a free Azure account, you're the owner of your subscription. If you're not the subscription owner, work with the owner for the required permissions.
+
+## Required permissions
+
+**Here are the required key vault permissions**:
+
+- Microsoft.OffAzure/*
+- Microsoft.KeyVault/register/action
+- Microsoft.KeyVault/vaults/read
+- Microsoft.KeyVault/vaults/keys/read
+- Microsoft.KeyVault/vaults/secrets/read
+- Microsoft.Recoveryservices/*
+
+**Follow these steps to assign the required permissions**:
+
+1. In the Azure portal, search for **Subscriptions**. Under **Services**, select the **Subscriptions** search box and search for the Azure subscription.
+
+2. In the **Subscriptions** page, select the subscription in which you created the Recovery Services vault.
+
+3. In the selected subscription, select **Access control** (IAM) > **Check access**. In **Check access**, search for the relevant user account.
+
+4. In **Add a role assignment**, select **Add**, select the Contributor or Owner role, and select the account. Then select **Save**.
+
+ To register the appliance, your Azure account needs permissions to register Azure Active Directory apps.
+
+ **Follow these steps to assign required permissions**:
+
+ - In Azure portal, navigate to **Azure Active Directory** > **Users** > **User Settings**. In **User settings**, verify that Azure AD users can register applications (set to *Yes* by default).
+
+   - If the **App registrations** setting is set to *No*, request the tenant/global admin to assign the required permission. Alternatively, the tenant/global admin can assign the Application Developer role to an account to allow registration of Azure Active Directory apps.
++
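If you prefer PowerShell for the subscription role assignment described above, here's a hedged sketch using the Az module; the sign-in name and subscription ID are placeholders.

```powershell
# Placeholders: replace the sign-in name and subscription ID with your own values.
New-AzRoleAssignment -SignInName 'user@contoso.com' `
    -RoleDefinitionName 'Contributor' `
    -Scope '/subscriptions/<subscription-id>'
```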
+## Prepare infrastructure
+
+You need to set up an Azure Site Recovery replication appliance in the on-premises environment to enable recovery on your on-premises machine. For detailed information on the operations performed by the appliance, [see this section](vmware-azure-architecture-modernized.md).
+
+Go to **Recovery Services Vault** > **Getting Started**. In VMware machines to Azure, select
+**Prepare Infrastructure** and proceed with the sections detailed below:
++
+To set up a new appliance, you can use an OVF template (recommended) or PowerShell. Ensure you meet all the [hardware](#hardware-requirements) and [software requirements](#software-requirements), and any other prerequisites.
+
+## Create Azure Site Recovery replication appliance
+
+You can create the Site Recovery replication appliance by using the OVF template or through PowerShell.
+
+>[!NOTE]
+> The appliance setup needs to be performed in a sequential manner. Parallel registration of multiple appliances cannot be executed.
++
+### Create replication appliance through OVF template
+
+We recommend this approach as Azure Site Recovery ensures all prerequisite configurations are handled by the template.
+The OVF template spins up a machine with the required specifications.
++
+**Follow these steps:**
+
+1. Download the OVF template to set up an appliance on your on-premises environment.
+2. After the deployment is complete, power on the appliance VM to accept the Microsoft evaluation license.
+3. In the next screen, provide a password for the administrator user.
+4. Select **Finalize**. The system reboots, and you can log in with the administrator user account.
+
+### Set up the appliance through PowerShell
+
+In case of any organizational restrictions, you can manually set up the Site Recovery replication appliance through PowerShell. Follow these steps:
+
+1. Download the installers from [here](https://aka.ms/V2ARcmApplianceCreationPowershellZip) and place this folder on the Azure Site Recovery replication appliance.
+2. After copying the zip file, unzip it and extract its contents.
+3. Go to the path where the folder was extracted and run the following PowerShell script as an administrator:
+
+ **DRInstaller.ps1**
+
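A minimal sketch of the extract-and-run steps; the download file name and destination folder are assumptions, so substitute your own paths.

```powershell
# Assumed file and folder names; substitute the actual download path on the appliance.
Expand-Archive -Path 'C:\Downloads\V2ARcmApplianceCreationPowershell.zip' -DestinationPath 'C:\ASRSetup'

# Run the installer script from an elevated PowerShell session.
Set-Location 'C:\ASRSetup'
.\DRInstaller.ps1
```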
+## Register appliance
+ Once you create the appliance, the Microsoft Azure appliance configuration manager is launched automatically. Prerequisites such as internet connectivity, time sync, system configurations, and group policies (listed below) are validated.
+
+ - CheckRegistryAccessPolicy - Prevents access to registry editing tools.
+ - Key: HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System
+    - DisableRegistryTools value should not be equal to 0.
+
+ - CheckCommandPromptPolicy - Prevents access to the command prompt.
+
+ - Key: HKLM\SOFTWARE\Policies\Microsoft\Windows\System
+    - DisableCMD value should not be equal to 0.
+
+ - CheckTrustLogicAttachmentsPolicy - Trust logic for file attachments.
+
+ - Key: HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Attachments
+    - UseTrustedHandlers value should not be equal to 3.
+
+ - CheckPowershellExecutionPolicy - Turn on Script Execution.
+
+ - PowerShell execution policy shouldn't be AllSigned or Restricted
+ - Ensure the group policy 'Turn on Script Execution Attachment Manager' is not set to Disabled or 'Allow only signed scripts'
++
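To check these values yourself before registration, here's a read-only PowerShell sketch; the registry paths are the ones listed above.

```powershell
# Read-only checks of the group-policy values listed above.
Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System' -ErrorAction SilentlyContinue |
    Select-Object DisableRegistryTools
Get-ItemProperty 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\System' -ErrorAction SilentlyContinue |
    Select-Object DisableCMD
Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Attachments' -ErrorAction SilentlyContinue |
    Select-Object UseTrustedHandlers

# The execution policy shouldn't be AllSigned or Restricted.
Get-ExecutionPolicy
```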
+ **Use the following steps to register the appliance**:
+
+1. Configure the proxy settings by toggling on the **use proxy to connect to internet** option.
+
+ All Azure Site Recovery services will use these settings to connect to the internet. Only HTTP proxy is supported.
+
+2. Ensure the [required URLs](#allow-urls) are allowed and are reachable from the Azure Site Recovery replication appliance for continuous connectivity.
+
+3. Once the prerequisites have been checked, information about all the appliance components is fetched in the next step. Review the status of all components, and then click **Continue**. After saving the details, proceed to choose the appliance connectivity.
+
+4. After saving the connectivity details, select **Continue** to proceed to registration with Microsoft Azure.
+
+5. Ensure the [prerequisites](#pre-requisites) are met, and then proceed with registration.
+
+ :::image type="Register appliance" source="./media/deploy-vmware-azure-replication-appliance-modernized/app-setup-register.png" alt-text="Screenshot showing register appliance.":::
+
+ - **Friendly name of appliance**: Provide a friendly name with which you want to track this appliance in the Azure portal under recovery services vault infrastructure.
+
+ - **Azure Site Recovery replication appliance key**: Copy the key from the portal by navigating to **Recovery Services vault** > **Getting started** > **Site Recovery** > **VMware to Azure: Prepare Infrastructure**.
+
+ - After pasting the key, click **Login.** You will be redirected to a new authentication tab.
+
+ By default, an authentication code will be generated as highlighted below, in the **Appliance configuration manager** page. Use this code in the authentication tab.
+
+ - Enter your Microsoft Azure credentials to complete registration.
+
+     After successful registration, you can close the tab and move to the appliance configuration manager to continue the setup.
+
+ :::image type="authentication code" source="./media/deploy-vmware-azure-replication-appliance-modernized/enter-code.png" alt-text="Screenshot showing authentication code.":::
+
+ > [!NOTE]
+    > An authentication code expires within 5 minutes of generation. If you're inactive for longer than this duration, you'll be prompted to sign in to Azure again.
++
+6. After a successful login, the subscription, resource group, and Recovery Services vault details are displayed. You can log out if you want to change the vault. Otherwise, select **Continue** to proceed.
+
+ :::image type="Appliance registered" source="./media/deploy-vmware-azure-replication-appliance-modernized/app-setup.png" alt-text="Screenshot showing appliance registered.":::
+
+ After successful registration, proceed to configure vCenter details.
+
+ :::image type="Configuration of vCenter" source="./media/deploy-vmware-azure-replication-appliance-modernized/vcenter-information.png" alt-text="Screenshot showing configuration of vCenter.":::
+
+7. Select **Add vCenter Server** to add vCenter information. Enter the server name or IP address of the vCenter and the port information. Then provide the username, password, and a friendly name. These are used to fetch details of the [virtual machines managed through the vCenter](vmware-azure-tutorial-prepare-on-premises.md#prepare-an-account-for-automatic-discovery). The user account details are encrypted and stored locally on the machine.
+
+ >[!NOTE]
+ >If you're trying to add the same vCenter Server to multiple appliances, then ensure that the same friendly name is used in all the appliances.
+
+8. After successfully saving the vCenter information, select **Add virtual machine credentials** to provide user details of the VMs discovered through the vCenter.
+
+ >[!NOTE]
+    > - For Linux OS, ensure that you provide root credentials. For Windows OS, add a user account with admin privileges. These credentials are used to push install the mobility agent onto the source VM during the enable replication operation. The credentials can be chosen per VM in the Azure portal during the enable replication workflow.
+ > - Visit the appliance configurator to edit or add credentials to access your machines.
+
+9. After you add the vCenter details, expand **Provide Physical server details** to add the details of any physical servers you plan to protect.
+
+ :::image type="Physical server credentials." source="./media/deploy-vmware-azure-replication-appliance-modernized/physical-server-credentials.png" alt-text="Screenshot of Physical server credentials.":::
+
+10. Select **Add credentials** to add the credentials of the machine(s) you plan to protect. Add all the details such as the **Operating system**, **Provide a friendly name for the credential**, **Username**, and **Password**. The user account details will be encrypted and stored locally in the machine. Select **Add**.
+
+ :::image type="Add Physical server credentials." source="./media/deploy-vmware-azure-replication-appliance-modernized/add-physical-server-credentials.png" alt-text="Screenshot of Add Physical server credentials.":::
+
+11. Select **Add server** to add physical server details. Provide the machine's **IP address/FQDN of physical server**, **Select credential account** and select **Add**.
+
+ :::image type="Add Physical server details." source="./media/deploy-vmware-azure-replication-appliance-modernized/add-physical-server-details.png" alt-text="Screenshot of Add Physical server details.":::
+
+12. After successfully adding the details, select **Continue** to install all Azure Site Recovery replication appliance components and register with Azure services. This activity can take up to 30 minutes.
+
+ Ensure you do not close the browser while configuration is in progress.
+
+ >[!NOTE]
+ > Appliance cloning is not supported with the modernized architecture. If you attempt to clone, it might disrupt the recovery flow.
++
+## View Azure Site Recovery replication appliance in Azure portal
+
+After successful configuration of Azure Site Recovery replication appliance, navigate to Azure portal, **Recovery Services Vault**.
+
+Under **Getting started**, select **Prepare infrastructure (Modernized)**. You can see that an Azure Site Recovery replication appliance is already registered with this vault. Now you're all set! Start protecting your source machines through this replication appliance.
+
+When you click *Select 1 appliance(s)*, you're redirected to the Azure Site Recovery replication appliance view, where the list of appliances registered to this vault is displayed.
+
+You'll also see a **Discovered items** tab that lists all of the discovered vCenter servers/vSphere hosts.
+
+![Replication appliance modernized](./media/deploy-vmware-azure-replication-appliance-modernized/discovered-items.png)
++
+## Next steps
+Set up disaster recovery of [VMware VMs](vmware-azure-set-up-replication-tutorial-modernized.md) to Azure.
site-recovery Failover Failback Overview Modernized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/failover-failback-overview-modernized.md
+
+ Title: About failover and failback in Azure Site Recovery - Modernized
+description: Learn about failover and failback in Azure Site Recovery - Modernized
+ Last updated : 09/21/2022++
+# About on-premises disaster recovery failover/failback - Modernized
+
+This article provides an overview of failover and failback during disaster recovery of on-premises machines to Azure with [Azure Site Recovery](site-recovery-overview.md) - Modernized.
+
+For information about failover and failback in Azure Site Recovery Classic releases, [see this article](failover-failback-overview.md).
+
+## Recovery stages
+
+Failover and failback in Site Recovery has four stages:
+
+- **Stage 1: Fail over from on-premises**: After setting up replication to Azure for on-premises machines, when your on-premises site goes down, you fail those machines over to Azure. After failover, Azure VMs are created from replicated data.
+- **Stage 2: Reprotect Azure VMs**: In Azure, you reprotect the Azure VMs so that they start replicating back to the on-premises site. The on-premises VM (if available) is turned off during reprotection, to help ensure data consistency.
+- **Stage 3: Fail over from Azure**: When your on-premises site is running as normal again, you run another failover, this time to fail back Azure VMs to your on-premises site. You can fail back to the original location from which you failed over, or to an alternate location. This activity is referred to as a *planned failover*.
+- **Stage 4: Reprotect on-premises machines**: After failing back, again enable replication of the on-premises machines to Azure.
+
+## Failover
+
+You perform a failover as part of your business continuity and disaster recovery (BCDR) strategy.
+
+- As a first step in your BCDR strategy, you replicate your on-premises machines to Azure on an ongoing basis. Users access workloads and apps running on the on-premises source machines.
+- If the need arises, for example if there's an outage on-premises, you fail the replicating machines over to Azure. Azure VMs are created using the replicated data.
+- For business continuity, users can continue accessing apps on the Azure VMs.
+
+Failover is a two-phase activity:
+
+- **Failover**: The failover that creates and brings up an Azure VM using the selected recovery point.
+- **Commit**: After failover, you verify the VM in Azure:
+ - You can then commit the failover to the selected recovery point or select a different point for the commit.
+ - After committing the failover, the recovery point can't be changed.
++
+## Connect to Azure after failover
+
+To connect to the Azure VMs created after failover using RDP/SSH, there are several requirements.
+
+**Failover** | **Location** | **Actions**
+--- | --- | ---
+**Azure VM running Windows** | On the on-premises machine before failover | **Access over the internet**: Enable RDP. Make sure that TCP and UDP rules are added for **Public**, and that RDP is allowed for all profiles in **Windows Firewall** > **Allowed Apps**.<br/><br/> **Access over site-to-site VPN**: Enable RDP on the machine. Check that RDP is allowed in the **Windows Firewall** -> **Allowed apps and features**, for **Domain and Private** networks.<br/><br/> Make sure the operating system SAN policy is set to **OnlineAll**. [Learn more](https://support.microsoft.com/kb/3031135).<br/><br/> Make sure there are no Windows updates pending on the VM when you trigger a failover. Windows Update might start when you fail over, and you won't be able to log on to the VM until the updates are done.
+**Azure VM running Windows** | On the Azure VM after failover | [Add a public IP address](/archive/blogs/srinathv/how-to-add-a-public-ip-address-to-azure-vm-for-vm-failed-over-using-asr) for the VM.<br/><br/> The network security group rules on the failed over VM (and the Azure subnet to which it is connected) must allow incoming connections to the RDP port.<br/><br/> Check **Boot diagnostics** to verify a screenshot of the VM. If you can't connect, check that the VM is running, and review [troubleshooting tips](https://social.technet.microsoft.com/wiki/contents/articles/31666.troubleshooting-remote-desktop-connection-after-failover-using-asr.aspx).
+**Azure VM running Linux** | On the on-premises machine before failover | Ensure that the Secure Shell service on the VM is set to start automatically on system boot.<br/><br/> Check that firewall rules allow an SSH connection to it.
+**Azure VM running Linux** | On the Azure VM after failover | The network security group rules on the failed over VM (and the Azure subnet to which it is connected) need to allow incoming connections to the SSH port.<br/><br/> [Add a public IP address](/archive/blogs/srinathv/how-to-add-a-public-ip-address-to-azure-vm-for-vm-failed-over-using-asr) for the VM.<br/><br/> Check **Boot diagnostics** for a screenshot of the VM.<br/><br/>
+
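For the Windows rows above, here's a hedged sketch of the pre-failover preparation on the on-premises machine. It assumes an English-locale system where the firewall display group is named "Remote Desktop".

```powershell
# Run on the on-premises Windows machine before failover.
# Allow Remote Desktop connections and enable the built-in RDP firewall rules.
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server' `
    -Name fDenyTSConnections -Value 0
Enable-NetFirewallRule -DisplayGroup 'Remote Desktop'

# Also confirm the operating system SAN policy is OnlineAll (for example, with diskpart's 'san' command).
```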
+## Types of failover
+
+Site Recovery provides different failover options.
+
+**Failover** | **Details** | **Recovery** | **Workflow**
+--- | --- | --- | ---
+**Test failover** | Used to run a drill that validates your BCDR strategy, without any data loss or downtime.| Creates a copy of the VM in Azure, with no impact on ongoing replication, or on your production environment. | 1. Run a test failover on a single VM, or on multiple VMs in a recovery plan.<br/><br/> 2. Select a recovery point to use for the test failover.<br/><br/> 3. Select an Azure network in which the Azure VM will be located when it's created after failover. The network is only used for the test failover.<br/><br/> 4. Verify that the drill worked as expected. Site Recovery automatically cleans up VMs created in Azure during the drill.
+**Planned failover-Hyper-V** | Usually used for planned downtime.<br/><br/> Source VMs are shut down. The latest data is synchronized before initiating the failover. | Zero data loss for the planned workflow. | 1. Plan a downtime maintenance window and notify users.<br/><br/> 2. Take user-facing apps offline.<br/><br/> 3. Initiate a planned failover with the latest recovery point. The failover doesn't run if the machine isn't shut down, or if errors are encountered.<br/><br/> 4. After the failover, check that the replica Azure VM is active in Azure.<br/><br/> 5. Commit the failover to finish up. The commit action deletes all recovery points.
+**Failover-Hyper-V** | Usually run if there's an unplanned outage, or the primary site isn't available.<br/><br/> Optionally shut down the VM and synchronize final changes before initiating the failover. | Minimal data loss for apps. | 1. Initiate your BCDR plan. <br/><br/> 2. Initiate a failover. Specify whether Site Recovery should shut down the VM and synchronize/replicate the latest changes before triggering the failover.<br/><br/> 3. You can failover to a number of recovery point options, summarized in the table below.<br/><br/> If you don't enable the option to shut down the VM, or if Site Recovery can't shut it down, the latest recovery point is used.<br/>The failover runs even if the machine can't be shut down.<br/><br/> 4. After failover, you check that the replica Azure VM is active in Azure.<br/> If required, you can select a different recovery point from the retention window of 24 hours.<br/><br/> 5. Commit the failover to finish up. The commit action deletes all available recovery points.
+**Failover-VMware** | Usually run if there's an unplanned outage, or the primary site isn't available.<br/><br/> Optionally specify that Site Recovery should try to trigger a shutdown of the VM, and to synchronize and replicate final changes before initiating the failover. | Minimal data loss for apps. | 1. Initiate your BCDR plan. <br/><br/> 2. Initiate a failover from Site Recovery. Specify whether Site Recovery should try to trigger VM shutdown and synchronize before running the failover.<br/> The failover runs even if the machines can't be shut down.<br/><br/> 3. After the failover, check that the replica Azure VM is active in Azure. <br/>If required, you can select a different recovery point from the retention window of 72 hours.<br/><br/> 4. Commit the failover to finish up. The commit action deletes all recovery points.<br/> For Windows VMs, Site Recovery disables the VMware tools during failover.
+**Planned failover-VMware** | You can perform a planned failover from Azure to on-premises. | Since it is a planned failover activity, the recovery point is generated after the planned failover job is triggered. | When the planned failover is triggered, pending changes are copied to on-premises, a latest recovery point of the VM is generated and Azure VM is shut down.<br/><br/> Follow the failover process as discussed [here](vmware-azure-tutorial-failover-failback-modernized.md#planned-failover-from-azure-to-on-premises). Post this, on-premises machine is turned on. After a successful planned failover, the machine will be active in your on-premises environment.
+
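A hedged PowerShell sketch of the test failover workflow using the Az.RecoveryServices module; `$container`, the VM name, and the test network resource ID are placeholders you'd resolve from your own vault and target virtual network.

```powershell
# $container is assumed to be a protection container retrieved from the vault.
$rpi = Get-AzRecoveryServicesAsrReplicationProtectedItem -ProtectionContainer $container -FriendlyName 'ContosoVM'

$testNetworkId = '/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/virtualNetworks/<test-vnet>'
Start-AzRecoveryServicesAsrTestFailoverJob -ReplicationProtectedItem $rpi `
    -Direction PrimaryToRecovery `
    -AzureVMNetworkId $testNetworkId

# After validating the drill, clean up the test failover resources.
Start-AzRecoveryServicesAsrTestFailoverCleanupJob -ReplicationProtectedItem $rpi -Comment 'Drill validated'
```

The cleanup job removes the VMs that Site Recovery created in Azure for the drill.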
+## Failover processing
+
+In some scenarios, failover requires additional processing that takes around 8 to 10 minutes to complete. You might notice longer test failover times for:
+
+* VMware VMs that don't have the DHCP service enabled.
+* VMware VMs that don't have the following boot drivers: storvsc, vmbus, storflt, intelide, atapi.
+
+## Recovery point options
+
+During failover, you can select a number of recovery point options.
+
+**Option** | **Details**
+--- | ---
+**Latest (lowest RPO)** | This option provides the lowest recovery point objective (RPO). It first processes all the data that has been sent to Site Recovery service, to create a recovery point for each VM, before failing over to it. This recovery point has all the data replicated to Site Recovery when the failover was triggered.
+**Latest processed** | This option fails over VMs to the latest recovery point processed by Site Recovery. To see the latest recovery point for a specific VM, check **Latest Recovery Points** in the VM settings. This option provides a low RTO (Recovery Time Objective), because no time is spent processing unprocessed data.
+**Latest app-consistent** | This option fails over VMs to the latest application-consistent recovery point processed by Site Recovery if app-consistent recovery points are enabled. Check the latest recovery point in the VM settings.
+**Latest multi-VM processed** | This option is available for recovery plans with one or more VMs that have multi-VM consistency enabled. VMs with the setting enabled fail over to the latest common multi-VM consistent recovery point. Any other VMs in the plan fail over to the latest processed recovery point.
+**Latest multi-VM app-consistent** | This option is available for recovery plans with one or more VMs that have multi-VM consistency enabled. VMs that are part of a replication group fail over to the latest common multi-VM application-consistent recovery point. Other VMs fail over to their latest application-consistent recovery point.
+**Custom** | Use this option to fail over a specific VM to a particular recovery point in time. This option isn't available for recovery plans.
+
+> [!NOTE]
+> Recovery points can't be migrated to another Recovery Services vault.
+
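A hedged sketch of listing recovery points and failing over to a custom point with the Az.RecoveryServices module; `$rpi` is a placeholder for a replication protected item retrieved from your vault.

```powershell
# $rpi is a placeholder for a replication protected item retrieved from your vault.
$points = Get-AzRecoveryServicesAsrRecoveryPoint -ReplicationProtectedItem $rpi |
    Sort-Object RecoveryPointTime -Descending
$points | Select-Object Name, RecoveryPointTime, RecoveryPointType

# Fail over to a specific (custom) recovery point instead of the latest.
Start-AzRecoveryServicesAsrUnplannedFailoverJob -ReplicationProtectedItem $rpi `
    -Direction PrimaryToRecovery `
    -RecoveryPoint $points[0]
```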
+## Reprotection/planned failover
+
+After failover to Azure, the replicated Azure VMs are in an unprotected state.
+
+- As a first step to failing back to your on-premises site, you need to start the Azure VMs replicating to on-premises. The reprotection process depends on the type of machines you failed over.
+- After machines are replicating from Azure to on-premises, you can run a failover from Azure to your on-premises site.
+- After machines are running on-premises again, you can enable replication so that they replicate to Azure for disaster recovery.
+
+**Planned failover works as follows**:
+
+- To fail back to on-premises, a VM needs at least one recovery point in order to fail back. In a recovery plan, all VMs in the plan need at least one recovery point.
+- As this is a planned failover activity, you will be allowed to select the type of recovery point you want to fail back to. We recommend that you use a crash-consistent point.
+ - There is also an app-consistent recovery point option. In this case, a single VM recovers to its latest available app-consistent recovery point. For a recovery plan with a replication group, each replication group recovers to its common available recovery point.
+ - App-consistent recovery points can be behind in time, and there might be loss in data.
+- During failover from Azure to the on-premises site, Site Recovery shuts down the Azure VMs. When you commit the failover, Site Recovery removes the failed back Azure VMs in Azure.
++
+## VMware/physical reprotection/failback
+
+To reprotect and fail back VMware machines and physical servers from Azure to on-premises, ensure that you have a healthy appliance.
+
+**Appliance selection**
+
+- You can select any of the Azure Site Recovery replication appliances registered under a vault to re-protect to on-premises. You don't require a separate process server in Azure for the re-protect operation, or a scale-out master target server for Linux VMs.
+- The replication appliance doesn't require additional network connections/ports (as compared with forward protection) during failback. The same appliance can be used for forward and backward protection if it is in a healthy state. This shouldn't impact the performance of the replications.
+- When selecting target datastore, ensure that the ESX Host where the replication appliance is located is able to access it.
+ > [!NOTE]
+ > Storage vMotion of replication appliance is not supported after re-protect operation.
++
+**Re-protect job**
+
+- If this is a new re-protect operation, then by default, a new log storage account will be automatically created by Azure Site Recovery in the target region. Retention disk is not required.
+- In case of Alternate Location Recovery and Original Location Recovery, the original configurations of source machines will be retrieved.
+ > [!NOTE]
+  > - A static IP address can't be retained in the case of alternate location re-protect (ALR) or original location re-protect (OLR).
+  > - fstab and LVMconf would be changed.
++
+**Failure**
+
+- Any failed re-protect job can be retried. During retry, you can choose any healthy replication appliance.
+
+When you reprotect Azure machines to on-premises, you will be notified that you are failing back to the original location, or to an alternate location.
+
+- **Original location recovery**: This fails back from Azure to the same source on-premises machine if it exists. In this scenario, only changes are replicated back to on-premises.
+ - **Data store selection during OLR**: The data store attached to the source machine will be automatically selected.
+- **Alternate location recovery**: If the on-premises machine doesn't exist, you can fail back from Azure to an alternate location. When you reprotect the Azure VM to on-premises, the on-premises machine is created. Full data replication occurs from Azure to on-premises. [Review](concepts-types-of-failback.md) the requirements and limitations for location failback.
+ - **Data store selection during ALR**: Any data store managed by vCenter on which the appliance is situated and is accessible (read and write permissions) by the appliance can be chosen (original/new). You can choose cache storage account used for re-protection.
+
+- After failover is complete, the mobility agent in the Azure VM is registered with Site Recovery services automatically. If registration fails, a critical health issue is raised on the failed-over VM. After the issue is resolved, registration is automatically triggered. You can also manually complete the registration after resolving the errors.
++
+## Cancel failover
+
+If your on-premises environment is not ready or if you face any challenges, you can cancel the failover.
+
+Once you have initiated the planned failover and it completes successfully, your on-premises environment becomes available for use. However, if you want to fail over to a different recovery point after the operation completes, you can cancel the failover.
++
+- Only planned failover can be canceled.
+
+- You can cancel a planned failover from the **Replicated items** page in your Recovery Services vault.
+
+- After the failover is canceled, your machines in Azure are turned back on, and replication once again starts from Azure to on-premises.
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Failover VMware VMs to Azure (modernized)](vmware-azure-tutorial-failover-failback-modernized.md#run-a-failover-to-azure)
+
+> [!div class="nextstepaction"]
+> [Planned failover (modernized)](vmware-azure-tutorial-failover-failback-modernized.md#planned-failover-from-azure-to-on-premises)
+
+> [!div class="nextstepaction"]
+> [Re-protect (modernized)](vmware-azure-tutorial-failover-failback-modernized.md#re-protect-the-on-premises-machine-to-azure-after-successful-planned-failover)
+
+> [!div class="nextstepaction"]
+> [Cancel failover (modernized)](vmware-azure-tutorial-failover-failback-modernized.md#cancel-planned-failover)
++
site-recovery Failover Failback Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/failover-failback-overview.md
Last updated 06/30/2021
This article provides an overview of failover and failback during disaster recovery of on-premises machines to Azure with [Azure Site Recovery](site-recovery-overview.md) - Classic.
-For information about failover and failback in Azure Site Recovery Preview release, [see this article](failover-failback-overview-preview.md).
+For information about failover and failback in Azure Site Recovery Modernized release, [see this article](failover-failback-overview-modernized.md).
## Recovery stages
site-recovery How To Move From Classic To Modernized Vmware Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/how-to-move-from-classic-to-modernized-vmware-disaster-recovery.md
Last updated 07/15/2022
# How to move from classic to modernized VMware disaster recovery
-This article provides information about how you can move/migrate your VMware replications from [classic](./vmware-azure-architecture.md) to [modernized](./vmware-azure-architecture-preview.md) protection architecture. With this capability to migrate, you can successfully transfer your replicated items from a configuration server to an Azure Site Recovery replication appliance. This migration is guided by a smart replication mechanism which ensures that the complete initial replication is not performed again for non-critical replicated items, and only the differential data is transferred.
+This article provides information about how you can move/migrate your VMware replications from [classic](./vmware-azure-architecture.md) to [modernized](./vmware-azure-architecture-modernized.md) protection architecture. With this capability to migrate, you can successfully transfer your replicated items from a configuration server to an Azure Site Recovery replication appliance. This migration is guided by a smart replication mechanism which ensures that the complete initial replication is not performed again for non-critical replicated items, and only the differential data is transferred.
> [!Note] > - Movement of physical servers to modernized architecture is not yet supported.  
site-recovery Hybrid How To Enable Replication Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hybrid-how-to-enable-replication-private-endpoints.md
Previously updated : 07/14/2020 Last updated : 09/21/2022 # Replicate on-premises machines by using private endpoints
Azure Site Recovery allows you to use [Azure Private Link](../private-link/priva
your on-premises machines to a virtual network in Azure. Private endpoint access to a recovery vault is supported in all Azure Commercial & Government regions.
+>[!Note]
+>Automatic upgrades are not supported for Private Endpoints. [Learn more](upgrade-mobility-service-modernized.md).
+ This article describes how to complete the following steps: - Create an Azure Backup Recovery Services vault to protect your machines.
to private IPs.
Now that you've enabled private endpoints for your virtual machine replication, see these other articles for additional and related information: -- [Deploy an on-premises configuration server](./vmware-azure-deploy-configuration-server.md)-- [Set up disaster recovery of on-premises Hyper-V VMs to Azure](./hyper-v-azure-tutorial.md)
+> [!div class="nextstepaction"]
+> [Deploy an on-premises configuration server](./vmware-azure-deploy-configuration-server.md)
+
+> [!div class="nextstepaction"]
+> [Set up disaster recovery of on-premises Hyper-V VMs to Azure](./hyper-v-azure-tutorial.md)
site-recovery Move From Classic To Modernized Vmware Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/move-from-classic-to-modernized-vmware-disaster-recovery.md
Last updated 07/15/2022
# Move from classic to modernized VMware disaster recovery  
-This article provides information about the architecture, necessary infrastructure, and FAQs about moving your VMware replications from [classic](./vmware-azure-architecture.md) to [modernized](./vmware-azure-architecture-preview.md) protection architecture. With this capability to migrate, you can successfully transfer your replicated items from a configuration server to an Azure Site Recovery replication appliance. This migration is guided by a smart replication mechanism, which ensures that complete initial replication isn't performed again for non-critical replicated items, and only the differential data is transferred.
+This article provides information about the architecture, necessary infrastructure, and FAQs about moving your VMware replications from [classic](./vmware-azure-architecture.md) to [modernized](./vmware-azure-architecture-modernized.md) protection architecture. With this capability to migrate, you can successfully transfer your replicated items from a configuration server to an Azure Site Recovery replication appliance. This migration is guided by a smart replication mechanism, which ensures that complete initial replication isn't performed again for non-critical replicated items, and only the differential data is transferred.
> [!Note] > - Movement of physical servers to modernized architecture is not yet supported.  
The components involved in the migration of replicated items of a VMware machine
Ensure the following for a successful movement of replicated item: - A Recovery Services vault using the modernized experience. >[!Note]
- >Any new Recovery Services vault created will have the modernized experience switched on by default. You can [switch to the classic experience](./vmware-azure-common-questions.md#how-do-i-use-the-classic-experience-in-the-recovery-services-vault-rather-than-the-preview-experience) but once done, you canΓÇÖt switch again. ΓÇ»
-- An [Azure Site Recovery replication appliance](./deploy-vmware-azure-replication-appliance-preview.md), which has been successfully registered to the vault, and all its components are in a non-critical state.  
+ >Any new Recovery Services vault created will have the modernized experience switched on by default. You can [switch to the classic experience](./vmware-azure-common-questions.md#how-do-i-use-the-classic-experience-in-the-recovery-services-vault-rather-than-the-modernized-experience) but once done, you can't switch again.
+- An [Azure Site Recovery replication appliance](./deploy-vmware-azure-replication-appliance-modernized.md), which has been successfully registered to the vault, and all its components are in a non-critical state.  
- The version of the appliance must be 9.50 or later. For a detailed version description, check [here](#architecture). - The vCenter server or vSphere host’s details, where the existing replicated machines reside, are added to the appliance for the on-premises discovery to be successful.  
Ensure the following for a successful movement of replicated item:
Ensure the following before you move from classic architecture to modernized architecture: -- [Create a Recovery Services vault](./azure-to-azure-tutorial-enable-replication.md#create-a-recovery-services-vault) and ensure the experience has [not been switched to classic](./vmware-azure-common-questions.md#how-do-i-use-the-classic-experience-in-the-recovery-services-vault-rather-than-the-preview-experience). -- [Deploy an Azure Site Recovery replication appliance](./deploy-vmware-azure-replication-appliance-preview.md). -- [Add the on-premises machine’s vCenter Server details](./deploy-vmware-azure-replication-appliance-preview.md) to the appliance, so that it successfully performs discovery.  
+- [Create a Recovery Services vault](./azure-to-azure-tutorial-enable-replication.md#create-a-recovery-services-vault) and ensure the experience has [not been switched to classic](./vmware-azure-common-questions.md#how-do-i-use-the-classic-experience-in-the-recovery-services-vault-rather-than-the-modernized-experience).
+- [Deploy an Azure Site Recovery replication appliance](./deploy-vmware-azure-replication-appliance-modernized.md).
+- [Add the on-premises machine’s vCenter Server details](./deploy-vmware-azure-replication-appliance-modernized.md) to the appliance, so that it successfully performs discovery.  
### Prepare classic Recovery Services vault  
The same formula will be used to calculate time for migration and is shown on th
## How to define required infrastructure
-When migrating machines from classic to modernized architecture, you will need to make sure that the required infrastructure has already been registered in the modernized Recovery Services vault. Refer to the replication applianceΓÇÖs [sizing and capacity details](./deploy-vmware-azure-replication-appliance-preview.md#sizing-and-capacity) to help define the required infrastructure.
+When migrating machines from classic to modernized architecture, you will need to make sure that the required infrastructure has already been registered in the modernized Recovery Services vault. Refer to the replication appliance's [sizing and capacity details](./deploy-vmware-azure-replication-appliance-modernized.md#sizing-and-capacity) to help define the required infrastructure.
As a rule, you should set up the same number of replication appliances, as the number of process servers in your classic Recovery Services vault. In the classic vault, if there was one configuration server and four process servers, then you should set up four replication appliances in the modernized Recovery Services vault.
site-recovery Physical Server Azure Architecture Modernized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/physical-server-azure-architecture-modernized.md
+
+ Title: Physical server to Azure disaster recovery architecture - Modernized
+description: This article provides an overview of components and architecture used when setting up disaster recovery of on-premises Windows and Linux servers to Azure with Azure Site Recovery - Modernized
++ Last updated : 09/21/2022++
+# Physical server to Azure disaster recovery architecture - Modernized
+
+This article describes the modernized architecture and processes used when you replicate, failover, and recover physical Windows and Linux servers between an on-premises site and Azure, using the [Azure Site Recovery](/azure/site-recovery/site-recovery-overview) service.
+
+For information about configuration server requirements in Classic releases, see [Physical server to Azure disaster recovery architecture](/azure/site-recovery/physical-azure-architecture).
+
+>[!Note]
+>Ensure you create a new Recovery Services vault for setting up the ASR replication appliance. Don't use an existing vault.
+
+## Architectural components
+
+The following table and graphic provide a high-level view of the components used for VMware VMs/physical machines disaster recovery to Azure.
++
+**Component** | **Requirement** | **Details**
+ | |
+**Azure** | An Azure subscription, Azure Storage account for cache, Managed Disk, and Azure network. | Replicated data from on-premises VMs is stored in Azure storage. Azure VMs are created with the replicated data when you run a failover from on-premises to Azure. The Azure VMs connect to the Azure virtual network when they're created.
+**Azure Site Recovery replication appliance** | This is the basic building block of the entire Azure Site Recovery on-premises infrastructure. <br/><br/> All components in the appliance coordinate with the replication appliance. This service oversees all end-to-end Site Recovery activities including monitoring the health of protected machines, data replication, automatic updates, etc. | The appliance hosts various crucial components like:<br/><br/>**Proxy server:** This component acts as a proxy channel between mobility agent and Site Recovery services in the cloud. It ensures there is no additional internet connectivity required from production workloads to generate recovery points.<br/><br/>**Discovered items:** This component gathers information of vCenter and coordinates with Azure Site Recovery management service in the cloud.<br/><br/>**Re-protection server:** This component coordinates between Azure and on-premises machines during reprotect and failback operations.<br/><br/>**Process server:** This component is used for caching, compression of data before being sent to Azure. <br/><br/> [Learn more](switch-replication-appliance-modernized.md) about replication appliance and how to use multiple replication appliances.<br/><br/>**Recovery Service agent:** This component is used for configuring/registering with Site Recovery services, and for monitoring the health of all the components.<br/><br/>**Site Recovery provider:** This component is used for facilitating re-protect. It identifies between alternate location re-protect and original location re-protect for a source machine. <br/><br/> **Replication service:** This component is used for replicating data from source location to Azure.
+**VMware servers** | VMware VMs are hosted on on-premises vSphere ESXi servers. We recommend a vCenter server to manage the hosts. | During Site Recovery deployment, you add VMware servers to the Recovery Services vault.
+**Replicated machines** | Mobility Service is installed on each VMware VM that you replicate. | We recommend that you allow automatic installation of the Mobility Service. Alternatively, you can install the [service manually](vmware-physical-mobility-service-overview.md#install-the-mobility-service-using-ui-modernized).
++
+## Set up outbound network connectivity
+
+For Site Recovery to work as expected, you need to modify outbound network connectivity to allow your environment to replicate.
+
+> [!NOTE]
+> Site Recovery doesn't support using an authentication proxy to control network connectivity.
+
+### Outbound connectivity for URLs
+
+If you're using a URL-based firewall proxy to control outbound connectivity, allow access to these URLs:
+
+| **URL** | **Details** |
+| - | -|
+| portal.azure.com | Navigate to the Azure portal. |
+| `*.windows.net `<br>`*.msftauth.net`<br>`*.msauth.net`<br>`*.microsoft.com`<br>`*.live.com `<br>`*.office.com ` | To sign-in to your Azure subscription. |
+|`*.microsoftonline.com `|Create Azure Active Directory (AD) apps for the appliance to communicate with Azure Site Recovery. |
+|management.azure.com |Create Azure AD apps for the appliance to communicate with the Azure Site Recovery service. |
+|`*.services.visualstudio.com `|Upload app logs used for internal monitoring. |
+|`*.vault.azure.net `|Manage secrets in the Azure Key Vault. Note: Ensure machines to replicate have access to this. |
+|aka.ms |Allow access to aka.ms links. Used for Azure Site Recovery appliance updates. |
+|download.microsoft.com/download |Allow downloads from Microsoft download. |
+|`*.servicebus.windows.net `|Communication between the appliance and the Azure Site Recovery service. |
+|`*.discoverysrv.windowsazure.com `|Connect to Azure Site Recovery discovery service URL. |
+|`*.hypervrecoverymanager.windowsazure.com `|Connect to Azure Site Recovery micro-service URLs. |
+|`*.blob.core.windows.net `|Upload data to Azure storage, which is used to create target disks. |
+|`*.backup.windowsazure.com `|Protection service URL - a microservice used by Azure Site Recovery for processing and creating replicated disks in Azure. |
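+
+If you're not sure whether your firewall or proxy allows these endpoints, you can spot-check outbound HTTPS reachability from the appliance server before you enable replication. The following PowerShell sketch uses the built-in `Test-NetConnection` cmdlet against an illustrative subset of the non-wildcard endpoints above; adjust the list to match your environment.
+
+```powershell
+# Spot-check outbound TCP 443 reachability from the appliance server.
+# The endpoint list is illustrative; wildcard entries (*.windows.net, and so on)
+# can't be tested directly, so substitute the concrete hosts you use.
+$endpoints = @(
+    'portal.azure.com',
+    'management.azure.com',
+    'download.microsoft.com'
+)
+
+foreach ($endpoint in $endpoints) {
+    $result = Test-NetConnection -ComputerName $endpoint -Port 443 -WarningAction SilentlyContinue
+    '{0}: {1}' -f $endpoint, $(if ($result.TcpTestSucceeded) { 'reachable' } else { 'blocked' })
+}
+```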
+
+## Replication process
+
+1. When you enable replication for a VM, initial replication to Azure storage begins, using the specified replication policy. Note the following:
+ - For VMware VMs, replication is block-level, near-continuous, using the Mobility service agent running on the VM.
+ - Any replication policy settings are applied:
+ - **RPO threshold**. This setting does not affect replication. It helps with monitoring. An event is raised, and optionally an email sent, if the current RPO exceeds the threshold limit that you specify.
+ - **Recovery point retention**. This setting specifies how far back in time you want to go when a disruption occurs. Maximum retention is 15 days.
+ - **App-consistent snapshots**. App-consistent snapshot can be taken every 1 to 12 hours, depending on your app needs. Snapshots are standard Azure blob snapshots. The Mobility agent running on a VM requests a VSS snapshot in accordance with this setting, and bookmarks that point-in-time as an application consistent point in the replication stream.
+ >[!NOTE]
+ >A high recovery point retention period can increase storage costs because more recovery points need to be saved.
+
+
+2. Traffic replicates to Azure storage public endpoints over the internet. Alternately, you can use Azure ExpressRoute with [Microsoft peering](../expressroute/expressroute-circuit-peerings.md#microsoftpeering). Replicating traffic over a site-to-site virtual private network (VPN) from an on-premises site to Azure isn't supported.
+3. The initial replication operation ensures that all the data on the machine at the time you enable replication is sent to Azure. After initial replication finishes, replication of delta changes to Azure begins. Tracked changes for a machine are sent to the process server.
+4. Communication happens as follows:
+
+ - VMs communicate with the on-premises appliance on port HTTPS 443 inbound, for replication management.
+ - The appliance orchestrates replication with Azure over port HTTPS 443 outbound.
+ - VMs send replication data to the process server on port HTTPS 9443 inbound. This port can be modified.
+ - The process server receives replication data, optimizes, and encrypts it, and sends it to Azure storage over port 443 outbound.
+5. The replication data logs first land in a cache storage account in Azure. These logs are processed and the data is stored in an Azure managed disk (called *asrseeddisk*). The recovery points are created on this disk.
+
+## Failover and failback process
+
+After you set up replication and run a disaster recovery drill (test failover) to check that everything's working as expected, you can run failover and failback as you need to.
++
+1. You can run failover for a single machine or create a recovery plan to fail over multiple servers simultaneously. The advantages of a recovery plan over single-machine failover include:
+ - You can model app-dependencies by including all the servers across the app in a single recovery plan.
+ - You can add scripts, Azure runbooks, and pause for manual actions.
+2. After triggering the initial failover, you commit it to start accessing the workload from the Azure VM.
+3. When your primary on-premises site is available again, you can prepare for failback. If you need to failback large traffic volume, set up a new Azure Site Recovery replication appliance.
+
+ - Stage 1: Reprotect the Azure VMs to replicate from Azure back to the on-premises VMware VMs.
+ >[!Note]
+ >Failing back to physical servers is not supported. Re-protection is therefore always to a VMware VM.
+ - Stage 2: Run a failback to the on-premises site.
+ - Stage 3: After workloads have failed back, you reenable replication for the on-premises VMs.
+
+>[!Note]
+>- To execute failback using the modernized architecture, you don't need to set up a process server, master target server, or failback policy in Azure.
+>- Failback to physical machines is not supported. You must failback to a VMware site.
+
+## Resynchronization process
+
+1. At times, during initial replication or while transferring delta changes, there can be network connectivity issues between the source machine and the process server, or between the process server and Azure. Either of these can momentarily cause failures in data transfer to Azure.
+2. To avoid data integrity issues, and minimize data transfer costs, Site Recovery marks a machine for resynchronization.
+3. A machine can also be marked for resynchronization in situations like the following, to maintain consistency between the source machine and the data stored in Azure:
+    - If a machine undergoes a forced shutdown
+    - If a machine undergoes configuration changes like disk resizing (for example, modifying the size of a disk from 2 TB to 4 TB)
+4. Resynchronization sends only delta data to Azure. Data transfer between on-premises and Azure is minimized by computing checksums of the data on the source machine and the data stored in Azure.
+5. By default, resynchronization is scheduled to run automatically outside office hours. If you don't want to wait for default resynchronization outside hours, you can resynchronize a VM manually. To do this, go to Azure portal, select the VM > **Resynchronize**.
+6. If default resynchronization fails outside office hours and a manual intervention is required, then an error is generated on the specific machine in Azure portal. You can resolve the error and trigger the resynchronization manually.
+7. After completion of resynchronization, replication of delta changes will resume.
+
+## Replication policy
+
+When you enable Azure VM replication, by default Site Recovery creates a new replication policy with the default settings summarized in the table.
+
+**Policy setting** | **Details** | **Default**
+ | |
+**Recovery point retention** | Specifies how long Site Recovery keeps recovery points | 1 day
+**App-consistent snapshot frequency** | How often Site Recovery takes an app-consistent snapshot | Disabled
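+
+If you manage Site Recovery with PowerShell, you can also create a replication policy with the `Az.RecoveryServices` module. The sketch below is illustrative only: it uses the classic VMware/physical (`-VMwareToAzure`) parameter set of `New-AzRecoveryServicesAsrPolicy`, and the vault, resource group, and policy names are placeholders; parameter names and allowed ranges for the modernized experience may differ.
+
+```powershell
+# Illustrative sketch: create a replication policy with custom retention and
+# app-consistent snapshot frequency (names below are placeholders).
+$vault = Get-AzRecoveryServicesVault -Name 'ContosoVault' -ResourceGroupName 'ContosoRG'
+Set-AzRecoveryServicesAsrVaultContext -Vault $vault
+
+New-AzRecoveryServicesAsrPolicy -VMwareToAzure `
+    -Name 'Contoso-ReplicationPolicy' `
+    -RecoveryPointRetentionInHours 24 `
+    -ApplicationConsistentSnapshotFrequencyInHours 4 `
+    -RPOWarningThresholdInMinutes 15
+```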
+
+## Snapshots and recovery points
+
+Recovery points are created from snapshots of VM disks taken at a specific point in time. When you fail over a VM, you use a recovery point to restore the VM in the target location.
+
+When failing over, we generally want to ensure that the VM starts with no corruption or data loss, and that the VM data is consistent for the operating system, and for apps that run on the VM. This depends on the type of snapshots taken.
+
+Site Recovery takes snapshots as follows:
+
+1. Site Recovery takes crash-consistent snapshots of data by default, and app-consistent snapshots if you specify a frequency for them.
+2. Recovery points are created from the snapshots and stored in accordance with retention settings in the replication policy.
+
+### Consistency
+
+The following table explains different types of consistency.
+
+### Crash-consistent
+
+**Description** | **Details** | **Recommendation**
+ | |
+A crash-consistent snapshot captures data that was on the disk when the snapshot was taken. It doesn't include anything in memory.<br/><br/> It contains the equivalent of the on-disk data that would be present if the VM crashed or the power cord was pulled from the server at the instant that the snapshot was taken.<br/><br/> A crash-consistent snapshot doesn't guarantee data consistency for the operating system, or for apps on the VM. | Site Recovery creates crash-consistent recovery points every five minutes by default. This setting can't be modified.<br/><br/> | Today, most apps can recover well from crash-consistent points.<br/><br/> Crash-consistent recovery points are usually sufficient for the replication of operating systems, and apps such as DHCP servers and print servers.
+
+### App-consistent
+
+**Description** | **Details** | **Recommendation**
+ | |
+App-consistent recovery points are created from app-consistent snapshots.<br/><br/> An app-consistent snapshot contains all the information in a crash-consistent snapshot, plus all the data in memory and transactions in progress. | App-consistent snapshots use the Volume Shadow Copy Service (VSS):<br/><br/> 1) Azure Site Recovery uses the Copy Only backup (VSS_BT_COPY) method, which does not change Microsoft SQL Server's transaction log backup time and sequence number.</br></br> 2) When a snapshot is initiated, VSS performs a copy-on-write (COW) operation on the volume.<br/><br/> 3) Before it performs the COW, VSS informs every app on the machine that it needs to flush its memory-resident data to disk.<br/><br/> 4) VSS then allows the backup/disaster recovery app (in this case Site Recovery) to read the snapshot data and proceed. | App-consistent snapshots are taken in accordance with the frequency you specify. This frequency should always be less than the recovery point retention period. For example, if you retain recovery points using the default setting of 24 hours, you should set the frequency at less than 24 hours.<br/><br/>They're more complex and take longer to complete than crash-consistent snapshots.<br/><br/> They affect the performance of apps running on a VM enabled for replication.
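+
+If app-consistent recovery points stop being generated for a protected Windows machine, checking VSS writer health on that machine is a reasonable first step. A minimal sketch using the built-in `vssadmin` tool:
+
+```powershell
+# List VSS writers and their state on the protected Windows machine.
+# Writers stuck in a failed state are a common reason app-consistent snapshots lag.
+vssadmin list writers | Select-String -Pattern 'Writer name|State'
+```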
++
+## Next steps
+
+Follow [this tutorial](vmware-azure-tutorial.md) to enable VMware to Azure replication.
site-recovery Physical Server Enable Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/physical-server-enable-replication.md
+
+ Title: Enable replication for a physical server - Modernized
+description: This article describes how to enable physical servers replication for disaster recovery using the Azure Site Recovery service
+++++ Last updated : 09/21/2022++
+# Enable replication for a physical server - Modernized
+
+This article describes how to enable replication for on-premises physical servers for disaster recovery to Azure using the Azure Site Recovery service - Modernized.
+
+See the [tutorial](/azure/site-recovery/physical-azure-disaster-recovery) for information on how to set up disaster recovery in Azure Site Recovery Classic releases.
+
+This is the second tutorial in a series that shows you how to set up disaster recovery to Azure for on-premises physical servers. In the previous tutorial, we prepared the Azure Site Recovery replication appliance for disaster recovery to Azure.
+
+This tutorial explains how to enable replication for a physical server.
+
+## Get started
+
+Physical server to Azure replication includes the following procedures:
+
+- Sign in to the [Azure portal](https://ms.portal.azure.com/#home)
+- [Prepare Azure account](/azure/site-recovery/vmware-azure-set-up-replication-tutorial-preview#prepare-azure-account)
+- [Create a recovery Services vault](/azure/site-recovery/quickstart-create-vault-template?tabs=CLI)
+- [Prepare infrastructure](#prepare-infrastructureset-up-azure-site-recovery-replication-appliance)
+- [Enable replication](#enable-replication-for-physical-servers)
+
+## Prepare infrastructure - set up Azure Site Recovery Replication appliance
+
+[Set up an Azure Site Recovery replication appliance on the on-premises environment](/azure/site-recovery/deploy-vmware-azure-replication-appliance-preview) to channel mobility agent communications.
+
+### Add details of physical server to an appliance
+
+You can add details of the physical servers that you plan to protect when you're performing the appliance registration for the first time, or after the registration is complete. To add the physical server details to the appliance, follow these steps:
+
+1. After adding the vCenter details, expand **Provide Physical server details** to add the details of the physical servers that you plan to protect.
+
+ :::image type="Physical server credentials." source="./media/physical-server-enable-replication/physical-server-credentials.png" alt-text="Screenshot of Physical server credentials.":::
+
+2. Select **Add credentials** to add credentials of the machine(s) you plan to protect. Add all the details such as the operating system, friendly name for the credentials, username, and password. The user account details will be encrypted and stored locally in the machine.
+
+ :::image type="Add Physical server credentials." source="./media/physical-server-enable-replication/add-physical-server-credentials.png" alt-text="Screenshot of Add Physical server credentials.":::
+
+3. Select **Add**.
+
+4. Select **Add server** to add physical server details. Provide the machine's IP address, select the machine's credentials, and then select **Add**.
+
+ :::image type="Add Physical server details." source="./media/physical-server-enable-replication/add-physical-server-details.png" alt-text="Screenshot of Add Physical server details.":::
+
+This will add your physical server details to the appliance, and you can enable replication on these machines using any appliance that has a healthy or warning status.
+
+To perform credential-less protection on physical servers, you must manually install the mobility service and enable replication. [Learn more](/azure/site-recovery/vmware-physical-mobility-service-overview#install-the-mobility-service-using-ui-modernized).
+
+## Enable replication for Physical servers
+
+After an Azure Site Recovery replication appliance is added to a vault, you can protect your machines.
+
+Ensure that you meet the [pre-requisites](/azure/site-recovery/vmware-physical-azure-support-matrix) across storage and networking.
+
+Follow these steps to enable replication:
+
+1. Under **Getting Started**, select **Site Recovery**.
+
+2. Under **VMware**, select **Enable Replication** and select the machine type as Physical machines if you want to protect physical machines.
+This lists all the machines discovered by the various appliances registered to the vault.
+
+ :::image type="Select source." source="./media/physical-server-enable-replication/select-source.png" alt-text="Screenshot of select source tab.":::
+
+3. Search the source machine name to protect it and review the selected machines. To review, select **Selected resources**.
+
+4. Select the desired machine and select **Next**. The **Source settings** page opens.
+
+5. Select the replication appliance and machine credentials. The appliance uses these credentials to push the mobility agent onto the machine to complete enabling Azure Site Recovery. Ensure that you choose the correct credentials.
+
+ >[!Note]
+ >- For Linux OS, ensure that you provide the root credentials.
+ >- For Windows OS, add a user account with admin privileges.
+ >- These credentials are used to push-install the Mobility service onto the source machine during the enable replication operation.
+ >- You may be asked to provide a name for the virtual machine that will be created.
+
+ :::image type="Source settings." source="./media/physical-server-enable-replication/source-settings.png" alt-text="Screenshot of source settings tab.":::
+
+6. Select **Next** and provide target region properties.
+
+ By default, Vault subscription and Vault resource group are selected. You can choose a subscription and resource group of your choice. Your source machines will be deployed in this subscription and resource group when you failover in the future.
+
+ :::image type="Target properties." source="./media/physical-server-enable-replication/target-properties.png" alt-text="Screenshot of target properties tab.":::
+
+7. You can select an existing Azure network or create a new target network to be used during failover.
+
+ If you select **Create new**, you are redirected to **Create virtual network** blade. Provide address space and subnet details. This network will be created in the target subscription and target resource group selected in the previous step.
+
+8. Provide the test failover network details.
+
+ >[!Note]
+ >Ensure that the test failover network is different from the failover network. This ensures that the failover network is readily available in case of an actual disaster.
+
+9. Select the storage.
+
+ - **Cache storage account**: Choose the cache storage account that Azure Site Recovery uses for staging purposes - caching and storing logs before writing the changes onto the managed disks.
+
+ Azure Site Recovery creates a new LRS v1 type storage account by default for the first enable replication operation in a vault. For the next operations, the same cache storage account will be re-used.
+
+ - **Managed disks**
+
+ By default, Standard HDD managed disks are created in Azure. Select **Customize** to change the type of managed disks. Choose the type of disk based on your business requirement. Ensure that you [choose the appropriate disk type](/azure/virtual-machines/disks-types#disk-type-comparison) based on the IOPS of the source machine disks. For pricing information, see [managed disk pricing](/pricing/details/managed-disks/).
+
+ >[!Note]
+ >If Mobility Service is installed manually before enabling replication, you can change the type of managed disk, at a disk level. Otherwise, one managed disk type can be chosen at a machine level by default.
+
+10. Create a new replication policy if needed.
+
+ A default replication policy gets created under the vault within three days. Recovery point retention and app-consistent recovery points are disabled by default. You can create a new replication policy or modify the existing policy as per your RPO requirements.
+
+ 1. Select **Create new** and enter the **Name**.
+ 1. Enter a value ranging from 0 to 15 for the **Retention period (in days)**.
+ 1. Enable **App consistency frequency** if you wish and enter a value for **App-consistent snapshot frequency (in hours)** as per business requirements.
+ 1. Select **OK** to save the policy.
+
+ Use the policy to protect the chosen source machines.
+
+11. Choose the replication policy and select **Next**. Review the Source and Target properties and select **Enable Replication** to initiate the operation.
+
+ :::image type="Review." source="./media/physical-server-enable-replication/review.png" alt-text="Screenshot of review tab.":::
+
+ A job is created to enable replication of the selected machines. To track the progress, navigate to Site Recovery jobs in the recovery services vault.
+
site-recovery Quickstart Create Vault Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/quickstart-create-vault-template.md
Title: Quickstart to create an Azure Recovery Services vault using an Azure Resource Manager template. description: In this quickstart, you learn how to create an Azure Recovery Services vault using an Azure Resource Manager template (ARM template). Previously updated : 04/28/2021 Last updated : 09/21/2022
Azure virtual machines (VM), including replication, failover, and recovery.
[!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]
+To protect VMware or physical server, see [Modernized architecture](/azure/site-recovery/physical-server-azure-architecture-modernized).
+ If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal. [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.recoveryservices%2Frecovery-services-vault-create%2Fazuredeploy.json)
site-recovery Site Recovery Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-overview.md
Title: About Azure Site Recovery description: Provides an overview of the Azure Site Recovery service, and summarizes disaster recovery and migration deployment scenarios. Previously updated : 05/02/2022 Last updated : 09/21/2022
Site Recovery can manage replication for:
**Keep apps consistent over failover** | You can replicate using recovery points with application-consistent snapshots. These snapshots capture disk data, all data in memory, and all transactions in process. **Testing without disruption** | You can easily run disaster recovery drills, without affecting ongoing replication. **Flexible failovers** | You can run planned failovers for expected outages with zero-data loss. Or, unplanned failovers with minimal data loss, depending on replication frequency, for unexpected disasters. You can easily fail back to your primary site when it's available again.
-**Customized recovery plans** | Using recovery plans, you can customize and sequence the failover and recovery of multi-tier applications running on multiple VMs. You group machines together in a recovery plan, and optionally add scripts and manual actions. Recovery plans can be integrated with Azure automation runbooks.
+**Customized recovery plans** | Using recovery plans, you can customize and sequence the failover and recovery of multi-tier applications running on multiple VMs. You group machines together in a recovery plan, and optionally add scripts and manual actions. Recovery plans can be integrated with Azure Automation runbooks.
**BCDR integration** | Site Recovery integrates with other BCDR technologies. For example, you can use Site Recovery to protect the SQL Server backend of corporate workloads, with native support for SQL Server Always On, to manage the failover of availability groups. **Azure automation integration** | A rich Azure Automation library provides production-ready, application-specific scripts that can be downloaded and integrated with Site Recovery. **Network integration** | Site Recovery integrates with Azure for application network management. For example, to reserve IP addresses, configure load-balancers, and use Azure Traffic Manager for efficient network switchovers.
site-recovery Site Recovery Plan Capacity Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-plan-capacity-vmware.md
Last updated 08/19/2021
Use this article to plan for capacity and scaling when you replicate on-premises VMware VMs and physical servers to Azure by using [Azure Site Recovery](site-recovery-overview.md) - Classic.
-In preview, you need to [create and use Azure Site Recovery replication appliance/multiple appliances](deploy-vmware-azure-replication-appliance-preview.md) to plan capacity.
+In modernized, you need to [create and use Azure Site Recovery replication appliance/multiple appliances](deploy-vmware-azure-replication-appliance-modernized.md) to plan capacity.
## How do I start capacity planning?
site-recovery Site Recovery Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-whats-new.md
For Site Recovery components, we support N-4 versions, where N is the latest rel
> To setup the preview experience, you will have to perform a fresh setup and use a new Recovery Services vault. Updating from existing architecture to new architecture is unsupported. This public preview covers a complete overhaul of the current architecture for pretecting VMware machines.-- [Learn](./vmware-azure-architecture-preview.md) about the new architecture and the changes introduced.-- Check the pre-requisites and setup the ASR replication appliance by following [these steps](./deploy-vmware-azure-replication-appliance-preview.md).-- [Enable replication](./vmware-azure-set-up-replication-tutorial-preview.md) for your VMware machines.-- Check out the [automatic upgrade](./upgrade-mobility-service-preview.md) and [switch](./switch-replication-appliance-preview.md) capability for ASR replication appliance.
+- [Learn](/azure/site-recovery/vmware-azure-architecture-preview) about the new architecture and the changes introduced.
+- Check the pre-requisites and setup the ASR replication appliance by following [these steps](/azure/site-recovery/deploy-vmware-azure-replication-appliance-preview).
+- [Enable replication](/azure/site-recovery/vmware-azure-set-up-replication-tutorial-preview) for your VMware machines.
+- Check out the [automatic upgrade](/azure/site-recovery/upgrade-mobility-service-preview) and [switch](/azure/site-recovery/switch-replication-appliance-preview) capability for ASR replication appliance.
## Updates (July 2021)
site-recovery Switch Replication Appliance Modernized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/switch-replication-appliance-modernized.md
+
+ Title: Switch replication appliance in Azure Site Recovery - Modernized
+description: This article describes how to switch between different replication appliances while replicating VMware VMs to Azure in Azure Site Recovery - Modernized
++ Last updated : 09/21/2022++
+# Switch Azure Site Recovery replication appliance
+
+>[!NOTE]
+> The information in this article applies to Azure Site Recovery - Modernized.
+
+You need to [create and deploy an on-premises Azure Site Recovery replication appliance](deploy-vmware-azure-replication-appliance-modernized.md) when you use [Azure Site Recovery](site-recovery-overview.md) for disaster recovery of VMware VMs and physical servers to Azure. For detailed information about replication appliance, see [the architecture](vmware-azure-architecture-modernized.md). You can create and use multiple replication appliances based on the capacity requirements of your organization.
+
+This article provides information about how you can switch between replication appliances.
+
+## Appliance resiliency
+
+Typically, in the classic architecture, if you need to maintain the resiliency of your configuration server, the recommended action is to take regular manual backups of the machine. This is a cumbersome process that is also prone to errors and omissions.
+
+The modernized architecture introduces a better way to make your appliances resilient. If your replication appliance burns down, or if you need to balance the machines running on an appliance, just spin up another replication appliance and switch all your machines to the new appliance.
++
+## Consideration for switching replication appliance
+
+You can switch replication appliance in the following scenarios:
+
+- You will need to perform a switch operation in case your current Azure Site Recovery replication appliance has burnt down, i.e., all its components have no heartbeat.
+ - An appliance is considered burnt down only if all its components have no heartbeat. Even if one of the components has a heartbeat, then the switch operation will be blocked.
+ - If your current appliance has burnt down, then you will need to again provide credentials to access the machines that you are trying to switch. If you are load-balancing and your current appliance is still in a non-critical state, then credentials will be auto-selected and you need not re-enter these while switching to a different appliance.
+- You might need to perform the switch operation in case you need to load balance your replication appliance.
+- If you are trying to perform a switch with an intent of balancing load on an appliance, then all the components of your current appliance should be either in healthy or warning state. Missing heartbeat of even one component will block the switch operation.
+- Ensure that the appliance that you're switching to is either in healthy or warning state, for the operation to succeed.
+- Only those machines that are replicating from on-premises to Azure, can be selected when performing the switch operation to another appliance.
++
+## Switch a replication appliance
+
+As an example, here is the scenario where replication appliance 1 (RA1) has become critical and you want to move the protected workloads to replication appliance 2 (RA2), which is in healthy state. Or, you want to switch the workloads under RA1 to RA2 for any load balancing or organization level changes.
+
+**Follow these steps to switch an appliance**:
+
+1. Go to **Site Recovery infrastructure** blade and select **ASR replication appliance**.
+
+ The list of available appliances and their health is displayed. For example, RA2 is healthy here.
+
+ ![Healthy replication appliances list](./media/switch-replication-appliance-modernized/appliance-health.png)
+
+2. Select the replication appliance (RA1) and select **Switch appliance**.
+
+ ![Select replication appliance to switch](./media/switch-replication-appliance-modernized/select-switch-appliance.png)
++
+3. Under **Select machines**, select the machines that you want to failover to another replication appliance (RA2). Select **Next**.
+
+ >[!NOTE]
+ > Only those machines that have been protected by the current appliance will be visible in the list. Failed-over machines will not be present here.
+
+ ![Select machines for switching](./media/switch-replication-appliance-modernized/select-machines.png)
+
+4. On the **Source settings** page, for each of the selected machines, select a different replication appliance.
+
+ ![Source settings for replication appliance](./media/switch-replication-appliance-modernized/source-settings.png)
+
+ >[!NOTE]
+ > If your current appliance has burnt down, then you will be required to select the credentials to access the machines. Otherwise, the field will be disabled.
+
+5. Review the selection and then click **Switch appliance**.
+
+ ![review replication appliance](./media/switch-replication-appliance-modernized/review-switch-appliance.png)
+
+ Once the resync is complete, the replication status turns healthy for the VMs that are moved to a new appliance.
+
+## Next steps
+Set up disaster recovery of [VMware VMs](vmware-azure-set-up-replication-tutorial-modernized.md) to Azure.
site-recovery Upgrade Mobility Service Modernized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/upgrade-mobility-service-modernized.md
+
+ Title: Upgrade Mobility Service and appliance components - Modernized
+description: This article describes automatic updates for the mobility agent and the procedure for manual updates - Modernized.
++ Last updated : 09/21/2022+++
+# Upgrade Mobility Service and Appliance components (Modernized)
+
+With the modernized mobility service and appliance components, you do not need to maintain the source machine's root/admin credentials to perform upgrades. The credentials are required only for the initial installation of the agent on source machines. Once done, you can remove the credentials and the upgrades will occur automatically.
++
+## Update mobility agent automatically
+
+By default, automatic updates are enabled on a vault. Automatic updates will be triggered at 12:00 AM local time every day, if a new version is available.
+
+> [!NOTE]
+> If you are using private preview bits, automatic updates are blocked for the protected machines. Ensure that you set up Site Recovery on your machine again, using a fresh Azure Site Recovery replication appliance.
+
+To get the latest features, enhancements, and fixes, we recommend that you choose the **Allow Site Recovery to manage** option under **Mobility agent upgrade settings**. Automatic updates do not require a reboot or affect ongoing replication of your virtual machines. Automatic updates also ensure that all the replication appliances in the vault are automatically updated.
+
+![Automatic updates on for Mobility agent](./media/upgrade-mobility-service-modernized/automatic-updates-on.png)
+
+To turn off the automatic updates, toggle the **Allow Site Recovery to manage** button.
+
+![Automatic updates off for mobility agent](./media/upgrade-mobility-service-modernized/automatic-updates-off.png)
++
+## Update mobility agent manually
+
+If you have turned off automatic updates for your mobility agent, you can update the agent manually using the following procedures:
+
+### Upgrade mobility agent on multiple protected items
+
+To manually update mobility agent on multiple protected items, follow these steps:
+
+1. Navigate to **Recovery services vault** > **Replicated items** and click *New Site Recovery mobility agent update is available*. Click it to install.
+
+ ![Manual update of mobility agent on multiple protected items](./media/upgrade-mobility-service-modernized/agent-update.png)
+
+2. Choose the source machines to update and then click **OK**.
+
+ >[!NOTE]
+ >If prerequisites to upgrade Mobility service are not met, then the VM cannot be selected. See information on [how to resolve](#resolve-blocking-issues-for-agent-upgrade).
++
+3. After initiating the upgrade, a Site Recovery job is created in the vault for each upgrade operation, and can be tracked by navigating to **Monitoring** > **Site Recovery jobs**.
+
+### Update mobility agent for a single protected machine
+
+To update mobility agent of a protected item, follow these steps:
+1. Navigate to **recovery services vault** > **Replicated items**, select a VM.
+2. In VM's **Overview** blade, against **Agent version**, view the current version of the mobility agent. If a new update is available, the status is updated as **New update available**.
+
+ ![Manual update of mobility agent on a single protected items](./media/upgrade-mobility-service-modernized/agent-version.png)
+
+3. Click **New update available**; the latest available version is displayed. Click **Update to this version** to initiate the update job.
+
+ ![mobility agent update details](./media/upgrade-mobility-service-modernized/agent-update-details.png)
+
+ > [!NOTE]
+ > If upgrade is blocked, check and resolve the errors as detailed [here](#resolve-blocking-issues-for-agent-upgrade).
+
+### Update mobility agent when private endpoint is enabled
+
+When you enable private endpoints, automatic updates will not be available. To update mobility agent of a protected item, follow these steps:
+
+1. Navigate to **Recovery services vault** > **Replicated items**, and select a VM.
+
+2. In VM's **Overview** blade, under **Agent version**, you can view the current version of the mobility agent. If a new update is available, the status is updated as **New update available**.
+
+3. Confirm the availability of the new version, download the latest agent version's package from [here](/azure/site-recovery/site-recovery-whats-new#supported-updates) to the source machine, and update the agent.
+
+### Update mobility agent on Windows machines
+
+To update mobility agent on Windows machines, follow these steps:
+
+1. Open command prompt and navigate to the folder where the update package has been placed.
+
+ `cd C:\Azure Site Recovery\Agent`
+
+2. To extract the update package, run the below command:
+
+ `.\Microsoft-ASR_UA*Windows*release.exe /q /x:C:\Azure Site Recovery\Agent`
+
+3. To proceed with the update, run the below command:
+
+ `.\UnifiedAgent.exe /Role "MS" /Platform VmWare /Silent /InstallationType Upgrade /CSType CSPrime /InstallLocation "C:\Program Files (x86)\Microsoft Azure Site Recovery"`
+
+4. Registration will be triggered automatically after the agent has been updated. To manually check the status of registration, run the below command:
+
+ `"C:\Azure Site Recovery\Agent\agent\UnifiedAgentConfigurator.exe" /SourceConfigFilePath "config.json" /CSType CSPrime`
+
+#### Upgrade settings
+
+|Setting|Details|
+|||
+|Syntax| `.\UnifiedAgent.exe /Role "MS" /Platform vmware /Silent /InstallationType Upgrade /CSType CSPrime /InstallLocation "C:\Azure Site Recovery\Agent"`|
+|`/Role`|Mandatory update parameter. </br>Specifies that the Mobility service (MS) will be updated.|
+|`/InstallLocation`|Optional. </br>Specifies the Mobility service installation location.|
+|`/Platform`|Mandatory. </br>Specifies the platform on which the Mobility service is updated:</br>VmWare for VMware VMs/physical servers. </br>Azure for Azure VMs. </br></br>If you're treating Azure VMs as physical machines, specify VmWare.|
+|`/Silent`|Optional. </br>Specifies whether to run the installer in silent mode.|
+|`/CSType`|Mandatory. </br>Defines modernized or legacy architecture. (Use CSPrime)|
+
+#### Registration settings
+
+|Setting|Details|
+|||
+|Syntax|`"<InstallLocation>\UnifiedAgentConfigurator.exe" /SourceConfigFilePath "config.json" /CSType CSPrime >`|
+|`/SourceConfigFilePath`|Mandatory. </br>Full file path of the Mobility Service configuration file. Use any valid folder.|
+|`/CSType`|Mandatory. </br>Defines modernized or legacy architecture. (CSPrime or CSLegacy).|
+
+### Update mobility agent on Linux machines
+
+To update mobility agent on Linux machines, follow these steps:
+
+1. From a terminal session, copy the update package to a local folder such as `/tmp` on the server for which the agent is being updated and run the below command:
+
+ `cd /tmp ;`
+ `tar -xvf Microsoft-ASR_UA_version_LinuxVersion_GA_date_release.tar.gz`
+
+2. To update, run the below command:
+
+ `./install -q -r MS -v VmWare -a Upgrade -c CSPrime`
+
+3. Registration will be triggered automatically after the agent has been updated. To manually check the status of registration, run the below command:
+
+ `<InstallLocation>/Vx/bin/UnifiedAgentConfigurator.sh -c CSPrime -S config.json -q`
+
+#### Installation settings
+
+|Setting|Details|
+|||
+|Syntax|`./install -q -r MS -v VmWare -a Upgrade -c CSPrime`|
+|`-r`|Mandatory. </br>Installation parameter. </br>Specifies whether the Mobility service (MS) should be installed.|
+|`-d`|Optional. </br>Specifies the Mobility service installation location: `/usr/local/ASR`.|
+|`-v`|Mandatory. </br>Specifies the platform on which Mobility service is installed. </br>VMware for VMware VMs/physical servers. </br>Azure for Azure VMs.|
+|`-q`|Optional. </br>Specifies whether to run the installer in silent mode.|
+|`-c`|Mandatory. </br>Defines modernized or legacy architecture. (CSPrime or CSLegacy).|
+|`-a`|Mandatory. </br>Specifies that the mobility agent needs to be upgraded and not installed.|
+
+#### Registration settings
+
+|Setting|Details|
+|||
+|Syntax|`<InstallLocation>/Vx/bin/UnifiedAgentConfigurator.sh -c CSPrime -S config.json -q`|
+|`-S`|Mandatory. </br>Full file path of the Mobility Service configuration file. Use any valid folder.|
+|`-c`|Mandatory. </br>Defines modernized or legacy architecture. (CSPrime or CSLegacy).|
+|`-q`|Optional. </br>Specifies whether to run the installer in silent mode.|
++
+## Mobility agent on latest version
+
+After the Mobility agent is updated to the latest version, either manually or automatically, the status displays as **Up to date**.
+
+### Resolve blocking issues for agent upgrade
+
+If the prerequisites to upgrade the mobility agent are not met, the VM cannot be updated. Resolve these issues to proceed with the upgrade.
+
+The prerequisites include, but are not limited to:
+
+- A pending mandatory reboot on the protected machine.
+
+- If the replication appliance is on an incompatible version.
+
+- If the replication appliance components (proxy server or process server) are unable to communicate with Azure services.
+
+- If mobility agent on the protected machine is not able to communicate with the replication appliance.
+
+If any of the above issues apply, the status is updated as **Cannot update to latest version**. Click the status to view the reasons blocking the update and the recommended actions to fix the issue.
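+
+One common blocker is a pending mandatory reboot on the protected machine. A minimal PowerShell sketch to spot-check this on a Windows machine (these registry locations are common, but not exhaustive, pending-reboot indicators):
+
+```powershell
+# Check well-known registry locations that indicate a pending reboot.
+$indicators = @(
+    'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Component Based Servicing\RebootPending',
+    'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate\Auto Update\RebootRequired'
+) | Where-Object { Test-Path $_ }
+
+if ($indicators) { 'Reboot pending - restart the machine before retrying the agent upgrade.' }
+else { 'No pending reboot detected.' }
+```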
+
+>[!NOTE]
+>After resolving the blocking reasons, wait for 30 minutes to retry the operations. It takes time for the latest information to be updated in the Site Recovery services.
+
+### Mobility agent upgrade job failure
+
+If a mobility agent upgrade operation fails (whether manually triggered or automatic), the job is updated with the reason for failure. Resolve the errors and then retry the operation.
+
+To view the failure errors, you can either navigate to Site Recovery jobs and click a specific job to see the error resolutions, or use the steps below:
+
+1. Navigate to replicated items section, select a specific VM.
+
+2. In the **Overview** blade, against **Agent version**, the current version of the mobility agent is displayed.
+
+3. Next to the current version, the status is updated with the message **Update failed**. Click the status to retry the update operation.
+
+4. A link to the previous upgrade job is available. Click the job to navigate to the specific job.
+
+5. Resolve the previous job errors.
+
+Trigger the update operation after resolving the errors from previous failed job.
+
+## Upgrade appliance
+
+By default, automatic updates are enabled on the appliance. Automatic updates are triggered at 12:00 AM local time every day, if a new version is available for any of the components.
+
+To check the update status of any of the components, navigate to appliance server and open **Microsoft Azure Appliance Configuration Manager**. Navigate to **Appliance components** and expand it to view the list of all the components and their version.
+
+If any of these need to be updated, then the **Status** reflects the same. Select the status message to upgrade the component.
+
+ ![replication appliance components](./media/upgrade-mobility-service-modernized/appliance-components.png)
+
+### Turn off auto-update
+
+1. On the server running the appliance, open the Registry Editor.
+2. Navigate to **HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureAppliance**.
+3. To turn off auto-update, create a registry value named **AutoUpdate** with a DWORD value of 0.
+
+ ![Set registry key](./media/upgrade-mobility-service-modernized/registry-key.png)
++
+### Turn on auto-update
+
+You can turn on auto-update by deleting the AutoUpdate registry key from HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureAppliance.
+
+To delete the registry key:
+
+1. On the server running the appliance, open the Registry Editor.
+2. Navigate to **HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureAppliance**.
+3. Delete the registry key **AutoUpdate** that was previously created to turn off auto-update.
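+
+If you prefer to script this change instead of using Registry Editor, the same toggle can be done with a short PowerShell sketch run from an elevated session on the appliance server (the key path is the one shown above):
+
+```powershell
+$keyPath = 'HKLM:\SOFTWARE\Microsoft\AzureAppliance'
+
+# Turn auto-update off: create (or overwrite) the AutoUpdate DWORD value and set it to 0.
+if (-not (Test-Path $keyPath)) { New-Item -Path $keyPath -Force | Out-Null }
+New-ItemProperty -Path $keyPath -Name 'AutoUpdate' -PropertyType DWord -Value 0 -Force | Out-Null
+
+# Turn auto-update back on: delete the AutoUpdate value.
+Remove-ItemProperty -Path $keyPath -Name 'AutoUpdate' -ErrorAction SilentlyContinue
+```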
+
+### Update appliance components when private endpoint is enabled
+
+When you enable private endpoints, automatic updates will not be available. To update all the components of Azure Site Recovery replication appliance, follow these steps:
+
+1. Navigate to this page and check if a new version for the components has been released for a particular version.
+2. Download packages of all the versions for which an update is available on the appliance and update all the components.
+
+#### Update Process server
+
+1. To update the process server, download the latest version [here](/azure/site-recovery/site-recovery-whats-new#supported-updates).
+2. Download the update package to the Azure Site Recovery replication appliance.
+3. Open command prompt and navigate to the folder where the update package has been placed.
+
+ `cd C:\Downloads`
+
+4. To update the process server, run the below command:
+
+ `msiexec.exe /i ProcessServer.msi ALLUSERS=1 REINSTALL=ALL REINSTALLMODE=vomus /l*v msi.log`
+
+#### Update Recovery Services agent
+
+To update the Recovery Service agent, download the latest version [here](/azure/site-recovery/site-recovery-whats-new#supported-updates).
+
+1. Download the update package to the Azure Site Recovery replication appliance.
+2. Open command prompt and navigate to the folder where the update package has been placed.
+
+ `cd C:\Downloads`
+
+3. To update the Recovery Service agent, run the below command:
+
+ `MARSAgentInstaller.exe /q /nu`
+
+#### Update remaining components of appliance
+
+1. To update the remaining components of the Azure Site Recovery replication appliance, download the latest version [here](/azure/site-recovery/site-recovery-whats-new#supported-updates).
+2. Open the downloaded `.msi` file which triggers the update automatically.
+3. Check the latest version in Windows settings > **Add or remove programs**, or use the PowerShell sketch below.
+
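+If you'd rather check from PowerShell, a minimal sketch that lists installed programs from the registry is shown below. The `*Azure Site Recovery*` display-name filter is an assumption; adjust it to match the component names on your appliance.
+
+   `Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*" | Where-Object DisplayName -like "*Azure Site Recovery*" | Select-Object DisplayName, DisplayVersion`
+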
+### Resolve issues with component upgrade
+
+If the prerequisites to upgrade any of the components are not met, the component cannot be updated. The reasons include, but are not limited to:
+
+- If one of the components of the replication appliance is on an incompatible version.
+
+- If replication appliance is unable to communicate with Azure services.
+
+If any of the above issues apply, the status is updated to **Cannot update to latest version**. Select the status to view the reasons blocking the update and the recommended actions to fix the issue. After resolving the blocking reasons, retry the update manually.
site-recovery Upgrade Mobility Service Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/upgrade-mobility-service-preview.md
- Title: Upgrade Mobility Service and appliance components - preview
-description: This article describes about automatic updates for mobility agent and the procedure involved with manual updates - preview.
-- Previously updated : 09/01/2021---
-# Upgrade Mobility Service and Appliance components (Preview)
-
-From this preview, you do not need to maintain source machine's Root/Admin credentials for performing upgrades. The credentials are required only for the initial installation of the agent on source machines. Once done, you can remove the credentials and the upgrades will occur automatically.
--
-## Update mobility agent automatically
-
-By default, automatic updates are enabled on a vault. Automatic updates will be triggered at 12:00 AM local time every day, if a new version is available.
-
-> [!NOTE]
-> If you are using private preview bits, automatic updates are blocked for the protected machines. Ensure that you setup Site Recovery on your machine again, using a fresh Preview appliance.
-
-To avail the latest features, enhancements and fixes, we recommend you to choose **Allow Site Recovery to manage** option on the **Mobility agent upgrade settings (Preview)**. Automatic updates do not require a reboot or affect on-going replication of your virtual machines. Automatic updates also ensure that all the replication appliances in the vault are automatically updated.
-
-![Automatic updates on for Mobility agent](./media/upgrade-mobility-service-preview/automatic-updates-on.png)
-
-To turn off the automatic updates, toggle the **Allow Site Recovery to manage** button.
-
-![Automatic updates off for mobility agent](./media/upgrade-mobility-service-preview/automatic-updates-off.png)
--
-## Update mobility agent manually
-
-If you have turned off automatic updates for your mobility agent, you can update the agent manually using the following procedures:
-
-### Upgrade mobility agent on multiple protected items
-
-To manually update mobility agent on multiple protected items, follow these steps:
-
-1. Navigate to **Recovery services vault** > **Replicated items** , click *New Site Recovery mobility agent update is available*. Click to install.
-
- ![Manual update of mobility agent on multiple protected items](./media/upgrade-mobility-service-preview/agent-update.png)
-
-2. Choose the source machines to update and then click **OK**.
-
- >[!NOTE]
- >If prerequisites to upgrade Mobility service are not met, then the VM cannot be selected. See information on [how to resolve](#resolve-blocking-issues-for-agent-upgrade).
--
-4. After initiating the upgrade, a Site Recovery job is created in the vault for each upgrade operation and can be tracked by navigating to **Monitoring** > **Site Recovery jobs**.
-
-### Update mobility agent for a single protected machine
-
-To update mobility agent of a protected item, follow these steps:
-1. Navigate to **recovery services vault** > **Replicated items** , select a VM.
-2. In VM's **Overview** blade, against **Agent version**, view the current version of the mobility agent. If a new update is available, the status is updated as **New update available**.
-
- ![Manual update of mobility agent on a single protected items](./media/upgrade-mobility-service-preview/agent-version.png)
-
-3. Click **New update available**, latest available version is displayed. Click **Update to this version** to initiate the update job.
-
- ![mobility agent update details](./media/upgrade-mobility-service-preview/agent-update-details.png)
-
- > [!NOTE]
- > If upgrade is blocked, check and resolve the errors as detailed [here](#resolve-blocking-issues-for-agent-upgrade).
-
-## Mobility agent on latest version
-
-After Mobility agent is updated to the latest version or has been updated automatically to the latest version, the status displays as **Up to date**.
-
-### Resolve blocking issues for agent upgrade
-
-If prerequisites to upgrade the mobility agent are not met, then VM cannot be updated. Resolve these to proceed with the upgrade.
-
-The prerequisite includes, but not limited to:
-- A pending mandatory reboot on the protected machine.
-- If the replication appliance is on an incompatible version.
-- If the replication appliance components - Proxy server or Process server - are unable to communicate with Azure services.
-- If mobility agent on the protected machine is not able to communicate with the replication appliance.
-
-In case any of the above issues are applicable, the status is updated as **Cannot update to latest version**. Click the status to view the reasons blocking the update and recommended actions to fix the issue.
-
->[!NOTE]
->After resolving the blocking reasons, wait for 30 minutes to retry the operations. It takes time for the latest information to be updated in the Site Recovery services.
-
-### Mobility agent upgrade job failure
-
-In case mobility agent upgrade operation fails (manually triggered or automatic upgrade operation), the job is updated with the reason for failure. Resolve the errors and then retry the operation.
-
-To view the failure errors, you can either navigate to Site Recovery jobs, click a specific job to fetch the resolution of errors. Or, you can use the steps below:
-
-1. Navigate to replicated items section, select a specific VM.
-
-2. In the **Overview** blade, against **Agent version**, the current version of the mobility agent displayed.
-
-3. Next to the current version, the status is updated with the message **Update failed**. Click the status to retry the update operation.
-
-4. A link to the previous upgrade job is available. Click the job to navigate to the specific job.
-
-5. Resolve the previous job errors.
-
-Trigger the update operation after resolving the errors from previous failed job.
-
-## Upgrade appliance
-
-By default, automatic updates are enabled on the appliance. Automatic updates are triggered at 12:00 AM local time every day, if a new version is available for any of the components.
-
-To check the update status of any of the components, navigate to appliance server and open **Microsoft Azure Appliance Configuration Manager**. Navigate to **Appliance components** and expand it to view the list of all the components and their version.
-
-If any of these need to be updated, then the **Status** reflects the same. Click the status message to upgrade the component.
-
- ![replication appliance components](./media/upgrade-mobility-service-preview/appliance-components.png)
-
-### Turn off auto-update
-
-1. On the server running the appliance, open the Registry Editor.
-2. Navigate to **HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureAppliance**.
-3. To turn off auto-update, create a registry key **AutoUpdate** key with DWORD value of 0.
-
- ![Set registry key](./media/upgrade-mobility-service-preview/registry-key.png)
--
-### Turn on auto-update
-
-You can turn on auto-update by deleting the AutoUpdate registry key from HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureAppliance.
-
-To delete the registry key:
-
-1. On the server running the appliance, open the Registry Editor.
-2. Navigate to **HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureAppliance**.
-3. Delete the registry key **AutoUpdate**, that was previously created to turn off auto-update.
-
-### Resolve issues with component upgrade
-
-If prerequisites to upgrade any of the components are not met, then it cannot be updated. The reasons/prerequisites include, but not limited to,
-- If one of the components of the replication appliance is on an incompatible version.
-- If replication appliance is unable to communicate with Azure services.
-
-In case any of the above issues are applicable, the status is updated as **Cannot update to latest version**. Click the status to view the reasons blocking the update and recommended actions to fix the issue. After resolving the blocking reasons, try the update manually.
site-recovery Vmware Azure About Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-about-disaster-recovery.md
After you have your Azure and on-premises infrastructure in place, you can set u
1. To understand the components that you'll need to deploy, review the [VMware to Azure architecture](vmware-azure-architecture.md), and the [physical to Azure architecture](physical-azure-architecture.md). There are a number of components, so it's important to understand how they all fit together.
2. **Source environment**: As a first step in deployment, you set up your replication source environment. You specify what you want to replicate, and where you want to replicate to.
-3. **Configuration server** (applicable to Classic): You need to set up a configuration server in your on-premises source environment:
+3. **Configuration server** (applicable for Classic): You need to set up a configuration server in your on-premises source environment:
    - The configuration server is a single on-premises machine. For VMware disaster recovery, we recommend that you deploy it as a VMware VM that can be deployed from a downloadable OVF template.
    - The configuration server coordinates communications between on-premises and Azure.
    - A couple of other components run on the configuration server machine.
      - The process server receives, optimizes, and sends replication data to cache storage account in Azure. It also handles automatic installation of the Mobility service on machines you want to replicate, and performs automatic discovery of VMs on VMware servers.
      - The master target server handles replication data during failback from Azure.
    - Set up includes registering the configuration server in the vault, downloading MySQL Server and VMware PowerCLI, and specifying the accounts created for automatic discovery and Mobility service installation.
-4. **Azure Site Recovery replication appliance** (applicable for Preview): You need to set up a replication appliance in your on-premises source environment. The appliance is the basic building block of the entire Azure Site Recovery on-premises infrastructure. For VMware disaster recovery, we recommend that [you deploy it as a VMware VM](deploy-vmware-azure-replication-appliance-preview.md#create-azure-site-recovery-replication-appliance) that can be deployed from a downloadable OVF template. Learn more about replication appliance [here](vmware-azure-architecture-preview.md).
+4. **Azure Site Recovery replication appliance** (applicable for modernized): You need to set up a replication appliance in your on-premises source environment. The appliance is the basic building block of the entire Azure Site Recovery on-premises infrastructure. For VMware disaster recovery, we recommend that [you deploy it as a VMware VM](deploy-vmware-azure-replication-appliance-modernized.md#create-azure-site-recovery-replication-appliance) that can be deployed from a downloadable OVF template. Learn more about replication appliance [here](vmware-azure-architecture-modernized.md).
5. **Target environment**: You set up your target Azure environment by specifying your Azure subscription and network settings.
6. **Replication policy**: You specify how replication should occur. Settings include how often recovery points are created and stored, and whether app-consistent snapshots should be created.
7. **Enable replication**. You enable replication for on-premises machines. If you created an account to install the Mobility service, then it will be installed when you enable replication for a machine.
site-recovery Vmware Azure Architecture Modernized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-architecture-modernized.md
+
+ Title: VMware VM disaster recovery architecture in Azure Site Recovery - Modernized
+description: This article provides an overview of components and architecture used when setting up disaster recovery of on-premises VMware VMs to Azure with Azure Site Recovery - Modernized
++ Last updated : 09/21/2022++
+# VMware to Azure disaster recovery architecture - Modernized
+
+This article describes the architecture and processes used when you deploy disaster recovery replication, failover, and recovery of VMware virtual machines (VMs) between an on-premises VMware site and Azure using the Modernized VMware/Physical machine protection experience.
+
+>[!NOTE]
+> Ensure you create a new Recovery Services vault for setting up the ASR replication appliance. Don't use an existing vault.
+
+For information about the Classic Azure Site Recovery architecture, see [this article](vmware-azure-architecture.md).
++
+## Architectural components
+
+The following table and graphic provide a high-level view of the components used for disaster recovery of VMware VMs/physical machines to Azure.
+
+[![VMware to Azure architecture](./media/vmware-azure-architecture-modernized/architecture-modernized.png)](./media/vmware-azure-architecture-modernized/architecture-modernized.png#lightbox)
+
+**Component** | **Requirement** | **Details**
+ | |
+**Azure** | An Azure subscription, Azure Storage account for cache, Managed Disk, and Azure network. | Replicated data from on-premises VMs is stored in Azure storage. Azure VMs are created with the replicated data when you run a failover from on-premises to Azure. The Azure VMs connect to the Azure virtual network when they're created.
+**Azure Site Recovery replication appliance** | This is the basic building block of the entire Azure Site Recovery on-premises infrastructure. <br/><br/> All components in the appliance coordinate with the replication appliance. This service oversees all end-to-end Site Recovery activities including monitoring the health of protected machines, data replication, automatic updates, etc. | The appliance hosts various crucial components like:<br/><br/>**Proxy server:** This component acts as a proxy channel between mobility agent and Site Recovery services in the cloud. It ensures there is no additional internet connectivity required from production workloads to generate recovery points.<br/><br/>**Discovered items:** This component gathers information of vCenter and coordinates with Azure Site Recovery management service in the cloud.<br/><br/>**Re-protection server:** This component coordinates between Azure and on-premises machines during reprotect and failback operations.<br/><br/>**Process server:** This component is used for caching, compression of data before being sent to Azure. <br/><br/> [Learn more](switch-replication-appliance-modernized.md) about replication appliance and how to use multiple replication appliances.<br/><br/>**Recovery Service agent:** This component is used for configuring/registering with Site Recovery services, and for monitoring the health of all the components.<br/><br/>**Site Recovery provider:** This component is used for facilitating re-protect. It identifies between alternate location re-protect and original location re-protect for a source machine. <br/><br/> **Replication service:** This component is used for replicating data from source location to Azure.
+**VMware servers** | VMware VMs are hosted on on-premises vSphere ESXi servers. We recommend a vCenter server to manage the hosts. | During Site Recovery deployment, you add VMware servers to the Recovery Services vault.
+**Replicated machines** | Mobility Service is installed on each VMware VM that you replicate. | We recommend that you allow automatic installation of the Mobility Service. Alternatively, you can install the [service manually](vmware-physical-mobility-service-overview.md#install-the-mobility-service-using-ui-modernized).
++
+## Set up outbound network connectivity
+
+For Site Recovery to work as expected, you need to modify outbound network connectivity to allow your environment to replicate.
+
+> [!NOTE]
+> Site Recovery doesn't support using an authentication proxy to control network connectivity.
+
+### Outbound connectivity for URLs
+
+If you're using a URL-based firewall proxy to control outbound connectivity, allow access to these URLs:
+
+| **URL** | **Details** |
+| - | -|
+| portal.azure.com | Navigate to the Azure portal. |
+| `*.windows.net `<br>`*.msftauth.net`<br>`*.msauth.net`<br>`*.microsoft.com`<br>`*.live.com `<br>`*.office.com ` | To sign-in to your Azure subscription. |
+|`*.microsoftonline.com `|Create Azure Active Directory (AD) apps for the appliance to communicate with Azure Site Recovery. |
+|management.azure.com |Create Azure AD apps for the appliance to communicate with the Azure Site Recovery service. |
+|`*.services.visualstudio.com `|Upload app logs used for internal monitoring. |
+|`*.vault.azure.net `|Manage secrets in the Azure Key Vault. Note: Ensure machines to replicate have access to this. |
+|aka.ms |Allow access to aka links. Used for Azure Site Recovery appliance updates. |
+|download.microsoft.com/download |Allow downloads from Microsoft download. |
+|`*.servicebus.windows.net `|Communication between the appliance and the Azure Site Recovery service. |
+|`*.discoverysrv.windowsazure.com `|Connect to Azure Site Recovery discovery service URL. |
+|`*.hypervrecoverymanager.windowsazure.com `|Connect to Azure Site Recovery micro-service URLs |
+|`*.blob.core.windows.net `|Upload data to Azure storage which is used to create target disks |
+|`*.backup.windowsazure.com `|Protection service URL - a microservice used by Azure Site Recovery for processing & creating replicated disks in Azure |
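+
+To verify outbound HTTPS connectivity to one of the endpoints listed above, a quick check from PowerShell on the appliance (substitute other URLs from the table as needed) is:
+
+   `Test-NetConnection -ComputerName "download.microsoft.com" -Port 443`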
+++
+## Replication process
+
+1. When you enable replication for a VM, initial replication to Azure storage begins, using the specified replication policy. Note the following:
+ - For VMware VMs, replication is block-level, near-continuous, using the Mobility service agent running on the VM.
+ - Any replication policy settings are applied:
+ - **RPO threshold**. This setting does not affect replication. It helps with monitoring. An event is raised, and optionally an email sent, if the current RPO exceeds the threshold limit that you specify.
+ - **Recovery point retention**. This setting specifies how far back in time you want to go when a disruption occurs. Maximum retention is 15 days.
+ - **App-consistent snapshots**. App-consistent snapshot can be taken every 1 to 12 hours, depending on your app needs. Snapshots are standard Azure blob snapshots. The Mobility agent running on a VM requests a VSS snapshot in accordance with this setting, and bookmarks that point-in-time as an application consistent point in the replication stream.
+ >[!NOTE]
+ >A high recovery point retention period may increase storage costs, since more recovery points need to be saved.
+
+
+2. Traffic replicates to Azure storage public endpoints over the internet. Alternately, you can use Azure ExpressRoute with [Microsoft peering](../expressroute/expressroute-circuit-peerings.md#microsoftpeering). Replicating traffic over a site-to-site virtual private network (VPN) from an on-premises site to Azure isn't supported.
+3. The initial replication operation ensures that all data on the machine at the time replication is enabled is sent to Azure. After initial replication finishes, replication of delta changes to Azure begins. Tracked changes for a machine are sent to the process server.
+4. Communication happens as follows:
+
+ - VMs communicate with the on-premises appliance on port HTTPS 443 inbound, for replication management.
+ - The appliance orchestrates replication with Azure over port HTTPS 443 outbound.
+ - VMs send replication data to the process server on port HTTPS 9443 inbound. This port can be modified.
+ - The process server receives replication data, optimizes, and encrypts it, and sends it to Azure storage over port 443 outbound.
+5. The replication data logs first land in a cache storage account in Azure. These logs are processed, and the data is stored in an Azure Managed Disk (called *asrseeddisk*). The recovery points are created on this disk.
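+
+To confirm that a Windows source machine can reach the appliance's process server on the data port described above (9443 is the default; replace the placeholder with your appliance's name or IP), a quick check is:
+
+   `Test-NetConnection -ComputerName "<appliance-name-or-IP>" -Port 9443`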
+
+## Resynchronization process
+
+1. At times, during initial replication or while transferring delta changes, there can be network connectivity issues between the source machine and the process server, or between the process server and Azure. Either of these can momentarily cause failures in data transfer to Azure.
+2. To avoid data integrity issues, and minimize data transfer costs, Site Recovery marks a machine for resynchronization.
+3. A machine can also be marked for resynchronization in situations like the following, to maintain consistency between the source machine and the data stored in Azure:
+ - If a machine undergoes force shut down
+ - If a machine undergoes configurational changes like disk resizing (modifying the size of disk from 2 TB to 4 TB)
+4. Resynchronization sends only delta data to Azure. Data transfer between on-premises and Azure is minimized by computing checksums of the data between the source machine and the data stored in Azure.
+5. By default, resynchronization is scheduled to run automatically outside office hours. If you don't want to wait for default resynchronization outside hours, you can resynchronize a VM manually. To do this, go to Azure portal, select the VM > **Resynchronize**.
+6. If default resynchronization fails outside office hours and a manual intervention is required, then an error is generated on the specific machine in Azure portal. You can resolve the error and trigger the resynchronization manually.
+7. After completion of resynchronization, replication of delta changes will resume.
+
+## Replication policy
+
+When you enable Azure VM replication, by default Site Recovery creates a new replication policy with the default settings summarized in the table.
+
+**Policy setting** | **Details** | **Default**
+ | |
+**Recovery point retention** | Specifies how long Site Recovery keeps recovery points | 1 day
+**App-consistent snapshot frequency** | How often Site Recovery takes an app-consistent snapshot | Disabled
+
+### Managing replication policies
+
+You can manage and modify the default replication policies settings as follows:
+- You can modify the settings as you enable replication.
+- You can create or edit a new replication policy while enabling replication, as sketched below.
+
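+If you manage Site Recovery with Az PowerShell, a minimal sketch of creating a VMware-to-Azure replication policy is shown below. It uses the classic VMware/physical parameter set (retention is expressed in hours there, and the policy name and values are placeholders); check that the cmdlet options match the modernized experience in your module version.
+
+   `New-AzRecoveryServicesAsrPolicy -VMwareToAzure -Name "vmware-policy" -RecoveryPointRetentionInHours 72 -ApplicationConsistentSnapshotFrequencyInHours 4 -RPOWarningThresholdInMinutes 60`
+
+Run `Set-AzRecoveryServicesAsrVaultContext` against your Recovery Services vault before calling the cmdlet.
+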
+### Multi-VM consistency
+
+If you want VMs to replicate together and have shared crash-consistent and app-consistent recovery points at failover, you can gather them together into a replication group. Multi-VM consistency impacts workload performance and should only be used for VMs running workloads that need consistency across all machines.
+++
+## Snapshots and recovery points
+
+Recovery points are created from snapshots of VM disks taken at a specific point in time. When you fail over a VM, you use a recovery point to restore the VM in the target location.
+
+When failing over, we generally want to ensure that the VM starts with no corruption or data loss, and that the VM data is consistent for the operating system, and for apps that run on the VM. This depends on the type of snapshots taken.
+
+Site Recovery takes snapshots as follows:
+
+1. Site Recovery takes crash-consistent snapshots of data by default, and app-consistent snapshots if you specify a frequency for them.
+2. Recovery points are created from the snapshots and stored in accordance with retention settings in the replication policy.
+
+### Consistency
+
+The following table explains different types of consistency.
+
+### Crash-consistent
+
+**Description** | **Details** | **Recommendation**
+ | |
+A crash-consistent snapshot captures data that was on the disk when the snapshot was taken. It doesn't include anything in memory.<br/><br/> It contains the equivalent of the on-disk data that would be present if the VM crashed or the power cord was pulled from the server at the instant that the snapshot was taken.<br/><br/> A crash-consistent snapshot doesn't guarantee data consistency for the operating system, or for apps on the VM. | Site Recovery creates crash-consistent recovery points every five minutes by default. This setting can't be modified.<br/><br/> | Today, most apps can recover well from crash-consistent points.<br/><br/> Crash-consistent recovery points are usually sufficient for the replication of operating systems, and apps such as DHCP servers and print servers.
+
+### App-consistent
+
+**Description** | **Details** | **Recommendation**
+ | |
+App-consistent recovery points are created from app-consistent snapshots.<br/><br/> An app-consistent snapshot contains all the information in a crash-consistent snapshot, plus all the data in memory and transactions in progress. | App-consistent snapshots use the Volume Shadow Copy Service (VSS):<br/><br/> 1) Azure Site Recovery uses the Copy Only backup (VSS_BT_COPY) method, which does not change Microsoft SQL's transaction log backup time and sequence number </br></br> 2) When a snapshot is initiated, VSS performs a copy-on-write (COW) operation on the volume.<br/><br/> 3) Before it performs the COW, VSS informs every app on the machine that it needs to flush its memory-resident data to disk.<br/><br/> 4) VSS then allows the backup/disaster recovery app (in this case Site Recovery) to read the snapshot data and proceed. | App-consistent snapshots are taken in accordance with the frequency you specify. This frequency should always be less than you set for retaining recovery points. For example, if you retain recovery points using the default setting of 24 hours, you should set the frequency at less than 24 hours.<br/><br/>They're more complex and take longer to complete than crash-consistent snapshots.<br/><br/> They affect the performance of apps running on a VM enabled for replication.
+
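+Because app-consistent snapshots rely on VSS, you may want to confirm that the VSS writers on a Windows source machine are healthy before depending on them. One way, from an elevated prompt, is:
+
+   `vssadmin list writers`
+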
+## Failover and failback process
+
+After replication is set up and you run a disaster recovery drill (test failover) to check that everything's working as expected, you can run failover and failback as you need to.
+
+1. You can run a failover for a single machine or create a recovery plan to fail over multiple VMs at the same time. The advantages of a recovery plan over single-machine failover include:
+ - You can model app-dependencies by including all the VMs across the app in a single recovery plan.
+ - You can add scripts, Azure runbooks, and pause for manual actions.
+2. After triggering the initial failover, you commit it to start accessing the workload from the Azure VM.
+3. When your primary on-premises site is available again, you can prepare for fail back. If you need to fail back large volumes of traffic, set up a new Azure Site Recovery replication appliance.
+
+ - Stage 1: Reprotect the Azure VMs so that they replicate from Azure back to the on-premises VMware VMs.
+ - Stage 2: Run a failover to the on-premises site.
+ - Stage 3: After workloads have failed back, you reenable replication for the on-premises VMs.
+
+## Next steps
+
+Follow [this tutorial](vmware-azure-tutorial.md) to enable VMware to Azure replication.
site-recovery Vmware Azure Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-architecture.md
Last updated 08/19/2021
This article describes the architecture and processes used when you deploy disaster recovery replication, failover, and recovery of VMware virtual machines (VMs) between an on-premises VMware site and Azure using the [Azure Site Recovery](site-recovery-overview.md) service - Classic.
-For architecture details in Preview, [see this article](vmware-azure-architecture-preview.md)
+For details about modernized architecture, [see this article](vmware-azure-architecture-modernized.md)
## Architectural components
The following table and graphic provide a high-level view of the components used
For Site Recovery to work as expected, you need to modify outbound network connectivity to allow your environment to replicate. > [!NOTE]
-> Site Recovery of VMware/Physical machines using Classic architecture doesn't support using an authentication proxy to control network connectivity. The same is supported when using the [modernized architecutre](vmware-azure-architecture-preview.md).
+> Site Recovery of VMware/Physical machines using Classic architecture doesn't support using an authentication proxy to control network connectivity. The same is supported when using the [modernized architecture](vmware-azure-architecture-modernized.md).
### Outbound connectivity for URLs
site-recovery Vmware Azure Common Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-common-questions.md
This article answers common questions that might come up when you deploy disaste
## General
-### How do I use the classic experience in the Recovery Services vault rather than the preview experience?
+### How do I use the classic experience in the Recovery Services vault rather than the modernized experience?
++
+A new and more reliable way to protect VMware virtual machines using the Azure Site Recovery replication appliance is now generally available. When a new Recovery Services vault is created, by default the modernized experience will be selected.
-A new and more reliable way to protect VMware virtual machines using the Azure Site Recovery replication appliance is now in [public preview](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094). When a new Recovery Services vault is created, by default the preview experience will be selected.
To change the experience -
To change the experience -
[![Modify VMware stack step 2](./media/vmware-azure-common-questions/change-stack-step-2.png)](./media/vmware-azure-common-questions/change-stack-step-2.png#lightbox) > [!NOTE]
-> Note that once the experience type has been switched to classic from preview, it cannot be switched again in the same Recovery Services vault. Ensure that the desired experience is selected, before saving this change.
+> Note that once the experience type has been switched to classic from modernized, it cannot be switched again in the same Recovery Services vault. Ensure that the desired experience is selected, before saving this change.
### What do I need for VMware VM disaster recovery?
site-recovery Vmware Azure Configuration Server Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-configuration-server-requirements.md
Last updated 08/19/2021
# Configuration server requirements for VMware disaster recovery to Azure >[!NOTE]
-> The information in this article applies to Azure Site Recovery Classic releases. In Preview, for replication of VMs, you need to create and use Azure site Recovery replication appliance. For detailed information about the requirements of replication appliance and how to deploy, [see this article](deploy-vmware-azure-replication-appliance-preview.md).
+> The information in this article applies to Azure Site Recovery Classic releases. In Modernized, for replication of VMs, you need to create and use an Azure Site Recovery replication appliance. For detailed information about the requirements of the replication appliance and how to deploy it, [see this article](deploy-vmware-azure-replication-appliance-modernized.md).
You deploy an on-premises configuration server when you use [Azure Site Recovery](site-recovery-overview.md) for disaster recovery of VMware VMs and physical servers to Azure.
site-recovery Vmware Azure Set Up Replication Tutorial Modernized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-set-up-replication-tutorial-modernized.md
+
+ Title: Set up VMware VM disaster recovery to Azure with Azure Site Recovery - Modernized
+description: Learn how to set up disaster recovery to Azure for on-premises VMware VMs with Azure Site Recovery - Modernized.
++ Last updated : 09/21/2022+++
+# Set up disaster recovery to Azure for on-premises VMware VMs - Modernized
+
+This article describes how to enable replication for on-premises VMware VMs, for disaster recovery to Azure using the Modernized VMware/Physical machine protection experience.
+
+For information on how to set up disaster recovery in Azure Site Recovery Classic releases, see [the tutorial](vmware-azure-tutorial.md).
+
+This is the second tutorial in a series that shows you how to set up disaster recovery to Azure for on-premises VMware VMs. In the previous tutorial, we [prepared the on-premises Azure Site Recovery replication appliance](deploy-vmware-azure-replication-appliance-modernized.md) for disaster recovery to Azure.
++
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Set up the source replication settings.
+> * Set up the replication target settings.
+> * Enable replication for a VMware VM.
+
+## Get started
+
+VMware to Azure replication includes the following procedures:
+
+- Sign in to the [Azure portal](https://portal.azure.com/).
+- Prepare an Azure account.
+- Prepare an account on the vCenter server or vSphere ESXi host, to automate VM discovery.
+- [Create a Recovery Services vault](./quickstart-create-vault-template.md?tabs=CLI)
+- Prepare infrastructure - [deploy an Azure Site Recovery replication appliance](deploy-vmware-azure-replication-appliance-modernized.md)
+- Enable replication
+
+## Prepare Azure account
+
+To create and register the Azure Site Recovery replication appliance, you need an Azure account with:
+
+- Contributor or Owner permissions on the Azure subscription.
+- Permissions to register Azure Active Directory (AAD) apps.
+- Owner or Contributor and User Access Administrator permissions on the Azure subscription to create a Key Vault, used during agentless VMware migration.
+
+If you just created a free Azure account, you're the owner of your subscription. If you're not the subscription owner, work with the owner for the required permissions.
+
+Use the following steps to assign the required permissions:
+
+1. In the Azure portal, search for **Subscriptions** and, under **Services**, select **Subscriptions**. Then use the search box to find the required Azure subscription.
+
+2. In the **Subscriptions page**, select the subscription in which you created the Recovery Services vault.
+
+3. In the subscription, select **Access control** (IAM) > **Check access**. In **Check access**, search for the relevant user account.
+
+4. In **Add a role assignment**, select **Add**, select the Contributor or Owner role, and select the account. Then select **Save**. (An equivalent Az PowerShell command is sketched after this list.)
+
+5. To register the Azure Site Recovery replication appliance, your Azure account needs permissions to register the Azure Active Directory apps.
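+
+If you prefer to script the role assignment from steps 3-4, a minimal Az PowerShell sketch (the sign-in name and subscription ID are placeholders) is:
+
+   `New-AzRoleAssignment -SignInName "user@contoso.com" -RoleDefinitionName "Contributor" -Scope "/subscriptions/<subscription-id>"`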
+
+**Follow these steps to assign required permissions**:
+
+1. In Azure portal, navigate to **Azure Active Directory** > **Users** > **User Settings**. In **User settings**, verify that Azure AD users can register applications (set to *Yes* by default).
+
+2. If the **App registrations** setting is set to *No*, request the tenant/global admin to assign the required permission. Alternately, the tenant/global admin can assign the Application Developer role to an account to allow the registration of an Azure Active Directory app.
+
+## Prepare an account for automatic discovery
+
+Site Recovery needs access to VMware servers to:
+
+- Automatically discover VMs. At least a read-only account is required.
+- Orchestrate replication, failover, and failback. You need an account that can run operations such
+ as creating and removing disks, and powering on VMs.
+
+Create the account as follows:
+
+1. To use a dedicated account, create a role at the vCenter level. Give the role a name such as
+ **Azure_Site_Recovery**.
+2. Assign the role the permissions summarized in the table below.
+3. Create a user on the vCenter server or vSphere host. Assign the role to the user.
+
+### VMware account permissions
+
+**Task** | **Role/Permissions** | **Details**
+ | |
+**VM discovery** | At least a read-only user<br/><br/> Data Center object -> Propagate to Child Object, role=Read-only | User assigned at datacenter level, and has access to all the objects in the datacenter.<br/><br/> To restrict access, assign the **No access** role with the **Propagate to child** object, to the child objects (vSphere hosts, datastores, VMs and networks).
+**Full replication, failover, failback** | Create a role (Azure_Site_Recovery) with the required permissions, and then assign the role to a VMware user or group<br/><br/> Data Center object -> Propagate to Child Object, role=Azure_Site_Recovery<br/><br/> Datastore -> Allocate space, browse datastore, low-level file operations, remove file, update virtual machine files<br/><br/> Network -> Network assign<br/><br/> Resource -> Assign VM to resource pool, migrate powered off VM, migrate powered on VM<br/><br/> Tasks -> Create task, update task<br/><br/> Virtual machine -> Configuration<br/><br/> Virtual machine -> Interact -> answer question, device connection, configure CD media, configure floppy media, power off, power on, VMware tools install<br/><br/> Virtual machine -> Inventory -> Create, register, unregister<br/><br/> Virtual machine -> Provisioning -> Allow virtual machine download, allow virtual machine files upload<br/><br/> Virtual machine -> Snapshots -> Remove snapshots, Create snapshot, Revert snapshot.| User assigned at datacenter level, and has access to all the objects in the datacenter.<br/><br/> To restrict access, assign the **No access** role with the **Propagate to child** object, to the child objects (vSphere hosts, datastores, VMs and networks).
+
+## Prepare infrastructure - set up Azure Site Recovery Replication appliance
+
+You need to [set up an Azure Site Recovery replication appliance on the on-premises environment](deploy-vmware-azure-replication-appliance-modernized.md) to channel mobility agent communications.
+
+![Replication appliance](./media/vmware-azure-set-up-replication-tutorial-modernized/replication-appliance.png)
+
+## Enable replication of VMware VMs
+
+After an Azure Site Recovery replication appliance is added to a vault, you can get started with protecting the machines.
+
+Ensure the [prerequisites](vmware-physical-azure-support-matrix.md) across storage and networking are met.
+
+Follow these steps to enable replication:
+
+1. Select **Site Recovery** under the **Getting Started** section. Select **Enable Replication (Modernized)** under the VMware section.
+
+2. Choose the machine type you want to protect through Azure Site Recovery.
+
+ > [!NOTE]
+ > In Modernized, the support is limited to virtual machines.
+
+ ![Select source machines](./media/vmware-azure-set-up-replication-tutorial-modernized/select-source.png)
+
+3. After choosing the machine type, select the vCenter server added to Azure Site Recovery replication appliance, registered in this vault.
+
+4. Search for the source machine name to protect it. To review the selected machines, select **Selected resources**.
+
+5. After you select the list of VMs, select **Next** to proceed to source settings. Here, select the replication appliance and VM credentials. These credentials are used by the Azure Site Recovery replication appliance to push the mobility agent onto the machine and complete enabling Azure Site Recovery. Ensure that accurate credentials are provided.
+
+ >[!NOTE]
+ >For Linux OS, ensure you provide the root credentials. For Windows OS, add a user account with admin privileges. These credentials will be used to push the Mobility Service onto the source machine during the enable replication operation.
+
+ ![Source settings](./media/vmware-azure-set-up-replication-tutorial-modernized/source-settings.png)
+
+6. Select **Next** to provide target region properties. By default, the vault subscription and vault resource group are selected. You can choose a subscription and resource group of your choice. Your source machines will be deployed in this subscription and resource group when you fail over in the future.
+
+ ![Target properties](./media/vmware-azure-set-up-replication-tutorial-modernized/target-properties.png)
+
+7. Next, you can select an existing Azure network or create a new target network to be used during failover. If you select **Create new**, you are redirected to the create virtual network context blade and asked to provide address space and subnet details. This network will be created in the target subscription and target resource group selected in the previous step.
+
+8. Then, provide the test failover network details.
+
+ > [!NOTE]
+ > Ensure that the test failover network is different from the failover network. This is to make sure the failover network is readily available in case of an actual disaster.
+
+9. Select the storage.
+
+ - Cache storage account:
+ Now, choose the cache storage account which Azure Site Recovery uses for staging purposes - caching and storing logs before writing the changes on to the managed disks.
+
+ By default, a new LRS v1 type storage account will be created by Azure Site Recovery for the first enable replication operation in a vault. For the next operations, the same cache storage account will be re-used.
+ - Managed disks
+
+ By default, Standard HDD managed disks are created in Azure. You can customize the type of managed disks by selecting **Customize**. Choose the type of disk based on the business requirement. Ensure an [appropriate disk type is chosen](../virtual-machines/disks-types.md#disk-type-comparison) based on the IOPS of the source machine disks. For pricing information, see the managed disk pricing document [here](https://azure.microsoft.com/pricing/details/managed-disks/).
+
+ >[!NOTE]
+ > If Mobility Service is installed manually before enabling replication, you can change the type of managed disk at a disk level. Otherwise, by default, one managed disk type can be chosen at the machine level.
+
+10. Create a new replication policy if needed.
+
+ A default replication policy gets created under the vault with 3 days recovery point retention and app-consistent recovery points disabled by default. You can create a new replication policy or modify the existing one as per your RPO requirements.
+
+ - Select **Create new**.
+
+ - Enter the **Name**.
+
+ - Enter a value for **Retention period (in days)**. You can enter any value ranging from 0 to 15.
+
+ - **Enable app consistency frequency** if you wish and enter a value for **App-consistent snapshot frequency (in hours)** as per business requirements.
+
+ - Select **OK** to save the policy.
+
+ The policy will be created and can be used for protecting the chosen source machines.
+
+11. After choosing the replication policy, select **Next**. Review the Source and Target properties. Select **Enable Replication** to initiate the operation.
+
+ ![Site recovery](./media/vmware-azure-set-up-replication-tutorial-modernized/enable-replication.png)
+
+ A job is created to enable replication of the selected machines. To track the progress, navigate to Site Recovery jobs in the recovery services vault.
++
+## Next steps
+After enabling replication, run a drill to make sure everything's working as expected.
+> [!div class="nextstepaction"]
+> [Run a disaster recovery drill](site-recovery-test-failover-to-azure.md)
site-recovery Vmware Azure Tutorial Failover Failback Modernized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-tutorial-failover-failback-modernized.md
+
+ Title: Fail over VMware VMs to Azure with Site Recovery - Modernized
+description: Learn how to fail over VMware VMs to Azure in Azure Site Recovery - Modernized
++ Last updated : 08/19/2021++
+# Fail over VMware VMs - Modernized
+
+This article describes how to fail over an on-premises VMware virtual machine (VM) to Azure with [Azure Site Recovery](site-recovery-overview.md) - Modernized.
+
+For information about failover in Classic releases, see [this article](vmware-azure-tutorial-failover-failback.md).
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Verify that the VMware VM properties conform with Azure requirements.
+> * Fail over specific VMs to Azure.
+
+> [!NOTE]
+> Tutorials show you the simplest deployment path for a scenario. They use default options where possible and don't show all possible settings and paths. If you want to learn about failover in detail, see [Fail over VMs and physical servers](site-recovery-failover.md).
+
+[Learn about](failover-failback-overview.md#types-of-failover) different types of failover. If you want to fail over multiple VMs in a recovery plan, review [this article](site-recovery-failover.md).
+
+## Before you start
+
+Complete the previous tutorials:
+
+1. Make sure you've [set up Azure](tutorial-prepare-azure.md) for on-premises disaster recovery of VMware VMs.
+2. Prepare your on-premises [VMware](vmware-azure-tutorial-prepare-on-premises.md) environment for disaster recovery.
+3. Set up disaster recovery for [VMware VMs](vmware-azure-set-up-replication-tutorial-modernized.md).
+4. Run a [disaster recovery drill](tutorial-dr-drill-azure.md) to make sure that everything's working as expected.
+
+## Verify VM properties
+
+Before you run a failover, check the VM properties to make sure that the VMs meet [Azure requirements](vmware-physical-azure-support-matrix.md#replicated-machines).
+
+Verify properties as follows:
+
+1. In **Protected Items**, select **Replicated Items**, and then select the VM you want to verify.
+
+2. In the **Replicated item** pane, there's a summary of VM information, health status, and the
+ latest available recovery points. Select **Properties** to view more details.
+
+3. In **Compute and Network**, you can modify these properties as needed:
+ * Azure name
+ * Resource group
+ * Target size
+ * Managed disk settings
+
+4. You can view and modify network settings, including:
+
+ * The network and subnet in which the Azure VM will be located after failover.
+ * The IP address that will be assigned to it.
+
+5. In **Disks**, you can see information about the operating system and data disks on the VM.
+
+## Run a failover to Azure
+
+1. In **Settings** > **Replicated items**, select the VM you want to fail over, and then select **Failover**.
+2. In **Failover**, select a **Recovery Point** to fail over to. You can use one of the following options:
+ * **Latest**: This option first processes all the data sent to Site Recovery. It provides the lowest Recovery Point Objective (RPO) because the Azure VM that's created after failover has all the data that was replicated to Site Recovery when the failover was triggered.
+ * **Latest processed**: This option fails the VM over to the latest recovery point processed by Site Recovery. This option provides a low RTO (Recovery Time Objective) because no time is spent processing unprocessed data.
+ * **Latest app-consistent**: This option fails the VM over to the latest app-consistent recovery point processed by Site Recovery.
+ * **Custom**: This option lets you specify a recovery point.
+
+3. Select **Shut down machine before beginning failover** to attempt to shut down source VMs before triggering the failover. Failover continues even if the shutdown fails. You can follow the failover progress on the **Jobs** page.
+
+ In some scenarios, failover requires additional processing that takes around 8 to 10 minutes to complete. You might notice longer test failover times for:
+
+ * VMware Linux VMs.
+ * VMware VMs that don't have the DHCP service enabled.
+ * VMware VMs that don't have the following boot drivers: storvsc, vmbus, storflt, intelide, atapi.
+
+ > [!WARNING]
+ > Don't cancel a failover in progress. Before failover is started, VM replication is stopped. If you cancel a failover in progress, failover stops, but the VM won't replicate again.
+
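+If you automate failovers, the Az ASR PowerShell module exposes an equivalent operation. A minimal sketch is shown below; it assumes the vault context is already set with `Set-AzRecoveryServicesAsrVaultContext`, that `$container` holds the protection container, and that "myVM" is a placeholder friendly name. Cmdlet behavior for the modernized experience may differ from classic, so validate in a test environment first.
+
+   `$rpi = Get-AzRecoveryServicesAsrReplicationProtectedItem -ProtectionContainer $container -FriendlyName "myVM"`
+
+   `Start-AzRecoveryServicesAsrUnplannedFailoverJob -ReplicationProtectedItem $rpi -Direction PrimaryToRecovery`
+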
+## Connect to failed-over VM
+
+1. If you want to connect to Azure VMs after failover by using Remote Desktop Protocol (RDP) and Secure Shell (SSH), [verify that the requirements have been met](failover-failback-overview.md#connect-to-azure-after-failover).
+2. After failover, go to the VM and validate by [connecting](../virtual-machines/windows/connect-logon.md) to it.
+3. Use **Change recovery point** if you want to use a different recovery point after failover. After you commit the failover in the next step, this option will no longer be available.
+4. After validation, select **Commit** to finalize the recovery point of the VM after failover.
+5. After you commit, all the other available recovery points are deleted. This step completes the failover.
+
+>[!TIP]
+> If you encounter any connectivity issues after failover, follow the [troubleshooting guide](site-recovery-failover-to-azure-troubleshoot.md).
+
+## Planned failover from Azure to on-premises
+
+You can perform a planned failover from Azure to on-premises. Since it is a planned failover activity, the recovery point is generated after the planned failover job is triggered.
+
+>[!NOTE]
+> Before proceeding, ensure that the replication health of the machine is healthy. Also ensure that the appliance and all its components are healthy too.
+
+When the planned failover is triggered, pending changes are copied to on-premises, a latest recovery point of the VM is generated, and the Azure VM is shut down. After this, the on-premises machine is turned on.
+
+After a successful planned failover, the machine will be active in your on-premises environment.
+
+> [!NOTE]
+> If the protected machine has iSCSI disks, the configuration is retained in Azure upon failover. After planned failover from Azure to on-premises, the iSCSI configuration cannot be retained, so vmdk disks are created on the on-premises machine. To remove duplicate disks, delete the iSCSI disk, as the data is replaced with vmdk disks.
++
+### Failed over VM to Azure - requirements
+
+Ensure the following for the VM, after it is failed over to Azure:
+
+1. The VM in Azure should always be switched on.
+2. Ensure mobility agent services *service 1* and *service 2* are running on the VM. This is to ensure mobility agent in the VM can communicate with Azure Site Recovery services in Azure.
+3. The URLs mentioned [here](vmware-azure-architecture-modernized.md#set-up-outbound-network-connectivity) are accessible from the VM.
+
+## Cancel planned failover
+
+If your on-premises environment is not ready, or in case of any challenges, you can cancel the planned failover.
+You can perform a planned failover any time later, once your on-premises conditions turn favorable.
+
+**To cancel a planned failover**:
+
+1. Navigate to the machine in recovery services vault and select **Cancel Failover**.
+2. Click **OK**.
+3. Ensure that you read the information about how the *cancel failover* operation proceeds.
+
+If there are any issues preventing Azure Site Recovery from successfully canceling the failed job, follow the recommended steps provided in the job. After following the recommended action, retry the cancel job.
+
+The previous planned failover operation will be canceled. The machine in Azure will be returned to the state just before *planned failover* was triggered.
+
+For planned failover, after we detach the VM disks from the appliance, we take a snapshot of the disks before powering on the machine.
+
+If the VM does not boot properly or some application does not come up properly, or for some reason you decide to cancel the planned failover and try again, then:
+
+1. We would revert all the changes made.
+
+2. Bring back the disks to the same state they were in before powering on, by using the snapshots taken earlier.
+
+3. Finally, attach the disks back to the appliance and resume the replication.
+
+This behavior is different from what was present in the Classic architecture.
+
+- In Modernized architecture, you can do the failback operation again at a later point of time.
+
+- In Classic architecture, you cannot cancel and retry the failback - if the VM does not boot up or the application does not come up or for any other reason.
++
+> [!NOTE]
+> Only planned failover from Azure to on-premises can be canceled. Failover from on-premises to Azure cannot be canceled.
+
+### Planned failover - failure
+
+If the planned failover fails, Azure Site Recovery automatically initiates a job to cancel the failed job and restore the machine to the state it was in just before the planned failover.
+
+If cancellation of the last planned failover job fails, Azure Site Recovery prompts you to initiate the cancellation manually.
+
+This information is provided as part of failed planned failover operation and as a health issue of the replicated item.
+
+If the issue persists, contact Microsoft support. **Do not** disable replication.
+
+## Re-protect the on-premises machine to Azure after successful planned failover
+
+After a successful planned failover, the machine is active in your on-premises environment. To protect the machine in the future, ensure that it is replicated to Azure (re-protected).
+
+To do this, go to the machine > **Re-protect**, select the appliance of your choice, select the replication policy and proceed.
+
+After successfully enabling replication and initial replication, recovery points will be generated to offer business continuity from unwanted disruptions.
+
+## Next steps
+
+After failover, reprotect the Azure VMs to on-premises. After the VMs are reprotected and replicating to the on-premises site, fail back from Azure when you're ready.
+
+> [!div class="nextstepaction"]
+> [Reprotect Azure VMs](vmware-azure-reprotect.md)
+> [Fail back from Azure](vmware-azure-failback.md)
site-recovery Vmware Azure Tutorial Failover Failback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-tutorial-failover-failback.md
This article describes how to fail over an on-premises VMware virtual machine (VM) to Azure with [Azure Site Recovery](site-recovery-overview.md) - Classic.
-For information about failover in preview release, [see this article](vmware-azure-tutorial-failover-failback-preview.md).
+For information about failover in modernized release, [see this article](vmware-azure-tutorial-failover-failback-modernized.md).
In this tutorial, you learn how to:
site-recovery Vmware Azure Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-tutorial.md
This article describes how to enable replication for on-premises VMware VMs, for disaster recovery to Azure using the [Azure Site Recovery](site-recovery-overview.md) service - Classic.
-For information about disaster recovery in Azure Site Recovery Preview, see [this article](vmware-azure-set-up-replication-tutorial-preview.md)
+For information about disaster recovery in Azure Site Recovery Modernized, see [this article](vmware-azure-set-up-replication-tutorial-modernized.md)
This is the third tutorial in a series that shows how to set up disaster recovery to Azure for on-premises VMware VMs. In the previous tutorial, we [prepared the on-premises VMware environment](vmware-azure-tutorial-prepare-on-premises.md) for disaster recovery to Azure.
site-recovery Vmware Physical Azure Config Process Server Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-azure-config-process-server-overview.md
Last updated 08/19/2021
This article describes the configuration, process, and master target servers used by the [Site Recovery](site-recovery-overview.md) service to replicate VMware VMs and physical servers to Azure. This article is applicable to Classic releases.
-In Preview, to replicate VMs, you need to create and use an Azure Site Recovery replication server. For information about Azure Site Recovery replication server and its components, see [this article](vmware-azure-architecture-preview.md).
+In modernized architecture, to replicate VMs, you need to create and use an Azure Site Recovery replication server. For information about Azure Site Recovery replication server and its components, see [this article](vmware-azure-architecture-modernized.md).
## Configuration server
site-recovery Vmware Physical Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-azure-support-matrix.md
Title: Support matrix for VMware/physical disaster recovery in Azure Site Recovery. description: Summarizes support for disaster recovery of VMware VMs and physical server to Azure using Azure Site Recovery. Previously updated : 05/02/2022 Last updated : 09/21/2022 # Support matrix for disaster recovery of VMware VMs and physical servers to Azure
This article summarizes supported components and settings for disaster recovery
**Scenario** | **Details** | Disaster recovery of VMware VMs | Replication of on-premises VMware VMs to Azure. You can deploy this scenario in the Azure portal or by using [PowerShell](vmware-azure-disaster-recovery-powershell.md).
-Disaster recovery of physical servers | Replication of on-premises Windows/Linux physical servers to Azure. You can deploy this scenario in the Azure portal. <br></br>(Not supported for Preview architecture)
+Disaster recovery of physical servers | Replication of on-premises Windows/Linux physical servers to Azure. You can deploy this scenario in the Azure portal.
## On-premises virtualization servers
IP address | Make sure that configuration server and process server have a stati
## Replicated machines
-In preview, replication is done by the Azure Site Recovery replication appliance. For detailed information about replication appliance, see [this article](deploy-vmware-azure-replication-appliance-preview.md).
+In Modernized, replication is done by the Azure Site Recovery replication appliance. For detailed information about replication appliance, see [this article](deploy-vmware-azure-replication-appliance-modernized.md).
Site Recovery supports replication of any workload running on a supported machine.
BTRFS | BTRFS is supported from [Update Rollup 34](https://support.microsoft.com
**Action** | **Details** |
-Resize disk on replicated VM (Not supported for Preview architecture)| Resizing up on the source VM is supported. Resizing down on the source VM is not supported. Resizing should be performed before failover, directly in the VM properties. No need to disable/re-enable replication.<br/><br/> If you change the source VM after failover, the changes aren't captures.<br/><br/> If you change the disk size on the Azure VM after failover, when you fail back, Site Recovery creates a new VM with the updates.
+Resize disk on replicated VM | Resizing up on the source VM is supported. Resizing down on the source VM is not supported. Resizing should be performed before failover, directly in the VM properties. No need to disable/re-enable replication.<br/><br/> If you change the source VM after failover, the changes aren't captured.<br/><br/> If you change the disk size on the Azure VM after failover, when you fail back, Site Recovery creates a new VM with the updates.
Add disk on replicated VM | Not supported.<br/> Disable replication for the VM, add the disk, and then re-enable replication. > [!NOTE]
-> Any change to disk identity is not supported. For example, if the disk partitioning has been changed from GPT to MBR or vice versa, then this will change the disk identity. In such a scenario, the replication will break and a fresh setup will be required.
-> For Linux machines, device name change is not supported as it has an impact on the disk identity.
-> In preview, resizing the disk size to reduce it from its original size, is not supported.
+> - Any change to disk identity is not supported. For example, if the disk partitioning has been changed from GPT to MBR or vice versa, then this will change the disk identity. In such a scenario, the replication will break and a fresh setup will be required.
+> - For Linux machines, device name change is not supported as it has an impact on the disk identity.
+> - In Modernized, resizing the disk size to reduce it from its original size, is not supported.
## Network **Component** | **Supported** |
-Host network NIC Teaming | Supported for VMware VMs. <br/><br/>Not supported for physical machine replication.
+Host network NIC Teaming | Supported for VMware VMs and physical machine replication.
Host network VLAN | Yes. Host network IPv4 | Yes. Host network IPv6 | No.
Guest/server network IPv6 | No.
Guest/server network static IP (Windows) | Yes. Guest/server network static IP (Linux) | Yes. <br/><br/>VMs are configured to use DHCP on failback. Guest/server network multiple NICs | Yes.
-Private link access to Site Recovery service | Yes. [Learn more](hybrid-how-to-enable-replication-private-endpoints.md). (Not supported for Preview architecture)
+Private link access to Site Recovery service | Yes. [Learn more](hybrid-how-to-enable-replication-private-endpoints.md).
## Azure VM network (after failover)
Soft delete | Not supported.
**Feature** | **Supported** |
-Availability sets | Yes. (Not supported for Preview architecture)
+Availability sets | Yes. Not supported for the modernized experience.
Availability zones | No HUB | Yes Managed disks | Yes ## Azure VM requirements
-On-premises VMs replicated to Azure must meet the Azure VM requirements summarized in this table. When Site Recovery runs a prerequisites check for replication, the check will fail if some of the requirements aren't met.
+On-premises VMs replicated to Azure must meet the Azure VM requirements summarized in this table. When Site Recovery runs a prerequisites check for replication, the check will fail if some of the requirements aren't met.
**Component** | **Requirements** | **Details** | |
Guest operating system architecture | 64-bit. | Check fails if unsupported.
Operating system disk size | Up to 2,048 GB for Generation 1 machines. <br> Up to 4,095 GB for Generation 2 machines. | Check fails if unsupported. Operating system disk count | 1 </br> boot and system partition on different disks is not supported | Check fails if unsupported. Data disk count | 64 or less. | Check fails if unsupported.
-Data disk size | Up to 32 TB when replicating to managed disk (9.41 version onwards)<br> Up to 4 TB when replicating to storage account </br> Each premium storage account can host up to 35 TB of data </br> Minimum disk size requirement - at least 1 GB
+Data disk size | Up to 32 TB when replicating to managed disk (version 9.41 onwards)<br> Up to 4 TB when replicating to storage account <br> Each premium storage account can host up to 35 TB of data <br> Minimum disk size requirement - at least 1 GB | Check fails if unsupported.
RAM | Site Recovery driver consumes 6% of RAM. Network adapters | Multiple adapters are supported. | Shared VHD | Not supported. | Check fails if unsupported.
site-recovery Vmware Physical Mobility Service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-mobility-service-overview.md
Previously updated : 04/28/2022 Last updated : 09/21/2022 # About the Mobility service for VMware VMs and physical servers
During a push installation of the Mobility service, the following steps are perf
## Install the Mobility service using UI (Classic) >[!NOTE]
-> This section is applicable to Azure Site Recovery - Classic. [Here are the Installation instructions for preview](#install-the-mobility-service-using-ui-preview)
+> This section is applicable to Azure Site Recovery - Classic. [Here are the Installation instructions for Modernized](#install-the-mobility-service-using-ui-modernized)
### Prerequisites - Ensure that all server configurations meet the criteria in the [Support matrix for disaster recovery of VMware VMs and physical servers to Azure](vmware-physical-azure-support-matrix.md).
During a push installation of the Mobility service, the following steps are perf
## Install the Mobility service using command prompt (Classic) >[!NOTE]
-> This section is applicable to Azure Site Recovery - Classic. [Here are the installation instructions for preview](#install-the-mobility-service-using-command-prompt-preview).
+> This section is applicable to Azure Site Recovery - Classic. [Here are the installation instructions for Modernized](#install-the-mobility-service-using-command-prompt-modernized).
### Prerequisites
As a **prerequisite to update or protect Ubuntu 14.04 machines** from 9.42 versi
1. C:\Program Files (x86)\Microsoft Azure Site Recovery\home\svsystems\pushinstallsvc\repository
-## Install the Mobility service using UI (preview)
+## Install the Mobility service using UI (Modernized)
>[!NOTE]
-> This section is applicable to Azure Site Recovery - Preview. [Here are the installation instructions for Classic](#install-the-mobility-service-using-ui-classic).
+> This section is applicable to Azure Site Recovery - Modernized. [Here are the installation instructions for Classic](#install-the-mobility-service-using-ui-classic).
### Prerequisites
Locate the installer files for the server's operating system using the followi
Wait until the installation is complete. Once done, you reach the registration step, where you can register the source machine with the appliance of your choice.
- ![Image showing Install UI option for Mobility Service](./media/vmware-physical-mobility-service-overview-preview/mobility-service-install.png)
+ ![Image showing Install UI option for Mobility Service](./media/vmware-physical-mobility-service-overview-modernized/mobility-service-install.png)
- ![Image showing Installation progress for Mobility Service](./media/vmware-physical-mobility-service-overview-preview/installation-progress.png)
+ ![Image showing Installation progress for Mobility Service](./media/vmware-physical-mobility-service-overview-modernized/installation-progress.png)
5. Copy the string present in the field **Machine Details**. This field includes information unique to the source machine. This information is required to [generate the Mobility Service configuration file](#generate-mobility-service-configuration-file).
- ![Image showing source machine string](./media/vmware-physical-mobility-service-overview-preview/source-machine-string.png)
+ ![Image showing source machine string](./media/vmware-physical-mobility-service-overview-modernized/source-machine-string.png)
6. Provide the path of the **Mobility Service configuration file** in the Unified Agent configurator. 7. Click **Register** to register your source machine with your appliance.
-## Install the Mobility service using command prompt (preview)
+## Install the Mobility service using command prompt (Modernized)
>[!NOTE]
-> This section is applicable to Azure Site Recovery - Preview. [Here are the installation instructions for Classic](#install-the-mobility-service-using-command-prompt-classic).
+> This section is applicable to Azure Site Recovery - Modernized. [Here are the installation instructions for Classic](#install-the-mobility-service-using-command-prompt-classic).
### Windows machine 1. Open command prompt and navigate to the folder where the installer file has been placed.
Locate the installer files for the server's operating system using the followi
``` Once the installation is complete, copy the string that is generated alongside the parameter *Agent Config Input*. This string is required to [generate the Mobility Service configuration file](#generate-mobility-service-configuration-file).
- ![sample string for downloading configuration flle ](./media/vmware-physical-mobility-service-overview-preview/configuration-string.png)
+ ![Sample string for downloading the configuration file](./media/vmware-physical-mobility-service-overview-modernized/configuration-string.png)
4. After successfully installing, register the source machine with the above appliance using the following command:
Syntax | `.\UnifiedAgentInstaller.exe /Platform vmware /Role MS /CSType CSPrime
`/InstallLocation`| Optional. Specifies the Mobility service installation location (any folder). `/Platform` | Mandatory. Specifies the platform on which the Mobility service is installed: <br/> **VMware** for VMware VMs/physical servers. <br/> **Azure** for Azure VMs.<br/><br/> If you're treating Azure VMs as physical machines, specify **VMware**. `/Silent`| Optional. Specifies whether to run the installer in silent mode.
-`/CSType`| Mandatory. Used to define preview or legacy architecture. (CSPrime or CSLegacy)
+`/CSType`| Mandatory. Used to define modernized or legacy architecture. (CSPrime or CSLegacy)
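For example, a silent installation on a Windows source machine that targets the modernized (CSPrime) architecture could look like the following sketch. The flags come from the syntax and settings table above; the installation folder is a placeholder, so substitute a path of your choice.

```console
REM Hedged example: adjust the installation folder to match your environment.
.\UnifiedAgentInstaller.exe /Platform vmware /Silent /Role MS /CSType CSPrime /InstallLocation "C:\Program Files (x86)\Microsoft Azure Site Recovery"
```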
#### Registration settings
Setting | Details
| Syntax | `"<InstallLocation>\UnifiedAgentConfigurator.exe" /SourceConfigFilePath "config.json" /CSType CSPrime >` `/SourceConfigFilePath` | Mandatory. Full file path of the Mobility Service configuration file. Use any valid folder.
-`/CSType` | Mandatory. Used to define preview or legacy architecture. (CSPrime or CSLegacy).
+`/CSType` | Mandatory. Used to define modernized or legacy architecture. (CSPrime or CSLegacy).
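For instance, assuming an installation location of `C:\Program Files (x86)\Microsoft Azure Site Recovery` (both the installation path and the configuration file path here are placeholders), registering a Windows source machine could look like:

```console
REM Hedged example based on the registration syntax above; adjust paths to match your environment.
"C:\Program Files (x86)\Microsoft Azure Site Recovery\UnifiedAgentConfigurator.exe" /SourceConfigFilePath "C:\Temp\config.json" /CSType CSPrime
```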
### Linux machine
Syntax | `"<InstallLocation>\UnifiedAgentConfigurator.exe" /SourceConfigFilePath
`-d` | Optional. Specifies the Mobility service installation location: `/usr/local/ASR`. `-v` | Mandatory. Specifies the platform on which Mobility service is installed. <br/> **VMware** for VMware VMs/physical servers. <br/> **Azure** for Azure VMs. `-q` | Optional. Specifies whether to run the installer in silent mode.
- `-c` | Mandatory. Used to define preview or legacy architecture. (CSPrime or CSLegacy).
+ `-c` | Mandatory. Used to define modernized or legacy architecture. (CSPrime or CSLegacy).
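As a rough sketch, a silent installation on a Linux source machine could look like the following. The installer script name `install` is an assumption based on the extracted agent package and isn't confirmed by this article; the `-d`, `-v`, `-q`, and `-c` flags come from the table above.

```console
# Hedged example: './install' is an assumed script name; the flags are the documented installation settings.
sudo ./install -q -d /usr/local/ASR -v VMware -c CSPrime
```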
#### Registration settings
Syntax | `"<InstallLocation>\UnifiedAgentConfigurator.exe" /SourceConfigFilePath
| Syntax | `<InstallLocation>/Vx/bin/UnifiedAgentConfigurator.sh -c CSPrime -S config.json -q` `-S` | Mandatory. Full file path of the Mobility Service configuration file. Use any valid folder.
- `-c` | Mandatory. Used to define preview or legacy architecture. (CSPrime or CSLegacy).
+ `-c` | Mandatory. Used to define modernized or legacy architecture. (CSPrime or CSLegacy).
`-q` | Optional. Specifies whether to run the installer in silent mode. ## Generate Mobility Service configuration file
Syntax | `"<InstallLocation>\UnifiedAgentConfigurator.exe" /SourceConfigFilePath
2. Paste the Machine Details string that you copied from the Mobility Service into the input field here. 3. Click **Download configuration file**.
- ![Image showing download configuration file option for Mobility Service](./media/vmware-physical-mobility-service-overview-preview/download-configuration-file.png)
+ ![Image showing download configuration file option for Mobility Service](./media/vmware-physical-mobility-service-overview-modernized/download-configuration-file.png)
This downloads the Mobility Service configuration file. Copy the downloaded file to a local folder on your source machine. You can place it in the same folder as the Mobility Service installer.
-See information about [upgrading the mobility services](upgrade-mobility-service-preview.md).
+See information about [upgrading the mobility services](upgrade-mobility-service-modernized.md).
## Next steps
-[Set up push installation for the Mobility service](vmware-azure-install-mobility-service.md).
+> [!div class="nextstepaction"]
+> [Set up push installation for the Mobility service](vmware-azure-install-mobility-service.md).
spring-apps How To Deploy In Azure Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-deploy-in-azure-virtual-network.md
Select the virtual network **azure-spring-apps-vnet** you previously created.
![Screenshot that shows the Access control screen.](./media/spring-cloud-v-net-injection/access-control.png)
-1. Assign the *Owner* role to the **Azure Spring Apps Resource Provider**. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md#step-2-open-the-add-role-assignment-page).
+1. Assign the *Owner* role to the **Azure Spring Cloud Resource Provider**. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md#step-2-open-the-add-role-assignment-page).
![Screenshot that shows owner assignment to resource provider.](./media/spring-cloud-v-net-injection/assign-owner-resource-provider.png)
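If you prefer to script the role assignment, a hedged Azure CLI sketch follows. The resource group name and the `<resource-provider-sp>` placeholder are assumptions; look up the service principal of the resource provider in your tenant before you run it.

```azurecli
# Get the resource ID of the virtual network created earlier (resource group name is a placeholder).
VNET_ID=$(az network vnet show \
    --resource-group <resource-group-name> \
    --name azure-spring-apps-vnet \
    --query id --output tsv)

# Assign the Owner role to the resource provider's service principal at the virtual network scope.
az role assignment create \
    --role "Owner" \
    --scope $VNET_ID \
    --assignee <resource-provider-sp>
```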
spring-apps How To Manage User Assigned Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-manage-user-assigned-managed-identities.md
Title: Manage user-assigned managed identities for an application in Azure Spring Apps (preview)
+ Title: Manage user-assigned managed identities for an application in Azure Spring Apps
description: How to manage user-assigned managed identities for applications.
zone_pivot_groups: spring-apps-tier-selection
-# Manage user-assigned managed identities for an application in Azure Spring Apps (preview)
+# Manage user-assigned managed identities for an application in Azure Spring Apps
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
spring-apps How To Use Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-use-managed-identities.md
Managed identities for Azure resources provide an automatically managed identity
| System-assigned | User-assigned | | - | - |
-| GA | Preview |
+| GA | GA |
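As a hedged sketch, assuming the Azure Spring Apps CLI extension exposes a `--user-assigned` parameter on `az spring app identity assign`, attaching an existing user-assigned managed identity to an app could look like the following; all names are placeholders.

```azurecli
# Assumed parameter set; verify with 'az spring app identity assign --help' before use.
az spring app identity assign \
    --resource-group <resource-group-name> \
    --service <Azure-Spring-Apps-instance-name> \
    --name <app-name> \
    --user-assigned <user-assigned-identity-resource-ID>
```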
## Manage managed identity for an application
storage Access Tiers Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/access-tiers-overview.md
Title: Hot, Cool, and Archive access tiers for blob data
+ Title: Hot, cool, and archive access tiers for blob data
-description: Azure storage offers different access tiers so that you can store your blob data in the most cost-effective manner based on how it's being used. Learn about the Hot, Cool, and Archive access tiers for Blob Storage.
+description: Azure storage offers different access tiers so that you can store your blob data in the most cost-effective manner based on how it's being used. Learn about the hot, cool, and archive access tiers for Blob Storage.
Previously updated : 07/13/2022 Last updated : 09/23/2022
-# Hot, Cool, and Archive access tiers for blob data
+# Hot, cool, and archive access tiers for blob data
Data stored in the cloud grows at an exponential pace. To manage costs for your expanding storage needs, it can be helpful to organize your data based on how frequently it will be accessed and how long it will be retained. Azure storage offers different access tiers so that you can store your blob data in the most cost-effective manner based on how it's being used. Azure Storage access tiers include: -- **Hot tier** - An online tier optimized for storing data that is accessed or modified frequently. The Hot tier has the highest storage costs, but the lowest access costs.-- **Cool tier** - An online tier optimized for storing data that is infrequently accessed or modified. Data in the Cool tier should be stored for a minimum of 30 days. The Cool tier has lower storage costs and higher access costs compared to the Hot tier.-- **Archive tier** - An offline tier optimized for storing data that is rarely accessed, and that has flexible latency requirements, on the order of hours. Data in the Archive tier should be stored for a minimum of 180 days.
+- **Hot tier** - An online tier optimized for storing data that is accessed or modified frequently. The hot tier has the highest storage costs, but the lowest access costs.
+- **Cool tier** - An online tier optimized for storing data that is infrequently accessed or modified. Data in the cool tier should be stored for a minimum of 30 days. The cool tier has lower storage costs and higher access costs compared to the hot tier.
+- **Archive tier** - An offline tier optimized for storing data that is rarely accessed, and that has flexible latency requirements, on the order of hours. Data in the archive tier should be stored for a minimum of 180 days.
Azure storage capacity limits are set at the account level, rather than according to access tier. You can choose to maximize your capacity usage in one tier, or to distribute capacity across two or more tiers.
Azure storage capacity limits are set at the account level, rather than accordin
## Online access tiers
-When your data is stored in an online access tier (either Hot or Cool), users can access it immediately. The Hot tier is the best choice for data that is in active use, while the Cool tier is ideal for data that is accessed less frequently, but that still must be available for reading and writing.
+When your data is stored in an online access tier (either hot or cool), users can access it immediately. The hot tier is the best choice for data that is in active use. The cool tier is ideal for data that is accessed less frequently, but that still must be available for reading and writing.
-Example usage scenarios for the Hot tier include:
+Example usage scenarios for the hot tier include:
-- Data that's in active use or is expected to be read from and written to frequently.-- Data that's staged for processing and eventual migration to the Cool access tier.
+- Data that's in active use or data that you expect will require frequent reads and writes.
+- Data that's staged for processing and eventual migration to the cool access tier.
-Usage scenarios for the Cool access tier include:
+Usage scenarios for the cool access tier include:
- Short-term data backup and disaster recovery. - Older data sets that aren't used frequently, but are expected to be available for immediate access.-- Large data sets that need to be stored in a cost-effective way while additional data is being gathered for processing.
+- Large data sets that need to be stored in a cost-effective way while other data is being gathered for processing.
-To learn how to move a blob to the Hot or Cool tier, see [Set a blob's access tier](access-tiers-online-manage.md).
+To learn how to move a blob to the hot or cool tier, see [Set a blob's access tier](access-tiers-online-manage.md).
-Data in the Cool tier has slightly lower availability, but offers the same high durability, retrieval latency, and throughput characteristics as the Hot tier. For data in the Cool tier, slightly lower availability and higher access costs may be acceptable trade-offs for lower overall storage costs, as compared to the Hot tier. For more information, see [SLA for storage](https://azure.microsoft.com/support/legal/sla/storage/v1_5/).
+Data in the cool tier has slightly lower availability, but offers the same high durability, retrieval latency, and throughput characteristics as the hot tier. For data in the cool tier, slightly lower availability and higher access costs may be acceptable trade-offs for lower overall storage costs, as compared to the hot tier. For more information, see [SLA for storage](https://azure.microsoft.com/support/legal/sla/storage/v1_5/).
-A blob in the Cool tier in a general-purpose v2 account is subject to an early deletion penalty if it's deleted or moved to a different tier before 30 days has elapsed. This charge is prorated. For example, if a blob is moved to the Cool tier and then deleted after 21 days, you'll be charged an early deletion fee equivalent to 9 (30 minus 21) days of storing that blob in the Cool tier.
+A blob in the cool tier in a general-purpose v2 account is subject to an early deletion penalty if it's deleted or moved to a different tier before 30 days has elapsed. This charge is prorated. For example, if a blob is moved to the cool tier and then deleted after 21 days, you'll be charged an early deletion fee equivalent to 9 (30 minus 21) days of storing that blob in the cool tier.
-The Hot and Cool tiers support all redundancy configurations. For more information about data redundancy options in Azure Storage, see [Azure Storage redundancy](../common/storage-redundancy.md).
+The hot and cool tiers support all redundancy configurations. For more information about data redundancy options in Azure Storage, see [Azure Storage redundancy](../common/storage-redundancy.md).
## Archive access tier
-The Archive tier is an offline tier for storing data that is rarely accessed. The Archive access tier has the lowest storage cost, but higher data retrieval costs and latency compared to the Hot and Cool tiers. Example usage scenarios for the Archive access tier include:
+The archive tier is an offline tier for storing data that is rarely accessed. The archive access tier has the lowest storage cost. However, it has higher data retrieval costs and higher latency than the hot and cool tiers. Example usage scenarios for the archive access tier include:
- Long-term backup, secondary backup, and archival datasets - Original (raw) data that must be preserved, even after it has been processed into final usable form - Compliance and archival data that needs to be stored for a long time and is hardly ever accessed
-To learn how to move a blob to the Archive tier, see [Archive a blob](archive-blob.md).
+To learn how to move a blob to the archive tier, see [Archive a blob](archive-blob.md).
-Data must remain in the Archive tier for at least 180 days or be subject to an early deletion charge. For example, if a blob is moved to the Archive tier and then deleted or moved to the Hot tier after 45 days, you'll be charged an early deletion fee equivalent to 135 (180 minus 45) days of storing that blob in the Archive tier.
+Data must remain in the archive tier for at least 180 days or be subject to an early deletion charge. For example, if a blob is moved to the archive tier and then deleted or moved to the hot tier after 45 days, you'll be charged an early deletion fee equivalent to 135 (180 minus 45) days of storing that blob in the archive tier.
-While a blob is in the Archive tier, it can't be read or modified. To read or download a blob in the Archive tier, you must first rehydrate it to an online tier, either Hot or Cool. Data in the Archive tier can take up to 15 hours to rehydrate, depending on the priority you specify for the rehydration operation. For more information about blob rehydration, see [Overview of blob rehydration from the Archive tier](archive-rehydrate-overview.md).
+While a blob is in the archive tier, it can't be read or modified. To read or download a blob in the archive tier, you must first rehydrate it to an online tier, either hot or cool. Data in the archive tier can take up to 15 hours to rehydrate, depending on the priority you specify for the rehydration operation. For more information about blob rehydration, see [Overview of blob rehydration from the archive tier](archive-rehydrate-overview.md).
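For example, a rehydration request issued with the Azure CLI could look like this sketch. The account, container, and blob names are placeholders, and `--auth-mode login` assumes you're signed in with an Azure AD identity that has access to the data.

```azurecli
# Rehydrate an archived blob to the hot tier with standard priority.
az storage blob set-tier \
    --account-name <storage-account> \
    --container-name <container> \
    --name <archived-blob> \
    --tier Hot \
    --rehydrate-priority Standard \
    --auth-mode login
```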
-An archived blob's metadata remains available for read access, so that you can list the blob and its properties, metadata, and index tags. Metadata for a blob in the Archive tier is read-only, while blob index tags can be read or written. Snapshots aren't supported for archived blobs.
+An archived blob's metadata remains available for read access, so that you can list the blob and its properties, metadata, and index tags. Metadata for a blob in the archive tier is read-only, while blob index tags can be read or written. Snapshots aren't supported for archived blobs.
-The following operations are supported for blobs in the Archive tier:
+The following operations are supported for blobs in the archive tier:
- [Copy Blob](/rest/api/storageservices/copy-blob) - [Delete Blob](/rest/api/storageservices/delete-blob)
The following operations are supported for blobs in the Archive tier:
- [Set Blob Tags](/rest/api/storageservices/set-blob-tags) - [Set Blob Tier](/rest/api/storageservices/set-blob-tier)
-Only storage accounts that are configured for LRS, GRS, or RA-GRS support moving blobs to the Archive tier. The Archive tier isn't supported for ZRS, GZRS, or RA-GZRS accounts. For more information about redundancy configurations for Azure Storage, see [Azure Storage redundancy](../common/storage-redundancy.md).
+Only storage accounts that are configured for LRS, GRS, or RA-GRS support moving blobs to the archive tier. The archive tier isn't supported for ZRS, GZRS, or RA-GZRS accounts. For more information about redundancy configurations for Azure Storage, see [Azure Storage redundancy](../common/storage-redundancy.md).
-To change the redundancy configuration for a storage account that contains blobs in the Archive tier, you must first rehydrate all archived blobs to the Hot or Cool tier. Microsoft recommends that you avoid changing the redundancy configuration for a storage account that contains archived blobs if at all possible, because rehydration operations can be costly and time-consuming.
+To change the redundancy configuration for a storage account that contains blobs in the archive tier, you must first rehydrate all archived blobs to the hot or cool tier. Because rehydration operations can be costly and time-consuming, Microsoft recommends that you avoid changing the redundancy configuration of a storage account that contains archived blobs.
-Migrating a storage account from LRS to GRS is supported as long as no blobs were moved to the Archive tier while the account was configured for LRS. An account can be moved back to GRS if the update is performed less than 30 days from the time the account became LRS, and no blobs were moved to the Archive tier while the account was set to LRS.
+Migrating a storage account from LRS to GRS is supported as long as no blobs were moved to the archive tier while the account was configured for LRS. An account can be moved back to GRS if the update is performed less than 30 days from the time the account became LRS, and no blobs were moved to the archive tier while the account was set to LRS.
## Default account access tier setting
-Storage accounts have a default access tier setting that indicates the online tier in which a new blob is created. The default access tier setting can be set to either Hot or Cool. Users can override the default setting for an individual blob when uploading the blob or changing its tier.
+Storage accounts have a default access tier setting that indicates the online tier in which a new blob is created. The default access tier setting can be set to either hot or cool. Users can override the default setting for an individual blob when uploading the blob or changing its tier.
-The default access tier for a new general-purpose v2 storage account is set to the Hot tier by default. You can change the default access tier setting when you create a storage account or after it's created. If you don't change this setting on the storage account or explicitly set the tier when uploading a blob, then a new blob is uploaded to the Hot tier by default.
+The default access tier for a new general-purpose v2 storage account is set to the hot tier by default. You can change the default access tier setting when you create a storage account or after it's created. If you don't change this setting on the storage account or explicitly set the tier when uploading a blob, then a new blob is uploaded to the hot tier by default.
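For example, setting the default access tier at account creation, and changing it later, could look like the following Azure CLI sketch; the account and resource group names are placeholders.

```azurecli
# Create a general-purpose v2 account whose default access tier is cool.
az storage account create \
    --name <storage-account> \
    --resource-group <resource-group> \
    --kind StorageV2 \
    --sku Standard_LRS \
    --access-tier Cool

# Later, change the default access tier of the account to hot.
az storage account update \
    --name <storage-account> \
    --resource-group <resource-group> \
    --access-tier Hot
```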
A blob that doesn't have an explicitly assigned tier infers its tier from the default account access tier setting. If a blob's access tier is inferred from the default account access tier setting, then the Azure portal displays the access tier as **Hot (inferred)** or **Cool (inferred)**.
-Changing the default access tier setting for a storage account applies to all blobs in the account for which an access tier hasn't been explicitly set. If you toggle the default access tier setting from Hot to Cool in a general-purpose v2 account, then you're charged for write operations (per 10,000) for all blobs for which the access tier is inferred. You're charged for both read operations (per 10,000) and data retrieval (per GB) if you toggle from Cool to Hot in a general-purpose v2 account.
+Changing the default access tier setting for a storage account applies to all blobs in the account for which an access tier hasn't been explicitly set. If you toggle the default access tier setting from hot to cool in a general-purpose v2 account, then you're charged for write operations (per 10,000) for all blobs for which the access tier is inferred. You're charged for both read operations (per 10,000) and data retrieval (per GB) if you toggle from cool to hot in a general-purpose v2 account.
-When you create a legacy Blob Storage account, you must specify the default access tier setting as Hot or Cool at create time. There's no charge for changing the default account access tier setting from Hot to Cool in a legacy Blob Storage account. You're charged for both read operations (per 10,000) and data retrieval (per GB) if you toggle from Cool to Hot in a Blob Storage account. Microsoft recommends using general-purpose v2 storage accounts rather than Blob Storage accounts when possible.
+When you create a legacy Blob Storage account, you must specify the default access tier setting as hot or cool at create time. There's no charge for changing the default account access tier setting from hot to cool in a legacy Blob Storage account. You're charged for both read operations (per 10,000) and data retrieval (per GB) if you toggle from cool to hot in a Blob Storage account. Microsoft recommends using general-purpose v2 storage accounts rather than Blob Storage accounts when possible.
> [!NOTE]
-> The Archive tier is not supported as the default access tier for a storage account.
+> The archive tier is not supported as the default access tier for a storage account.
## Setting or changing a blob's tier
To explicitly set a blob's tier when you create it, specify the tier when you up
After a blob is created, you can change its tier in either of the following ways: - By calling the [Set Blob Tier](/rest/api/storageservices/set-blob-tier) operation, either directly or via a [lifecycle management](#blob-lifecycle-management) policy. Calling [Set Blob Tier](/rest/api/storageservices/set-blob-tier) is typically the best option when you're changing a blob's tier from a hotter tier to a cooler one. -- By calling the [Copy Blob](/rest/api/storageservices/copy-blob) operation to copy a blob from one tier to another. Calling [Copy Blob](/rest/api/storageservices/copy-blob) is recommended for most scenarios where you're rehydrating a blob from the Archive tier to an online tier, or moving a blob from Cool to Hot. By copying a blob, you can avoid the early deletion penalty, if the required storage interval for the source blob hasn't yet elapsed. However, copying a blob results in capacity charges for two blobs, the source blob and the destination blob.
+- By calling the [Copy Blob](/rest/api/storageservices/copy-blob) operation to copy a blob from one tier to another. Calling [Copy Blob](/rest/api/storageservices/copy-blob) is recommended for most scenarios where you're rehydrating a blob from the archive tier to an online tier, or moving a blob from cool to hot. By copying a blob, you can avoid the early deletion penalty, if the required storage interval for the source blob hasn't yet elapsed. However, copying a blob results in capacity charges for two blobs, the source blob and the destination blob.
-Changing a blob's tier from Hot to Cool or Archive is instantaneous, as is changing from Cool to Hot. Rehydrating a blob from the Archive tier to either the Hot or Cool tier can take up to 15 hours.
+Changing a blob's tier from hot to cool or archive is instantaneous, as is changing from cool to hot. Rehydrating a blob from the archive tier to either the hot or cool tier can take up to 15 hours.
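To make the two approaches concrete, here's a hedged Azure CLI sketch. The names are placeholders, and the availability of the `--tier` flag on `az storage blob copy start` is assumed for your CLI version.

```azurecli
# Change a blob's tier in place from hot to cool.
az storage blob set-tier \
    --account-name <storage-account> \
    --container-name <container> \
    --name <blob> \
    --tier Cool \
    --auth-mode login

# Copy a blob to a new destination blob that lands in the hot tier.
az storage blob copy start \
    --account-name <storage-account> \
    --destination-container <destination-container> \
    --destination-blob <destination-blob> \
    --source-uri "https://<storage-account>.blob.core.windows.net/<container>/<blob>" \
    --tier Hot \
    --auth-mode login
```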
Keep in mind the following points when changing a blob's tier: -- You cannot call **Set Blob Tier** on a blob that uses an encryption scope. For more information about encryption scopes, see [Encryption scopes for Blob storage](encryption-scope-overview.md).-- If a blob's tier is inferred as Cool based on the storage account's default access tier and the blob is moved to the Archive tier, there's no early deletion charge.-- If a blob is explicitly moved to the Cool tier and then moved to the Archive tier, the early deletion charge applies.
+- You can't call **Set Blob Tier** on a blob that uses an encryption scope. For more information about encryption scopes, see [Encryption scopes for Blob storage](encryption-scope-overview.md).
+- If a blob's tier is inferred as cool based on the storage account's default access tier and the blob is moved to the archive tier, there's no early deletion charge.
+- If a blob is explicitly moved to the cool tier and then moved to the archive tier, the early deletion charge applies.
The following table summarizes the approaches you can take to move blobs between various tiers. | Origin/Destination | Hot tier | Cool tier | Archive tier | |--|--|--|--|
-| **Hot tier** | N/A | Change a blob's tier from Hot to Cool with **Set Blob Tier** or **Copy Blob**. [Learn more...](manage-access-tier.md)<br /><br />Move blobs to the Cool tier with a lifecycle management policy. [Learn more...](lifecycle-management-overview.md) | Change a blob's tier from Hot to Archive with **Set Blob Tier** or **Copy Blob**. [Learn more...](archive-blob.md) <br /><br />Archive blobs with a lifecycle management policy. [Learn more...](lifecycle-management-overview.md) |
-| **Cool tier** | Change a blob's tier from Cool to Hot with **Set Blob Tier** or **Copy Blob**. [Learn more...](manage-access-tier.md) <br /><br />Move blobs to the Hot tier with a lifecycle management policy. [Learn more...](lifecycle-management-overview.md) | N/A | Change a blob's tier from Cool to Archive with **Set Blob Tier** or **Copy Blob**. [Learn more...](archive-blob.md) <br /><br />Archive blobs with a lifecycle management policy. [Learn more...](lifecycle-management-overview.md) |
-| **Archive tier** | Rehydrate to Hot tier with **Set Blob Tier** or **Copy Blob**. [Learn more...](archive-rehydrate-to-online-tier.md) | Rehydrate to Cool tier with **Set Blob Tier** or **Copy Blob**. [Learn more...](archive-rehydrate-to-online-tier.md) | N/A |
+| **Hot tier** | N/A | Change a blob's tier from hot to cool with **Set Blob Tier** or **Copy Blob**. [Learn more...](manage-access-tier.md)<br /><br />Move blobs to the cool tier with a lifecycle management policy. [Learn more...](lifecycle-management-overview.md) | Change a blob's tier from hot to archive with **Set Blob Tier** or **Copy Blob**. [Learn more...](archive-blob.md) <br /><br />Archive blobs with a lifecycle management policy. [Learn more...](lifecycle-management-overview.md) |
+| **Cool tier** | Change a blob's tier from cool to hot with **Set Blob Tier** or **Copy Blob**. [Learn more...](manage-access-tier.md) <br /><br />Move blobs to the hot tier with a lifecycle management policy. [Learn more...](lifecycle-management-overview.md) | N/A | Change a blob's tier from cool to archive with **Set Blob Tier** or **Copy Blob**. [Learn more...](archive-blob.md) <br /><br />Archive blobs with a lifecycle management policy. [Learn more...](lifecycle-management-overview.md) |
+| **Archive tier** | Rehydrate to the hot tier with **Set Blob Tier** or **Copy Blob**. [Learn more...](archive-rehydrate-to-online-tier.md) | Rehydrate to the cool tier with **Set Blob Tier** or **Copy Blob**. [Learn more...](archive-rehydrate-to-online-tier.md) | N/A |
## Blob lifecycle management Blob storage lifecycle management offers a rule-based policy that you can use to transition your data to the desired access tier when your specified conditions are met. You can also use lifecycle management to expire data at the end of its life. See [Optimize costs by automating Azure Blob Storage access tiers](./lifecycle-management-overview.md) to learn more. > [!NOTE]
-> Data stored in a premium block blob storage account cannot be tiered to Hot, Cool, or Archive using [Set Blob Tier](/rest/api/storageservices/set-blob-tier) or using Azure Blob Storage lifecycle management. To move data, you must synchronously copy blobs from the block blob storage account to the Hot tier in a different account using the [Put Block From URL API](/rest/api/storageservices/put-block-from-url) or a version of AzCopy that supports this API. The **Put Block From URL** API synchronously copies data on the server, meaning the call completes only once all the data is moved from the original server location to the destination location.
+> Data stored in a premium block blob storage account cannot be tiered to hot, cool, or archive using [Set Blob Tier](/rest/api/storageservices/set-blob-tier) or using Azure Blob Storage lifecycle management. To move data, you must synchronously copy blobs from the block blob storage account to the hot tier in a different account using the [Put Block From URL API](/rest/api/storageservices/put-block-from-url) or a version of AzCopy that supports this API. The **Put Block From URL** API synchronously copies data on the server, meaning the call completes only once all the data is moved from the original server location to the destination location.
## Summary of access tier options
-The following table summarizes the features of the Hot, Cool, and Archive access tiers.
+The following table summarizes the features of the hot, cool, and archive access tiers.
| | **Hot tier** | **Cool tier** | **Archive tier** | |--|--|--|--|
The following table summarizes the features of the Hot, Cool, and Archive access
| **Latency** <br> **(Time to first byte)** | Milliseconds | Milliseconds | Hours<sup>2</sup> | | **Supported redundancy configurations** | All | All | LRS, GRS, and RA-GRS<sup>3</sup> only |
-<sup>1</sup> Objects in the Cool tier on general-purpose v2 accounts have a minimum retention duration of 30 days. For Blob Storage accounts, there's no minimum retention duration for the Cool tier.
+<sup>1</sup> Objects in the cool tier on general-purpose v2 accounts have a minimum retention duration of 30 days. For Blob Storage accounts, there's no minimum retention duration for the cool tier.
-<sup>2</sup> When rehydrating a blob from the Archive tier, you can choose either a standard or high rehydration priority option. Each offers different retrieval latencies and costs. For more information, see [Overview of blob rehydration from the Archive tier](archive-rehydrate-overview.md).
+<sup>2</sup> When rehydrating a blob from the archive tier, you can choose either a standard or high rehydration priority option. Each offers different retrieval latencies and costs. For more information, see [Overview of blob rehydration from the archive tier](archive-rehydrate-overview.md).
<sup>3</sup> For more information about redundancy configurations in Azure Storage, see [Azure Storage redundancy](../common/storage-redundancy.md).
In addition to the amount of data stored, the cost of storing data varies depend
### Data access costs
-Data access charges increase as the tier gets cooler. For data in the Cool and Archive access tier, you're charged a per-gigabyte data access charge for reads.
+Data access charges increase as the tier gets cooler. For data in the cool and archive access tier, you're charged a per-gigabyte data access charge for reads.
### Transaction costs
Changing the account access tier results in tier change charges for all blobs th
Keep in mind the following billing impacts when changing a blob's tier: - When a blob is uploaded or moved between tiers, it's charged at the corresponding rate immediately upon upload or tier change.-- When a blob is moved to a cooler tier (Hot to Cool, Hot to Archive, or Cool to Archive), the operation is billed as a write operation to the destination tier, where the write operation (per 10,000) and data write (per GB) charges of the destination tier apply.-- When a blob is moved to a warmer tier (Archive to Cool, Archive to Hot, or Cool to Hot), the operation is billed as a read from the source tier, where the read operation (per 10,000) and data retrieval (per GB) charges of the source tier apply. Early deletion charges for any blob moved out of the Cool or Archive tier may apply as well.-- While a blob is being rehydrated from the Archive tier, that blob's data is billed as archived data until the data is restored and the blob's tier changes to Hot or Cool.
+- When a blob is moved to a cooler tier, the operation is billed as a write operation to the destination tier, where the write operation (per 10,000) and data write (per GB) charges of the destination tier apply.
+- When a blob is moved to a warmer tier, the operation is billed as a read from the source tier, where the read operation (per 10,000) and data retrieval (per GB) charges of the source tier apply. Early deletion charges for any blob moved out of the cool or archive tier may apply as well.
+- While a blob is being rehydrated from the archive tier, that blob's data is billed as archived data until the data is restored and the blob's tier changes to hot or cool.
The following table summarizes how tier changes are billed. | | **Write charges (operation + access)** | **Read charges (operation + access)** | | - | -- | -- |
-| **Set Blob Tier** operation | Hot to Cool<br> Hot to Archive<br> Cool to Archive | Archive to Cool<br> Archive to Hot<br> Cool to Hot
+| **Set Blob Tier** operation | Hot to cool<br> Hot to archive<br> Cool to archive | Archive to cool<br> Archive to hot<br> Cool to hot
-Changing the access tier for a blob when versioning is enabled, or if the blob has snapshots, may result in additional charges. For information about blobs with versioning enabled, see [Pricing and billing](versioning-overview.md#pricing-and-billing) in the blob versioning documentation. For information about blobs with snapshots, see [Pricing and billing](snapshots-overview.md#pricing-and-billing) in the blob snapshots documentation.
+Changing the access tier for a blob when versioning is enabled, or if the blob has snapshots, may result in more charges. For information about blobs with versioning enabled, see [Pricing and billing](versioning-overview.md#pricing-and-billing) in the blob versioning documentation. For information about blobs with snapshots, see [Pricing and billing](snapshots-overview.md#pricing-and-billing) in the blob snapshots documentation.
## Feature support
storage Archive Rehydrate To Online Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-rehydrate-to-online-tier.md
Previously updated : 08/24/2022 Last updated : 09/29/2022
az storage blob copy start \
#### [AzCopy](#tab/azcopy)
-To copy an archived blob to an online tier with AzCopy, use [azcopy copy](..\common\storage-ref-azcopy-copy.md) command and set the `--block-blob-tier` parameter to the target tier.
-
-> [!NOTE]
-> This example encloses path arguments with single quotes (''). Use single quotes in all command shells except for the Windows Command Shell (cmd.exe). If you're using a Windows Command Shell (cmd.exe), enclose path arguments with double quotes ("") instead of single quotes (''). <br>This example also contains no SAS token because it assumes that you've provided authorization credentials by using Azure Active Directory (Azure AD). See the [Get started with AzCopy](../common/storage-use-azcopy-v10.md) article to learn about the ways that you can provide authorization credentials to the storage service.
-
-```azcopy
-azcopy copy 'https://mystorageeaccount.blob.core.windows.net/mysourcecontainer/myTextFile.txt' 'https://mystorageaccount.blob.core.windows.net/mydestinationcontainer/myTextFile.txt' --block-blob-tier=hot
-```
-
-The copy operation is synchronous so when the command returns, it indicates that all files have been copied.
+N/A
az storage blob copy start \
#### [AzCopy](#tab/azcopy)
-To copy an archived blob to a blob in an online tier in a different storage account with AzCopy, use [azcopy copy](..\common\storage-ref-azcopy-copy.md) command and set the `--block-blob-tier` parameter to the target tier.
-
-> [!NOTE]
-> This example encloses path arguments with single quotes (''). Use single quotes in all command shells except for the Windows Command Shell (cmd.exe). If you're using a Windows Command Shell (cmd.exe), enclose path arguments with double quotes ("") instead of single quotes (''). <br>This example also contains no SAS token because it assumes that you've provided authorization credentials by using Azure Active Directory (Azure AD). See the [Get started with AzCopy](../common/storage-use-azcopy-v10.md) article to learn about the ways that you can provide authorization credentials to the storage service.
-
-```azcopy
-azcopy copy 'https://mysourceaccount.blob.core.windows.net/mycontainer/myTextFile.txt' 'https://mydestinationaccount.blob.core.windows.net/mycontainer/myTextFile.txt' --block-blob-tier=hot
-```
-
-The copy operation is synchronous so when the command returns, it indicates that all files have been copied.
+N/A
storage Data Lake Storage Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-best-practices.md
Previously updated : 12/17/2021 Last updated : 09/29/2022
For general suggestions around structuring a data lake, see these articles:
## Find documentation
-Azure Data Lake Storage Gen2 is not a dedicated service or account type. It's a set of capabilities that support high throughput analytic workloads. The Data Lake Storage Gen2 documentation provides best practices and guidance for using these capabilities. Refer to the [Blob storage documentation](storage-blobs-introduction.md) content, for all other aspects of account management such as setting up network security, designing for high availability, and disaster recovery.
+Azure Data Lake Storage Gen2 isn't a dedicated service or account type. It's a set of capabilities that support high throughput analytic workloads. The Data Lake Storage Gen2 documentation provides best practices and guidance for using these capabilities. For all other aspects of account management such as setting up network security, designing for high availability, and disaster recovery, see the [Blob storage documentation](storage-blobs-introduction.md) content.
#### Evaluate feature support and known issues
Use the following pattern as you configure your account to use Blob storage feat
#### Understand the terms used in documentation
-As you move between content sets, you'll notice some slight terminology differences. For example, content featured in the [Blob storage documentation](storage-blobs-introduction.md), will use the term *blob* instead of *file*. Technically, the files that you ingest to your storage account become blobs in your account. Therefore, the term is correct. However, this can cause confusion if you're used to the term *file*. You'll also see the term *container* used to refer to a *file system*. Consider these terms as synonymous.
+As you move between content sets, you'll notice some slight terminology differences. For example, content featured in the [Blob storage documentation](storage-blobs-introduction.md), will use the term *blob* instead of *file*. Technically, the files that you ingest to your storage account become blobs in your account. Therefore, the term is correct. However, the term *blob* can cause confusion if you're used to the term *file*. You'll also see the term *container* used to refer to a *file system*. Consider these terms as synonymous.
## Consider premium
-If your workloads require a low consistent latency and/or require a high number of input output operations per second (IOP), consider using a premium block blob storage account. This type of account makes data available via high-performance hardware. Data is stored on solid-state drives (SSDs) which are optimized for low latency. SSDs provide higher throughput compared to traditional hard drives. The storage costs of premium performance are higher, but transaction costs are lower, so if your workloads execute a large number of transactions, a premium performance block blob account can be economical.
+If your workloads require a low consistent latency and/or require a high number of input output operations per second (IOP), consider using a premium block blob storage account. This type of account makes data available via high-performance hardware. Data is stored on solid-state drives (SSDs) which are optimized for low latency. SSDs provide higher throughput compared to traditional hard drives. The storage costs of premium performance are higher, but transaction costs are lower. Therefore, if your workloads execute a large number of transactions, a premium performance block blob account can be economical.
If your storage account is going to be used for analytics, we highly recommend that you use Azure Data Lake Storage Gen2 along with a premium block blob storage account. This combination of using premium block blob storage accounts along with a Data Lake Storage enabled account is referred to as the [premium tier for Azure Data Lake Storage](premium-tier-for-data-lake-storage.md).
When ingesting data from a source system, the source hardware, source network ha
### Source hardware
-Whether you are using on-premises machines or Virtual Machines (VMs) in Azure, make sure to carefully select the appropriate hardware. For disk hardware, consider using Solid State Drives (SSD) and pick disk hardware that has faster spindles. For network hardware, use the fastest Network Interface Controllers (NIC) as possible. On Azure, we recommend Azure D14 VMs, which have the appropriately powerful disk and networking hardware.
+Whether you're using on-premises machines or Virtual Machines (VMs) in Azure, make sure to carefully select the appropriate hardware. For disk hardware, consider using Solid State Drives (SSDs) and pick disk hardware that has faster spindles. For network hardware, use the fastest Network Interface Controllers (NICs) possible. On Azure, we recommend Azure D14 VMs, which have the appropriately powerful disk and networking hardware.
### Network connectivity to the storage account
Consider pre-planning the structure of your data. File format, file size, and di
Data can be ingested in various formats. Data can be appear in human readable formats such as JSON, CSV, or XML or as compressed binary formats such as `.tar.gz`. Data can come in various sizes as well. Data can be composed of large files (a few terabytes) such as data from an export of a SQL table from your on-premises systems. Data can also come in the form of a large number of tiny files (a few kilobytes) such as data from real-time events from an Internet of things (IoT) solution. You can optimize efficiency and costs by choosing an appropriate file format and file size.
-Hadoop supports a set of file formats that are optimized for storing and processing structured data. Some common formats are Avro, Parquet, and Optimized Row Columnar (ORC) format. All of these formats are machine-readable binary file formats. They are compressed to help you manage file size. They have a schema embedded in each file, which makes them self-describing. The difference between these formats is in how data is stored. Avro stores data in a row-based format and the Parquet and ORC formats store data in a columnar format.
+Hadoop supports a set of file formats that are optimized for storing and processing structured data. Some common formats are Avro, Parquet, and Optimized Row Columnar (ORC) format. All of these formats are machine-readable binary file formats. They're compressed to help you manage file size. They have a schema embedded in each file, which makes them self-describing. The difference between these formats is in how data is stored. Avro stores data in a row-based format and the Parquet and ORC formats store data in a columnar format.
-Consider using the Avro file format in cases where your I/O patterns are more write heavy, or the query patterns favor retrieving multiple rows of records in their entirety. For example, the Avro format works well with a message bus such as Event Hub or Kafka that write multiple events/messages in succession.
+Consider using the Avro file format in cases where your I/O patterns are more write heavy, or the query patterns favor retrieving multiple rows of records in their entirety. For example, the Avro format works well with a message bus such as Event Hubs or Kafka that write multiple events/messages in succession.
Consider Parquet and ORC file formats when the I/O patterns are more read heavy or when the query patterns are focused on a subset of columns in the records. Read transactions can be optimized to retrieve specific columns instead of reading the entire record.
Larger files lead to better performance and reduced costs.
Typically, analytics engines such as HDInsight have a per-file overhead that involves tasks such as listing, checking access, and performing various metadata operations. If you store your data as many small files, this can negatively affect performance. In general, organize your data into larger sized files for better performance (256 MB to 100 GB in size). Some engines and applications might have trouble efficiently processing files that are greater than 100 GB in size.
-Increasing file size can also reduce transaction costs. Read and write operations are billed in 4 megabyte increments so you're charged for operation whether or not the file contains 4 megabytes or only a few kilobytes. For pricing information, see [Azure Data Lake Storage pricing](https://azure.microsoft.com/pricing/details/storage/data-lake/).
+Increasing file size can also reduce transaction costs. Read and write operations are billed in 4-megabyte increments, so you're charged for an operation whether the file contains 4 megabytes or only a few kilobytes. For pricing information, see [Azure Data Lake Storage pricing](https://azure.microsoft.com/pricing/details/storage/data-lake/).
-Sometimes, data pipelines have limited control over the raw data, which has lots of small files. In general, we recommend that your system have some sort of process to aggregate small files into larger ones for use by downstream applications. If you're processing data in real time, you can use a real time streaming engine (such as [Azure Stream Analytics](../../stream-analytics/stream-analytics-introduction.md) or [Spark Streaming](https://databricks.com/glossary/what-is-spark-streaming)) together with a message broker (such as [Event Hub](../../event-hubs/event-hubs-about.md) or [Apache Kafka](https://kafka.apache.org/)) to store your data as larger files. As you aggregate small files into larger ones, consider saving them in a read-optimized format such as [Apache Parquet](https://parquet.apache.org/) for downstream processing.
+Sometimes, data pipelines have limited control over the raw data, which has lots of small files. In general, we recommend that your system have some sort of process to aggregate small files into larger ones for use by downstream applications. If you're processing data in real time, you can use a real time streaming engine (such as [Azure Stream Analytics](../../stream-analytics/stream-analytics-introduction.md) or [Spark Streaming](https://databricks.com/glossary/what-is-spark-streaming)) together with a message broker (such as [Event Hubs](../../event-hubs/event-hubs-about.md) or [Apache Kafka](https://kafka.apache.org/)) to store your data as larger files. As you aggregate small files into larger ones, consider saving them in a read-optimized format such as [Apache Parquet](https://parquet.apache.org/) for downstream processing.
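To gauge how much a container suffers from the small-file problem before you invest in an aggregation pipeline, you can sample blob sizes against the 4-megabyte billing increment. The following is a rough sketch only; it assumes the Az.Storage PowerShell module, and the resource group, account, container, and prefix names are placeholders.

```powershell
# Rough sketch (assumes Az.Storage; all names are placeholders): count how many
# blobs under a prefix fall below the 4 MB billing increment discussed above.
$ctx = (Get-AzStorageAccount -ResourceGroupName "<resource-group>" `
        -Name "<storage-account>").Context
$blobs = Get-AzStorageBlob -Container "<container>" -Prefix "raw/2022/" -Context $ctx

$small = @($blobs | Where-Object { $_.Length -lt 4MB })
Write-Host ("{0} of {1} blobs are smaller than 4 MB" -f $small.Count, $blobs.Count)
```

If a large share of the blobs lands below that threshold, an upstream aggregation step is usually worth the effort.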
### Directory structure
The following table recommends tools that you can use to ingest, analyze, visual
## Monitor telemetry
-Monitoring use and performance is an important part of operationalizing your service. Examples include frequent operations, operations with high latency, or operations that cause service-side throttling.
+Monitoring the use and performance is an important part of operationalizing your service. Examples include frequent operations, operations with high latency, or operations that cause service-side throttling.
All of the telemetry for your storage account is available through [Azure Storage logs in Azure Monitor](monitor-blob-storage.md). This feature integrates your storage account with Log Analytics and Event Hubs, while also enabling you to archive logs to another storage account. To see the full list of metrics and resources logs and their associated schema, see [Azure Storage monitoring data reference](monitor-blob-storage-reference.md).
-Where you choose to store your logs depends on how you plan to access them. For example, if you want to access your logs in near real time, and be able to correlate events in logs with other metrics from Azure Monitor, you can store your logs in a Log Analytics workspace. This allows you to query your logs using KQL and author queries, which enumerate the `StorageBlobLogs` table in your workspace.
+Where you choose to store your logs depends on how you plan to access them. For example, if you want to access your logs in near real time, and be able to correlate events in logs with other metrics from Azure Monitor, you can store your logs in a Log Analytics workspace. Then, query your logs by using KQL and author queries, which enumerate the `StorageBlobLogs` table in your workspace.
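As a rough illustration of that kind of query, the sketch below runs it from PowerShell. It assumes the Az.OperationalInsights module and a placeholder workspace ID; the KQL itself is illustrative and uses columns from the `StorageBlobLogs` schema.

```powershell
# Sketch (assumes Az.OperationalInsights; the workspace ID is a placeholder):
# summarize throttled blob operations (ServerBusy) from the last day.
$workspaceId = "<log-analytics-workspace-guid>"
$kql = @"
StorageBlobLogs
| where TimeGenerated > ago(1d)
| where StatusText == "ServerBusy"
| summarize ThrottledCount = count() by OperationName, bin(TimeGenerated, 1h)
| order by TimeGenerated desc
"@

$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $kql
$result.Results | Format-Table
```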
If you want to store your logs for both near real-time query and long term retention, you can configure your diagnostic settings to send logs to both a Log Analytics workspace and a storage account.
-If you want to access your logs through another query engine such as Splunk, you can configure your diagnostic settings to send logs to an Event Hub and ingest logs from the Event Hub to your chosen destination.
+If you want to access your logs through another query engine such as Splunk, you can configure your diagnostic settings to send logs to an event hub and ingest logs from the event hub to your chosen destination.
Azure Storage logs in Azure Monitor can be enabled through the Azure portal, PowerShell, the Azure CLI, and Azure Resource Manager templates. For at-scale deployments, Azure Policy can be used with full support for remediation tasks. For more information, see [Azure/Community-Policy](https://github.com/Azure/Community-Policy/tree/master/Policies/Storage/deploy-storage-monitoring-log-analytics) and [ciphertxt/AzureStoragePolicy](https://github.com/ciphertxt/AzureStoragePolicy).
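For reference, here's a minimal PowerShell sketch of creating such a diagnostic setting. It assumes a recent Az.Monitor module (cmdlet and parameter names have changed across module versions, so verify them with `Get-Help New-AzDiagnosticSetting`), and all resource IDs are placeholders.

```powershell
# Sketch only (assumes a recent Az.Monitor module; IDs are placeholders):
# send blob read/write/delete logs for a storage account to a Log Analytics workspace.
$blobServiceId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers" +
                 "/Microsoft.Storage/storageAccounts/<account>/blobServices/default"
$workspaceId   = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers" +
                 "/Microsoft.OperationalInsights/workspaces/<workspace>"

$logs = "StorageRead", "StorageWrite", "StorageDelete" |
    ForEach-Object { New-AzDiagnosticSettingLogSettingsObject -Category $_ -Enabled $true }

New-AzDiagnosticSetting -Name "blob-logs-to-workspace" -ResourceId $blobServiceId `
    -WorkspaceId $workspaceId -Log $logs
```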
storage Data Lake Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-introduction.md
description: Read an introduction to Azure Data Lake Storage Gen2. Learn key fea
Previously updated : 02/25/2020 Last updated : 02/23/2022
A fundamental part of Data Lake Storage Gen2 is the addition of a [hierarchical
Data Lake Storage Gen2 builds on Blob storage and enhances performance, management, and security in the following ways: -- **Performance** is optimized because you do not need to copy or transform data as a prerequisite for analysis. Compared to the flat namespace on Blob storage, the hierarchical namespace greatly improves the performance of directory management operations, which improves overall job performance.
+- **Performance** is optimized because you don't need to copy or transform data as a prerequisite for analysis. Compared to the flat namespace on Blob storage, the hierarchical namespace greatly improves the performance of directory management operations, which improves overall job performance.
- **Management** is easier because you can organize and manipulate files through directories and subdirectories. - **Security** is enforceable because you can define POSIX permissions on directories or individual files.
-Also, Data Lake Storage Gen2 is very cost effective because it is built on top of the low-cost [Azure Blob Storage](storage-blobs-introduction.md). The additional features further lower the total cost of ownership for running big data analytics on Azure.
+Also, Data Lake Storage Gen2 is very cost effective because it's built on top of the low-cost [Azure Blob Storage](storage-blobs-introduction.md). The extra features further lower the total cost of ownership for running big data analytics on Azure.
## Key features of Data Lake Storage Gen2
Also, Data Lake Storage Gen2 is very cost effective because it is built on top o
### Scalability
-Azure Storage is scalable by design whether you access via Data Lake Storage Gen2 or Blob storage interfaces. It is able to store and serve *many exabytes of data*. This amount of storage is available with throughput measured in gigabits per second (Gbps) at high levels of input/output operations per second (IOPS). Processing is executed at near-constant per-request latencies that are measured at the service, account, and file levels.
+Azure Storage is scalable by design whether you access via Data Lake Storage Gen2 or Blob storage interfaces. It's able to store and serve *many exabytes of data*. This amount of storage is available with throughput measured in gigabits per second (Gbps) at high levels of input/output operations per second (IOPS). Processing is executed at near-constant per-request latencies that are measured at the service, account, and file levels.
### Cost effectiveness
The following are the equivalent entities, as described by different concepts. U
| Concept | Top Level Organization | Lower Level Organization | Data Container | |-|||-|
-| Blobs - General purpose object storage | Container | Virtual directory (SDK only - does not provide atomic manipulation) | Blob |
+| Blobs - General purpose object storage | Container | Virtual directory (SDK only - doesn't provide atomic manipulation) | Blob |
| Azure Data Lake Storage Gen2 - Analytics Storage | Container | Directory | File | ## Supported Blob Storage features
storage Lifecycle Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/lifecycle-management-overview.md
description: Use Azure Storage lifecycle management policies to create automated
Previously updated : 08/24/2022 Last updated : 09/29/2022
For more information about pricing, see [Block Blob pricing](https://azure.micro
### I created a new policy. Why do the actions not run immediately?
-The platform runs the lifecycle policy once a day. Once you configure a policy, it can take up to 24 hours for some actions to run for the first time.
+The platform runs the lifecycle policy once a day. Once you configure a policy, it can take up to 24 hours to go into effect. Once the policy is in effect, it could take up to 24 hours for some actions to run for the first time.
### If I update an existing policy, how long does it take for the actions to run?
The updated policy takes up to 24 hours to go into effect. Once the policy is in
### I rehydrated an archived blob. How do I prevent it from being moved back to the Archive tier temporarily?
-If there's a lifecycle management policy in effect for the storage account, then rehydrating a blob by changing it's tier can result in a scenario where the lifecycle policy moves the blob back to the archive tier. This can happen if the last modified time, creation time, or last access time is beyond the threshold set for the policy. There's three ways to prevent this from happening:
+If there's a lifecycle management policy in effect for the storage account, then rehydrating a blob by changing its tier can result in a scenario where the lifecycle policy moves the blob back to the archive tier. This can happen if the last modified time, creation time, or last access time is beyond the threshold set for the policy. There are three ways to prevent this from happening:
- Add the `daysAfterLastTierChangeGreaterThan` condition to the tierToArchive action of the policy, as shown in the sketch after this list. This condition applies only to the last modified time. See [Use lifecycle management policies to archive blobs](archive-blob.md#use-lifecycle-management-policies-to-archive-blobs). - Disable the rule that affects this blob temporarily to prevent it from being archived again. Re-enable the rule when the blob can be safely moved back to archive tier. -- If the blob needs to stay in the hot or cool tier permanently, copy the blob to another location where the lifecycle manage policy is not in effect.
+- If the blob needs to stay in the hot or cool tier permanently, copy the blob to another location where the lifecycle management policy isn't in effect.
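For illustration, here's a minimal PowerShell sketch of the first option in the list above. It assumes the Az.Storage module and placeholder resource names; the `-DaysAfterLastTierChangeGreaterThan` parameter name is an assumption based on the policy schema, so verify it with `Get-Help Add-AzStorageAccountManagementPolicyAction` for your module version.

```powershell
# Sketch (assumes Az.Storage; resource names are placeholders, and the
# -DaysAfterLastTierChangeGreaterThan parameter name should be verified):
$action = Add-AzStorageAccountManagementPolicyAction -BaseBlobAction TierToArchive `
    -DaysAfterModificationGreaterThan 90 `
    -DaysAfterLastTierChangeGreaterThan 7
$filter = New-AzStorageAccountManagementPolicyFilter -BlobType blockBlob
$rule   = New-AzStorageAccountManagementPolicyRule -Name "archive-after-90-days" `
    -Action $action -Filter $filter

Set-AzStorageAccountManagementPolicy -ResourceGroupName "<resource-group>" `
    -StorageAccountName "<storage-account>" -Rule $rule
```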
### The blob prefix match string didn't apply the policy to the expected blobs
storage Secure File Transfer Protocol Host Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-host-keys.md
Previously updated : 09/13/2022 Last updated : 09/28/2022
This article contains a list of valid host keys used to connect to Azure Blob Storage from SFTP clients.
-Blob storage now supports the SSH File Transfer Protocol (SFTP). This support provides the ability to securely connect to Blob Storage accounts via an SFTP endpoint, allowing you to leverage SFTP for file access, file transfer, as well as file management. For more information, see [SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-support.md).
+Blob storage now supports the SSH File Transfer Protocol (SFTP). This support provides the ability to securely connect to Blob Storage via an SFTP endpoint, allowing you to leverage SFTP for file access, file transfer, as well as file management. For more information, see [SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-support.md).
When you connect to Blob Storage by using an SFTP client, you might be prompted to trust a host key. During the public preview, you can verify the host key by finding that key in the list presented in this article.
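If your SFTP client has already cached a host key, one way to check it is to print the fingerprints of the cached entries and compare them with the table below. This is a sketch that assumes the OpenSSH client and its default `known_hosts` location on Windows.

```powershell
# Sketch (assumes the OpenSSH client and the default known_hosts path):
# print the SHA256 fingerprint of each cached host key, then compare the
# entries for your Blob Storage endpoint against the table in this article.
ssh-keygen -lf "$env:USERPROFILE\.ssh\known_hosts"
```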
When you connect to Blob Storage by using an SFTP client, you might be prompted
> | Norway West | rsa-sha2-512 | `uHGfIB97I8y8nSAEciD7InBKzAx9ui5xQHAXIUo6gdE=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDPXLVCb1kqh8gERY43bvyPcfxVOUnZyWsHkEK5+QT6D7ttThO2alZbnAPMhMGpAzJieT1IArRbCjmssWQmJrhTGXSJBsi75zmku4vN+UB712EGXm308/TvClN0wlnFwFI9RWXonDBkUN1WjZnUoQuN+JNZ7ybApHEgyaiHkJfhdrtTkfzGLHqyMnESUvnEJkexLDog88xZVNL7qJTSJlq1m32JEAEDgTuO4Wb7IIr92s6GOFXKukwY8dRldXCaJvjwfBz5MEdPknvipwTHYlxYzpcCtb9qnOliDLD2g4gm9d5nq3QBlLj/4cS1M9trkAxQQfUmuVQooXfO2Zw+fOW1` | > | Norway West | ecdsa-sha2-nistp256 | `muljUcRHpId06YvSLxboTHWmq0pUXxH6QRZHspsLZvs=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOefohG21zu2JGcUvjk/qlz5sxhJcy5Vpk5Etj3cgmE/BuOTt5GR4HHpbcj/hrLxGRmAWhBV7uVMqO376pwsOBs=` | > | Norway West | ecdsa-sha2-nistp384 | `QlzJV54Ggw1AObztQjGt/J2TQ1kTiTtJDcxxIdCtWYE=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNYnNgJKaYCByLPdh21ZYEV/I4FNSZ4RWxK4bMDgNo/53HROhQmezQgoDvJFWsQiFVDXOPLXf26OeVXJ7qXAm6vS+17Z7E1iHkrqo2MqnlMTYzvBOgYNFp9GfW6lkDYfiQ==` |
+> | US Gov Virginia | ecdsa-sha2-nistp256 | `RQCpx04JVJt2SWSlBdpItBBpxGCPnMxkv6TBrwtwt54` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD7FjQs4/JsT0BS3Fk8gnOFGNRmNIKH0/pAFpUnTdh7mci4FvCS2Wl/pOi3Vzjcq+IaMa9kUuZZ94QejGQ7nY/U=` |
+> | US Gov Virginia | ecdsa-sha2-nistp384 | `eR/fcgyjTj13I9qAif2SxSfoixS8vuPh++3emjUdZWU` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKtxuygqAi2rrc+mX2GzMqHXHQwhspWFthBveUglUB8mAELFBSwEQwyETZpMuUKgFd//fia6NTfpq2d2CWPUcNjLu041n0f3ZUbDIh8To3zT7K+5nthxWURz3vWEXdPlKQ==` |
+> | US Gov Virginia | rsa-sha2-256 | `/ItawLaQuYeKzMjZWbHOrUk1NWnsd63zPsWVFVtTWK0` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC87Alyx0GHEYiPTqsLcGI2bjwk/iaSKrJmQOBClBrS23wwyH/7rc/yDlyc3X8jqLvE6E8gx7zc+y3yPcWP1/6XwA8fVPyrY+v8JYlHL/nWiadFCXYc8p3s8aNeGQwqKsaObMGw55T/bPnm7vRpQNlFFLA9dtz42tTyQg+BvNVFJAIb8/YOMTLYG+Q9ZGfPEmdP6RrLvf2vM19R/pIxJVq5Xynt2hJp1dUiHim/D+x9aesARoW/dMFmsFscHQnjPbbCjU5Zk977IMIbER2FMHBcPAKGRnKVS9Z7cOKl/C71s0PeeNWNrqDLnPYd60ndRCrVmXAYLUAeE6XR8fFb2SPd` |
+> | US Gov Virginia | rsa-sha2-512 | `0SbDc5jI2bioFnP9ljPzMsAEYty0QiLbsq1qvWBHGK4` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDNu4Oori191gsGb8rlj1XCrGW/Qtnj6rrSQK2iy7mtdzv9yyND1GLWyNKkKo4F3+MAUX3GCMIYlHEv1ucl7JrJQ58/u7pR59wN18Ehf+tU8i1EirQWRhlgvkbFfV9BPb7m6SOhfmOKSzgc1dEnTawskCXe+5Auk33SwtWEFh560N5YGC5vvTiXEuEovblg/RQRwj+` |
+> | US Gov Arizona | ecdsa-sha2-nistp256 | `NVCEDFMJplIVFSg34krIni9TGspma70KOmlYuvCVj7M` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKM1pvnkaX5Z9yaJANtlYVZYilpg0I+MB1t2y2pXCRJWy8TSTH/1xDLSsN29QvkZN68cs5774CtazYsLUjpsK04=` |
+> | US Gov Arizona | ecdsa-sha2-nistp384 | `CsqmZyqRDf5YKVt52zDgl6MOlfzvhvlJ0W+afH7TS5o` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKwIkowKaWm5o8cyM4r6jW39uHf9oS3A5aVqnpZMWBU48LrONSeQBTj0oW7IGFRujBVASn/ejk25kwaNAzm9HT4ATBFToE3YGqPVoLtJO27wGvlGdefmAvv7q5Y7AEilhw==` |
+> | US Gov Arizona | rsa-sha2-256 | `lzreQ6XfJG0sLQVXC9X52O76E0D/7dzETSoreA9cPsI` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCt8cRUseER/kSeSzD6i2rxlxHinn2uqVFtoQQGeyW2g8CtfgzjOr4BVB7Z6Bs2iIkzNGgbnKWOj8ROBmAV4YBesEgf7ZXI+YD5vXtgDCV+Mnp1pwlN8mC6ood4dh+6pSOg2dSauYSN59zRUEjnwOwmmETSUWXcjIs2fWXyneYqUZdd5hojj5mbHliqvuvu0D6IX/Id7CRh9VA13VNAp1fJ8TPUyT7d2xiBhUNWgpMB3Y96V/LNXjKHWtd9gCm96apgx215ev+wAz6BzbrGB19K5c5bxd6XGqCvm924o/y2U5TUE8kTniSFPwT/dNFSGxdBtXk23ng1yrfYE/48CcS5` |
+> | US Gov Arizona | rsa-sha2-512 | `dezlFAhCxrM3XwuCFW4PEWTzPShALMW/5qIHYSRiTZQ` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDIAphA39+aUBaDkAhjJwhZK37mKfH0Xk3W3hepz+NwJ5V/NtrHgAHtnlrWiq/F7mDM0Xa++p7mbJNAhq9iT2vhQLX/hz8ibBRz8Kz6PutYuOtapftWz7trUJXMAI1ASOWjHbOffxeQwhUt2n0HmojFp4CoeYIoLIJiZNl8SkTJir3kUjHunIvvKRcIS0FBjEG9OfdJlo0k3U2nj5QLCORw8LzxfmqjmapRRfGQct/XmkJQM5bjUTcLW7vCkrx+EtHbnHtG+q+msnoP/GIwO3qMEgRvgxRnTctV82T8hmOz+6w1loO6B8qwAFt6tnsq2+zQvNdvOwRz/o+X8YWLGIzN` |
+> | US Gov Texas | ecdsa-sha2-nistp256 | `osmHklvhKEbYW8ViKXaF0uG+bnYlCSp1XEInnzoYaWs` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNjvs/Cy4EODF21qEafVDBjL4JQ5s4m87htOESPjMAvNoZ3vfRtJy81MB7Fk6IqJcavqwFas8e3FNRcWBVseOqM=` |
+> | US Gov Texas | ecdsa-sha2-nistp384 | `MIJbuk4de6NBeStxcfCaU0o8zAemBErm4GSFFwoyivQ` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGxPcJV0UdTiqah2XeXvfGgIU8zQkmb6oeJxRtZnumlbu5DfrhaMibo3VgSK7HUphavc6DORSAKdFHoGnPHBO981FWmd9hqxJztn2KKpdyZALfhjgu0ySN2gso7kUpaxIA==` |
+> | US Gov Texas | rsa-sha2-256 | `IL6063PFm771JPM4bDuaKiireq8L7AZP+B9/DaiJ2sI` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDUTuQSTyQiJdXfDt9wfn9EpePO0SPMd+AtBNhYx1sTUbWNzBpHygfJlt2n0itodnFQ3d0fGZgxE/wHdG6zOy77pWU8i95YcxjdF+DMMY3j87uqZ8ZFk4t0YwIooAHvaBqw/PwtHYnTBr82T383pAasJTiFEd3GNDYIRgW5TZ4nnA26VoNUlUBaUXPUBfPvvqLrgcv8GBvV/MESSJTQDz1UegCqd6dGGfwdn2CWhkSjGcl17le/suND/fC5ZrvTkRNWfyeJlDkN4F+UpSUfvalBLV+QYv4ZJxsT4VagQ9n6wTBTDAvMu3CTP8XmAYEIGLf9YCbjxcTC+UywaL1Nk++x` |
+> | US Gov Texas | rsa-sha2-512 | `NZo9nBE/L1k6QyUcQZ5GV/0yg6rU2RTUFl+zvlvZvB4` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCwNs5md1kYKAxFruSF+I4qS1IOuKw6LS9oJpcASnXpPi//PI5aXlLpy5AmeePEHgF+O0pSNs6uGWC+/T2kYsYkTvIieSQEzyXfV+ZDVqCHBZuezoM0tQxc9tMLr8dUExow1QY5yizj35s1hPHjr2EQThCLhl5M0g3s+ktKMb77zNX7DA3eKhRnK/ulOtMmewrGDg9/ooOa7ZWIIPPY0mUDs5Get/EWF1KCOABOacdkXZOPoUaD0fTEOhU+xd66CBRuk9SIFGWmQw2GiBoeF0432sEAfc3ZptyzSmCamjtsfihFeHXUij8MH8UiTZopV3JjUO6xN7MCx9BJFcRxtEQF` |
+> | US DoD East | ecdsa-sha2-nistp256 | `dk3jE5LOhsxfdaeeRPmuQ33z/ZO55XRLo8FA3I6YqAk` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD7vMN0MTHRlUB8/35XBfYIhk8RZjwHyh6GrIDHgsjQPiZKUO/blq6qZ57WRmWmo7F+Rtw6Rfiub53a6+yZfgB4=` |
+> | US DoD East | ecdsa-sha2-nistp384 | `6nTqoKVqqpBl7k9m/6joVb+pIqKvdssxO5JRPkiPYeE` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOwn2WSEmmec+DlJjPe0kjrdEmN/6tIQhN8HxQMq/G81c/FndVVFo97HQBYzo1SxCLCwZJRYQwFef3FWBzKFK7bqtpB055LM58FZv59QNCIXxF+wafqWolrKNGyL8k2Vvw==` |
+> | US DoD East | rsa-sha2-256 | `xzDw4ZHUTvtpy/GElnkDg95GRD8Wwj7+AuvCUcpIEVo` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDrAT5kTs5GXMoc+fSX1VScJ4uOFAaeA7i1CVZyWCcVNJrz2iHyZAncdxJ86BS8O2DceOpzjiFHr6wvg2OrFmByamDAVQCZQLPm+XfYV7Xk0cxZYk5RzNDQV87hEPYprNgZgPuM3tLyHVg76Zhx5LDhX7QujOIVIxQLkJaMJ/GIT+tOWzPOhxpWOGEXiifi4MNp/0uwyKbueoX7V933Bu2fz0VMJdKkprS5mXnZdcM9Y/ZvPFeKaX55ussBgcdfjaeK3emwdUUy4SaLMaTG6b1TgVaTQehMvC8ufZ3qfpwSGnuHrz1t7gKdB3w7/Q7UFXtBatWroZ10dnyZ/9Nn4V5R` |
+> | US DoD East | rsa-sha2-512 | `3rvLtZPtROldWm2TCI//vI8IW0RGSbvlrHSU4e4BQcA` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDrAT5kTs5GXMoc+fSX1VScJ4uOFAaeA7i1CVZyWCcVNJrz2iHyZAncdxJ86BS8O2DceOpzjiFHr6wvg2OrFmByamDAVQCZQLPm+XfYV7Xk0cxZYk5RzNDQV87hEPYprNgZgPuM3tLyHVg76Zhx5LDhX7QujOIVIxQLkJaMJ/GIT+tOWzPOhxpWOGEXiifi4MNp/0uwyKbueoX7V933Bu2fz0VMJdKkprS5mXnZdcM9Y/ZvPFeKaX55ussBgcdfjaeK3emwdUUy4SaLMaTG6b1TgVaTQehMvC8ufZ3qfpwSGnuHrz1t7gKdB3w7/Q7UFXtBatWroZ10dnyZ/9Nn4V5R` |
+> | US DoD Central | ecdsa-sha2-nistp256 | `03WHYAk6NEf2qYT62cwilvrkQ8rZCwdi+9M6yTZ9zjc` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCVsp8VO4aE6PwKD4nKZDU0xNx2CyNvw7xU3/KjXgTPWqNpbOlr6JmHG67ozOj+JUtLRMX15cLbDJgX9G9/EZd8=` |
+> | US DoD Central | ecdsa-sha2-nistp384 | `do10RyIoAbeuNClEvjfq5OvNTbcjKO6PPaCm1cGiFDA` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKYiTs82RA54EX24BESc5hFy5Zd+bPo4UTI/QFn+koMnv2QWSc9SYIumaVtl0bIWnEvdlOA4F2IJ1hU5emvDHM2syOPxK7wTPms9uLtOJBNekQaAUw61CJZ4LWlPQorYNQ==` |
+> | US DoD Central | rsa-sha2-256 | `htGg4hqLQo4QQ92GBDJBqo7KfMwpKpzs9KyB07jyT9w` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDVHNOQQpJY9Etaxa+XKttw4qkhS9ZsZBpNIsEM4UmfAq6yMmtXo1EXZ/LDt4uALIcHdt3tuEkt0kZ/d3CB+0oQggqaBXcr9ueJBofoyCwoW+QcPho5GSE5ecoFEMLG/u4RIXhDTIms/8MDiCvbquUBbR3QBh5I2d6mKJJej0cBeAH/Sh7+U+30hJqnrDm4BMA2F6Hztf19nzAmw7LotlH5SLMEOGVdzl28rMeDZ+O3qwyZJJyeXei1BiYFmOZDg4FjG9sEDwMTRnTQHNj2drNtRqWt46kjQ1MjEscoy8N/MlcZtGj1tKURL909l3tUi3fIth4eAxMaAkq023/mOK1x` |
+> | US DoD Central | rsa-sha2-512 | `ho5JpqNw8wV20XjrDWy/zycyUMwUASinQd0gj8AJbkE` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCT/6XYwIYUBHLTaHW8q7jE2fdLMWZpf1ohdrUXkfSksL3V8NeZ3j12Jm/MyZo4tURpPPcWJKT+0zcEyon9/AfBi6lpxhKUZQfgWQo7fUBDy1K4hyVt9IcnmNb22kX8y3Y6u/afeqCR8ukPd0uBhRYyzZWvyHzfVjXYSkw2ShxCRRQz4RjaljoSPPZIGFa2faBG8NQgyuCER8mZ72T3aq8YSUmWvpSojzfLr7roAEJdPHyRPFzM/jy1FSEanEuf6kF1Y+i1AbbH0dFDLU7AdxfCB4sHSmy6Xxnk7yYg5PYuxog7MH27wbg4+3+qUhBNcoNU33RNF9TdfVU++xNhOTH1` |
storage Secure File Transfer Protocol Support How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support-how-to.md
Title: Connect to Azure Blob Storage using SFTP (preview) | Microsoft Docs
-description: Learn how to enable SFTP support for your Azure Blob Storage account so that you can directly connect to your Azure Storage account by using an SFTP client.
+description: Learn how to enable SFTP support for Azure Blob Storage so that you can directly connect to your Azure Storage account by using an SFTP client.
Previously updated : 09/15/2022 Last updated : 09/29/2022
To learn more about the SFTP permissions model, see [SFTP Permissions model](sec
> [!IMPORTANT] > While you can enable both forms of authentication, SFTP clients can connect by using only one of them. Multifactor authentication, whereby both a valid password and a valid public and private key pair are required for successful authentication is not supported.
- If you select **SSH Password**, then your password will appear when you've completed all of the steps in the **Add local user** configuration pane. Note that SSH passwords are generated by Azure and are minimum 88 characters in length.
+ If you select **SSH Password**, then your password will appear when you've completed all of the steps in the **Add local user** configuration pane. SSH passwords are generated by Azure and are a minimum of 88 characters in length.
If you select **SSH Key pair**, then select **Public key source** to specify a key source.
To learn more about the SFTP permissions model, see [SFTP Permissions model](sec
$sshkey = New-AzStorageLocalUserSshPublicKey -Key $sshkey -Description "description for ssh public key" ```
-4. Create a local user by using the **Set-AzStorageLocalUser** command. Set the `-PermissionScope` parameter to the permission scope object that you created earlier. If you are using an SSH key, then set the `SshAuthorization` parameter to the public key object that you created in the previous step. If you want to use a password to authenticate this local user, then set the `-HasSshPassword` parameter to `$true`.
+4. Create a local user by using the **Set-AzStorageLocalUser** command. Set the `-PermissionScope` parameter to the permission scope object that you created earlier. If you're using an SSH key, then set the `SshAuthorization` parameter to the public key object that you created in the previous step. If you want to use a password to authenticate this local user, then set the `-HasSshPassword` parameter to `$true`.
The following example creates a local user and then prints the key and permission scopes to the console.
See the documentation of your SFTP client for guidance about how to connect and
## Connect using a custom domain
-When using custom domains the connection string is `myaccount.myuser@customdomain.com`. If home directory has not been specified for the user, it is `myaccount.mycontainer.myuser@customdomain.com`.
+When using custom domains, the connection string is `myaccount.myuser@customdomain.com`. If a home directory hasn't been specified for the user, it's `myaccount.mycontainer.myuser@customdomain.com`.
> [!IMPORTANT] > Ensure your DNS provider does not proxy requests. Proxying may cause the connection attempt to time out. ## Connect using a private endpoint
-When using a private endpoint the connection string is `myaccount.myuser@myaccount.privatelink.blob.core.windows.net`. If home directory has not been specified for the user, it is `myaccount.mycontainer.myuser@myaccount.privatelink.blob.core.windows.net`.
+When using a private endpoint, the connection string is `myaccount.myuser@myaccount.privatelink.blob.core.windows.net`. If a home directory hasn't been specified for the user, it's `myaccount.mycontainer.myuser@myaccount.privatelink.blob.core.windows.net`.
> [!NOTE] > Ensure you change networking configuration to "Enabled from selected virtual networks and IP addresses" and select your private endpoint, otherwise the regular SFTP endpoint will still be publicly accessible. ## Networking considerations
-SFTP is a platform level service, so port 22 will be open even if the account option is disabled. If SFTP access is not configured then all requests will receive a disconnect from the service. When using SFTP, you may want to limit public access through configuration of a firewall, virtual network, or private endpoint. These settings are enforced at the application layer, which means they are not specific to SFTP and will impact connectivity to all Azure Storage Endpoints. For more information on firewalls and network configuration, see [Configure Azure Storage firewalls and virtual networks](../common/storage-network-security.md).
+SFTP is a platform-level service, so port 22 will be open even if the account option is disabled. If SFTP access is not configured, then all requests will receive a disconnect from the service. When using SFTP, you may want to limit public access through configuration of a firewall, virtual network, or private endpoint. These settings are enforced at the application layer, which means they aren't specific to SFTP and will impact connectivity to all Azure Storage endpoints. For more information on firewalls and network configuration, see [Configure Azure Storage firewalls and virtual networks](../common/storage-network-security.md).
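For example, the following sketch denies public access by default and then allows a single virtual network subnet. It assumes the Az.Storage module, placeholder names and IDs, and a subnet that already has the Microsoft.Storage service endpoint enabled; as noted above, these rules affect every endpoint on the account, not just SFTP.

```powershell
# Sketch (assumes Az.Storage; names and IDs are placeholders): deny public
# access by default, then allow one virtual network subnet. These rules apply
# to all storage account endpoints, not only SFTP.
$subnetId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers" +
            "/Microsoft.Network/virtualNetworks/<vnet>/subnets/<subnet>"

Update-AzStorageAccountNetworkRuleSet -ResourceGroupName "<resource-group>" `
    -Name "<storage-account>" -DefaultAction Deny

Add-AzStorageAccountNetworkRule -ResourceGroupName "<resource-group>" `
    -Name "<storage-account>" -VirtualNetworkResourceId $subnetId
```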
> [!NOTE] > Audit tools that attempt to determine TLS support at the protocol layer may return TLS versions in addition to the minimum required version when run directly against the storage account endpoint. For more information, see [Enforce a minimum required version of Transport Layer Security (TLS) for requests to a storage account](../common/transport-layer-security-configure-minimum-version.md).
storage Secure File Transfer Protocol Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support.md
Previously updated : 09/13/2022 Last updated : 09/29/2022
# SSH File Transfer Protocol (SFTP) support for Azure Blob Storage (preview)
-Blob storage now supports the SSH File Transfer Protocol (SFTP). This support provides the ability to securely connect to Blob Storage accounts via an SFTP endpoint, allowing you to use SFTP for file access, file transfer, and file management.
+Blob storage now supports the SSH File Transfer Protocol (SFTP). This support lets you securely connect to Blob Storage via an SFTP endpoint, allowing you to use SFTP for file access, file transfer, and file management.
> [!IMPORTANT] > SFTP support is currently in PREVIEW.
You can authenticate local users connecting via SFTP by using a password or a Se
#### Passwords
-You cannot set custom passwords, rather Azure generates one for you. If you choose password authentication, then your password will be provided after you finish configuring a local user. Make sure to copy that password and save it in a location where you can find it later. You won't be able to retrieve that password from Azure again. If you lose the password, you'll have to generate a new one. For security reasons, you can't set the password yourself.
+For security reasons, you can't set custom passwords; instead, Azure generates one for you. If you choose password authentication, then your password will be provided after you finish configuring a local user. Make sure to copy that password and save it in a location where you can find it later. You won't be able to retrieve that password from Azure again. If you lose the password, you'll have to generate a new one.
#### SSH key pairs
When performing write operations on blobs in sub directories, Read permission is
## Home directory
-As you configure permissions, you have the option of setting a home directory for the local user. If no other container is specified in an SFTP connection request, then this is the directory that the user connects to by default. For example, consider the following request made by using [Open SSH](/windows-server/administration/openssh/openssh_overview). This request doesn't specify a container or directory name as part of the `sftp` command.
+As you configure permissions, you have the option of setting a home directory for the local user. If no other container is specified in an SFTP connection request, then the home directory is the directory that the user connects to by default. For example, consider the following request made by using [Open SSH](/windows-server/administration/openssh/openssh_overview). This request doesn't specify a container or directory name as part of the `sftp` command.
```powershell sftp myaccount.myusername@myaccount.blob.core.windows.net
You can use many different SFTP clients to securely connect and then transfer fi
SFTP support for Azure Blob Storage currently limits its cryptographic algorithm support based on security considerations. We strongly recommend that customers utilize [Microsoft Security Development Lifecycle (SDL) approved algorithms](/security/sdl/cryptographic-recommendations) to securely access their data.
-At this time, in accordance with the Microsoft Security SDL, we do not plan on supporting the following: `ssh-dss`, `diffie-hellman-group14-sha1`, `diffie-hellman-group1-sha1`, `hmac-sha1`, `hmac-sha1-96`. Algorithm support is subject to change in the future.
+At this time, in accordance with the Microsoft Security SDL, we don't plan on supporting the following: `ssh-dss`, `diffie-hellman-group14-sha1`, `diffie-hellman-group1-sha1`, `hmac-sha1`, `hmac-sha1-96`. Algorithm support is subject to change in the future.
## Connecting with SFTP
The following clients have compatible algorithm support with SFTP for Azure Blob
- Workday - XFB.Gateway
-The supported client list above is not exhaustive and may change over time.
+The supported client list above isn't exhaustive and may change over time.
## Limitations and known issues
storage Storage Blob Scalable App Upload Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-scalable-app-upload-files.md
ms.devlang: csharp
# Upload large amounts of random data in parallel to Azure storage
-This tutorial is part two of a series. This tutorial shows you deploy an application that uploads large amount of random data to an Azure storage account.
+This tutorial is part two of a series. It shows you how to deploy an application that uploads large amounts of random data to an Azure storage account.
In part two of the series, you learn how to:
storage Storage Blob Static Website https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-static-website.md
Previously updated : 11/04/2021 Last updated : 09/29/2022
Users can view site content from a browser by using the public URL of the websit
The index document that you specify when you enable static website hosting appears when users open the site and don't specify a specific file (For example: `https://contosoblobaccount.z22.web.core.windows.net`).
-If the server returns a 404 error, and you have not specified an error document when you enabled the website, then a default 404 page is returned to the user.
+If the server returns a 404 error, and you haven't specified an error document when you enabled the website, then a default 404 page is returned to the user.
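For reference, here's a minimal sketch of enabling the website endpoint and setting both the index and error documents with Azure PowerShell. It assumes the Az.Storage module; the resource names and document names are placeholders.

```powershell
# Sketch (assumes Az.Storage; resource and file names are placeholders):
# enable static website hosting and set the index and 404 error documents.
$ctx = (Get-AzStorageAccount -ResourceGroupName "<resource-group>" `
        -Name "<storage-account>").Context

Enable-AzStorageStaticWebsite -Context $ctx `
    -IndexDocument "index.html" -ErrorDocument404Path "404.html"
```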
> [!NOTE] > [Cross-Origin Resource Sharing (CORS) support for Azure Storage](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) is not supported with static website. ### Secondary endpoints
-If you set up [redundancy in a secondary region](../common/storage-redundancy.md#redundancy-in-a-secondary-region), you can also access website content by using a secondary endpoint. Because data is replicated to secondary regions asynchronously, the files that are available at the secondary endpoint aren't always in sync with the files that are available on the primary endpoint.
+If you set up [redundancy in a secondary region](../common/storage-redundancy.md#redundancy-in-a-secondary-region), you can also access website content by using a secondary endpoint. Data is replicated to secondary regions asynchronously. Therefore, the files that are available at the secondary endpoint aren't always in sync with the files that are available on the primary endpoint.
## Impact of setting the access level on the web container
-You can modify the public access level of the **$web** container, but this has no impact on the primary static website endpoint because these files are served through anonymous access requests. That means public (read-only) access to all files.
+You can modify the public access level of the **$web** container, but making this modification has no impact on the primary static website endpoint because these files are served through anonymous access requests. That means public (read-only) access to all files.
The following screenshot shows the public access level setting in the Azure portal: ![Screenshot showing how to set public access level in the portal](./media/anonymous-read-access-configure/configure-public-access-container.png)
-While the primary static website endpoint is not affected, a change to the public access level does impact the primary blob service endpoint.
+While the primary static website endpoint isn't affected, a change to the public access level does impact the primary blob service endpoint.
For example, if you change the public access level of the **$web** container from **Private (no anonymous access)** to **Blob (anonymous read access for blobs only)**, then the level of public access to the primary static website endpoint `https://contosoblobaccount.z22.web.core.windows.net/https://docsupdatetracker.net/index.html` doesn't change. However, the public access to the primary blob service endpoint `https://contosoblobaccount.blob.core.windows.net/$web/https://docsupdatetracker.net/index.html` does change from private to public. Now users can open that file by using either of these two endpoints.
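As an illustration of that change, the sketch below switches the **$web** container from private to blob-level read access. It assumes the Az.Storage module and placeholder resource names; setting `-Permission Off` reverses the change.

```powershell
# Sketch (assumes Az.Storage; resource names are placeholders): change the
# public access level of the $web container from private to blob-level read.
$ctx = (Get-AzStorageAccount -ResourceGroupName "<resource-group>" `
        -Name "<storage-account>").Context

Set-AzStorageContainerAcl -Name '$web' -Permission Blob -Context $ctx
```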
-Disabling public access on a storage account does not affect static websites that are hosted in that storage account. For more information, see [Configure anonymous public read access for containers and blobs](anonymous-read-access-configure.md).
+Disabling public access on a storage account doesn't affect static websites that are hosted in that storage account. For more information, see [Configure anonymous public read access for containers and blobs](anonymous-read-access-configure.md).
## Mapping a custom domain to a static website URL You can make your static website available via a custom domain.
-It's easier to enable HTTP access for your custom domain, because Azure Storage natively supports it. To enable HTTPS, you'll have to use Azure CDN because Azure Storage does not yet natively support HTTPS with custom domains. see [Map a custom domain to an Azure Blob Storage endpoint](storage-custom-domain-name.md) for step-by-step guidance.
+It's easier to enable HTTP access for your custom domain, because Azure Storage natively supports it. To enable HTTPS, you'll have to use Azure CDN because Azure Storage doesn't yet natively support HTTPS with custom domains. See [Map a custom domain to an Azure Blob Storage endpoint](storage-custom-domain-name.md) for step-by-step guidance.
If the storage account is configured to [require secure transfer](../common/storage-require-secure-transfer.md) over HTTPS, then users must use the HTTPS endpoint.
If you want to use headers to control caching, see [Control Azure CDN caching be
## Multi-region website hosting
-If you plan to host a website in multiple geographies, we recommend that you use a [Content Delivery Network](../../cdn/index.yml) for regional caching. Use [Azure Front Door](../../frontdoor/index.yml) if you want to serve different content in each region. It also provides failover capabilities. [Azure Traffic Manager](../../traffic-manager/index.yml) is not recommended if you plan to use a custom domain. Issues can arise because of how Azure Storage verifies custom domain names.
+If you plan to host a website in multiple geographies, we recommend that you use a [Content Delivery Network](../../cdn/index.yml) for regional caching. Use [Azure Front Door](../../frontdoor/index.yml) if you want to serve different content in each region. It also provides failover capabilities. [Azure Traffic Manager](../../traffic-manager/index.yml) isn't recommended if you plan to use a custom domain. Issues can arise because of how Azure Storage verifies custom domain names.
## Permissions
No. A static website only supports anonymous public read access for files in the
You can configure a [custom domain](./static-website-content-delivery-network.md) with a static website by using [Azure Content Delivery Network (Azure CDN)](./storage-custom-domain-name.md#map-a-custom-domain-with-https-enabled). Azure CDN provides consistent low latencies to your website from anywhere in the world.
-##### How do I use a custom SSL certificate with a static website?
+##### How do I use a custom Secure Sockets Layer (SSL) certificate with a static website?
You can configure a [custom SSL](./static-website-content-delivery-network.md) certificate with a static website by using [Azure CDN](./storage-custom-domain-name.md#map-a-custom-domain-with-https-enabled). Azure CDN provides consistent low latencies to your website from anywhere in the world.
You can configure the host header for a static website by using [Azure CDN - Ver
##### Why am I getting an HTTP 404 error from a static website?
-This can happen if you refer to a file name by using an incorrect case. For example: `Index.html` instead of `https://docsupdatetracker.net/index.html`. File names and extensions in the url of a static website are case-sensitive even though they're served over HTTP. This can also happen if your Azure CDN endpoint is not yet provisioned. Wait up to 90 minutes after you provision a new Azure CDN for the propagation to complete.
+A 404 error can happen if you refer to a file name by using an incorrect case. For example: `Index.html` instead of `https://docsupdatetracker.net/index.html`. File names and extensions in the URL of a static website are case-sensitive even though they're served over HTTP. This can also happen if your Azure CDN endpoint isn't yet provisioned. Wait up to 90 minutes after you provision a new Azure CDN for the propagation to complete.
##### Why isn't the root directory of the website redirecting to the default index page?
storage Storage Quickstart Blobs Dotnet Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-dotnet-legacy.md
- Title: "Quickstart: Azure Blob Storage client library for .NET"
-description: In this quickstart, you learn how to use the Azure Blob Storage client library for .NET to create a container and a blob in Blob (object) storage. Next, you learn how to download the blob to your local computer, and how to list all of the blobs in a container.
-- Previously updated : 07/24/2020------
-# Quickstart: Azure Blob Storage client library v11 for .NET
-
-Get started with the Azure Blob Storage client library v11 for .NET. Azure Blob Storage is Microsoft's object storage solution for the cloud. Follow steps to install the package and try out example code for basic tasks. Blob storage is optimized for storing massive amounts of unstructured data.
-
-> [!NOTE]
-> This quickstart uses a legacy version of the Azure Blob Storage client library. To get started with the latest version, see [Quickstart: Azure Blob Storage client library v12 for .NET](storage-quickstart-blobs-dotnet.md).
-
-Use the Azure Blob Storage client library for .NET to:
--- Create a container-- Set permissions on a container-- Create a blob in Azure Storage-- Download the blob to your local computer-- List all of the blobs in a container-- Delete a container-
-Additional resources:
--- [API reference documentation](/dotnet/api/overview/azure/storage)-- [Library source code](https://github.com/Azure/azure-storage-net/tree/master/Blob)-- [Package (NuGet)](https://www.nuget.org/packages/Microsoft.Azure.Storage.Blob/)-- [Samples](/samples/browse/?products=azure-blob-storage)-
-## Prerequisites
--- Azure subscription - [create one for free](https://azure.microsoft.com/free/)-- Azure Storage account - [create a storage account](../common/storage-account-create.md)-- Current [.NET Core SDK](https://dotnet.microsoft.com/download/dotnet-core) for your operating system. Be sure to get the SDK and not the runtime.-
-## Setting up
-
-This section walks you through preparing a project to work with the Azure Blob Storage client library for .NET.
-
-### Create the project
-
-First, create a .NET Core application named *blob-quickstart*.
-
-1. In a console window (such as cmd, PowerShell, or Bash), use the `dotnet new` command to create a new console app with the name *blob-quickstart*. This command creates a simple "Hello World" C# project with a single source file: *Program.cs*.
-
- ```console
- dotnet new console -n blob-quickstart
- ```
-
-2. Switch to the newly created *blob-quickstart* folder and build the app to verify that all is well.
-
- ```console
- cd blob-quickstart
- ```
-
- ```console
- dotnet build
- ```
-
-The expected output from the build should look something like this:
-
-```output
-C:\QuickStarts\blob-quickstart> dotnet build
-Microsoft (R) Build Engine version 16.0.450+ga8dc7f1d34 for .NET Core
-Copyright (C) Microsoft Corporation. All rights reserved.
-
- Restore completed in 44.31 ms for C:\QuickStarts\blob-quickstart\blob-quickstart.csproj.
- blob-quickstart -> C:\QuickStarts\blob-quickstart\bin\Debug\netcoreapp2.1\blob-quickstart.dll
-
-Build succeeded.
- 0 Warning(s)
- 0 Error(s)
-
-Time Elapsed 00:00:03.08
-```
-
-### Install the package
-
-While still in the application directory, install the Azure Blob Storage client library for .NET package by using the `dotnet add package` command.
-
-```console
-dotnet add package Microsoft.Azure.Storage.Blob
-```
-
-### Set up the app framework
-
-From the project directory:
-
-1. Open the *Program.cs* file in your editor
-2. Remove the `Console.WriteLine` statement
-3. Add `using` directives
-4. Create a `ProcessAsync` method where the main code for the example will reside
-5. Asynchronously call the `ProcessAsync` method from `Main`
-
-Here's the code:
-
-```csharp
-using System;
-using System.IO;
-using System.Threading.Tasks;
-using Microsoft.Azure.Storage;
-using Microsoft.Azure.Storage.Blob;
-
-namespace blob_quickstart
-{
- class Program
- {
- public static async Task Main()
- {
- Console.WriteLine("Azure Blob Storage - .NET quickstart sample\n");
-
- await ProcessAsync();
-
- Console.WriteLine("Press any key to exit the sample application.");
- Console.ReadLine();
- }
-
- private static async Task ProcessAsync()
- {
- }
- }
-}
-```
-
-### Copy your credentials from the Azure portal
-
-When the sample application makes a request to Azure Storage, it must be authorized. To authorize a request, add your storage account credentials to the application as a connection string. View your storage account credentials by following these steps:
-
-1. Navigate to the [Azure portal](https://portal.azure.com).
-2. Locate your storage account.
-3. In the **Settings** section of the storage account overview, select **Access keys**. Here, you can view your account access keys and the complete connection string for each key.
-4. Find the **Connection string** value under **key1**, and select the **Copy** button to copy the connection string. You will add the connection string value to an environment variable in the next step.
-
- ![Screenshot showing how to copy a connection string from the Azure portal](../../../includes/media/storage-copy-connection-string-portal/portal-connection-string.png)
-
-### Configure your storage connection string
-
-After you have copied your connection string, write it to a new environment variable on the local machine running the application. To set the environment variable, open a console window, and follow the instructions for your operating system. Replace `<yourconnectionstring>` with your actual connection string.
-
-#### Windows
-
-```cmd
-setx AZURE_STORAGE_CONNECTION_STRING "<yourconnectionstring>"
-```
-
-After you add the environment variable in Windows, you must start a new instance of the command window.
-
-#### Linux
-
-```bash
-export AZURE_STORAGE_CONNECTION_STRING="<yourconnectionstring>"
-```
-
-#### MacOS
-
-```bash
-export AZURE_STORAGE_CONNECTION_STRING="<yourconnectionstring>"
-```
-
-After you add the environment variable, restart any running programs that will need to read the environment variable. For example, restart your development environment or editor before continuing.
-
-## Object model
-
-Azure Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data is data that does not adhere to a particular data model or definition, such as text or binary data. Blob storage offers three types of resources:
--- The storage account.-- A container in the storage account-- A blob in a container-
-The following diagram shows the relationship between these resources.
-
-![Diagram of Blob storage architecture](./media/storage-quickstart-blobs-dotnet/blob1.png)
-
-Use the following .NET classes to interact with these resources:
--- [CloudStorageAccount](/dotnet/api/microsoft.azure.storage.cloudstorageaccount): The `CloudStorageAccount` class represents your Azure storage account. Use this class to authorize access to Blob storage using your account access keys.-- [CloudBlobClient](/dotnet/api/microsoft.azure.storage.blob.cloudblobclient): The `CloudBlobClient` class provides a point of access to the Blob service in your code.-- [CloudBlobContainer](/dotnet/api/microsoft.azure.storage.blob.cloudblobcontainer): The `CloudBlobContainer` class represents a blob container in your code.-- [CloudBlockBlob](/dotnet/api/microsoft.azure.storage.blob.cloudblockblob): The `CloudBlockBlob` object represents a block blob in your code. Block blobs are made up of blocks of data that can be managed individually.-
-## Code examples
-
-These example code snippets show you how to perform the following with the Azure Blob Storage client library for .NET:
-
- - [Authenticate the client](#authenticate-the-client)
- - [Create a container](#create-a-container)
- - [Set permissions on a container](#set-permissions-on-a-container)
- - [Upload blobs to a container](#upload-blobs-to-a-container)
- - [List the blobs in a container](#list-the-blobs-in-a-container)
- - [Download blobs](#download-blobs)
- - [Delete a container](#delete-a-container)
-
-### Authenticate the client
-
-The code below checks that the environment variable contains a connection string that can be parsed to create a [CloudStorageAccount](/dotnet/api/microsoft.azure.storage.cloudstorageaccount) object pointing to the storage account. To check that the connection string is valid, use the [TryParse](/dotnet/api/microsoft.azure.storage.cloudstorageaccount.tryparse) method. If `TryParse` is successful, it initializes the `storageAccount` variable and returns `true`.
-
-Add this code inside the `ProcessAsync` method:
-
-```csharp
-// Retrieve the connection string for use with the application. The storage
-// connection string is stored in an environment variable on the machine
-// running the application called AZURE_STORAGE_CONNECTION_STRING. If the
-// environment variable is created after the application is launched in a
-// console or with Visual Studio, the shell or application needs to be closed
-// and reloaded to take the environment variable into account.
-string storageConnectionString = Environment.GetEnvironmentVariable("AZURE_STORAGE_CONNECTION_STRING");
-
-// Check whether the connection string can be parsed.
-CloudStorageAccount storageAccount;
-if (CloudStorageAccount.TryParse(storageConnectionString, out storageAccount))
-{
- // If the connection string is valid, proceed with operations against Blob
- // storage here.
- // ADD OTHER OPERATIONS HERE
-}
-else
-{
- // Otherwise, let the user know that they need to define the environment variable.
- Console.WriteLine(
- "A connection string has not been defined in the system environment variables. " +
- "Add an environment variable named 'AZURE_STORAGE_CONNECTION_STRING' with your storage " +
- "connection string as a value.");
- Console.WriteLine("Press any key to exit the application.");
- Console.ReadLine();
-}
-```
-
-> [!NOTE]
-> To perform the rest of the operations in this article, replace `// ADD OTHER OPERATIONS HERE` in the code above with the code snippets in the following sections.
-
-### Create a container
-
-To create the container, first create an instance of the [CloudBlobClient](/dotnet/api/microsoft.azure.storage.blob.cloudblobclient) object, which points to Blob storage in your storage account. Next, create an instance of the [CloudBlobContainer](/dotnet/api/microsoft.azure.storage.blob.cloudblobcontainer) object, then create the container.
-
-In this case, the code calls the [CreateAsync](/dotnet/api/microsoft.azure.storage.blob.cloudblobcontainer.createasync) method to create the container. A GUID value is appended to the container name to ensure that it is unique. In a production environment, it's often preferable to use the [CreateIfNotExistsAsync](/dotnet/api/microsoft.azure.storage.blob.cloudblobcontainer.createifnotexistsasync) method to create a container only if it does not already exist.
-
-> [!IMPORTANT]
-> Container names must be lowercase. For more information about naming containers and blobs, see [Naming and Referencing Containers, Blobs, and Metadata](/rest/api/storageservices/naming-and-referencing-containers--blobs--and-metadata).
-
-```csharp
-// Create the CloudBlobClient that represents the
-// Blob storage endpoint for the storage account.
-CloudBlobClient cloudBlobClient = storageAccount.CreateCloudBlobClient();
-
-// Create a container called 'quickstartblobs' and
-// append a GUID value to it to make the name unique.
-CloudBlobContainer cloudBlobContainer =
- cloudBlobClient.GetContainerReference("quickstartblobs" +
- Guid.NewGuid().ToString());
-await cloudBlobContainer.CreateAsync();
-```
-
-### Set permissions on a container
-
-Set permissions on the container so that any blobs in the container are public. If a blob is public, it can be accessed anonymously by any client.
-
-```csharp
-// Set the permissions so the blobs are public.
-BlobContainerPermissions permissions = new BlobContainerPermissions
-{
- PublicAccess = BlobContainerPublicAccessType.Blob
-};
-await cloudBlobContainer.SetPermissionsAsync(permissions);
-```
-
-### Upload blobs to a container
-
-The following code snippet gets a reference to a `CloudBlockBlob` object by calling the [GetBlockBlobReference](/dotnet/api/microsoft.azure.storage.blob.cloudblobcontainer.getblockblobreference) method on the container created in the previous section. It then uploads the selected local file to the blob by calling the [UploadFromFileAsync](/dotnet/api/microsoft.azure.storage.blob.cloudblockblob.uploadfromfileasync) method. This method creates the blob if it doesn't already exist, and overwrites it if it does.
-
-```csharp
-// Create a file in your local MyDocuments folder to upload to a blob.
-string localPath = Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments);
-string localFileName = "QuickStart_" + Guid.NewGuid().ToString() + ".txt";
-string sourceFile = Path.Combine(localPath, localFileName);
-// Write text to the file.
-File.WriteAllText(sourceFile, "Hello, World!");
-
-Console.WriteLine("Temp file = {0}", sourceFile);
-Console.WriteLine("Uploading to Blob storage as blob '{0}'", localFileName);
-
-// Get a reference to the blob address, then upload the file to the blob.
-// Use the value of localFileName for the blob name.
-CloudBlockBlob cloudBlockBlob = cloudBlobContainer.GetBlockBlobReference(localFileName);
-await cloudBlockBlob.UploadFromFileAsync(sourceFile);
-```
-
-### List the blobs in a container
-
-List the blobs in the container by using the [ListBlobsSegmentedAsync](/dotnet/api/microsoft.azure.storage.blob.cloudblobcontainer.listblobssegmentedasync) method. In this case, only one blob has been added to the container, so the listing operation returns just that one blob.
-
-If there are too many blobs to return in one call (by default, more than 5000), then the `ListBlobsSegmentedAsync` method returns a segment of the total result set and a continuation token. To retrieve the next segment of blobs, you provide the continuation token returned by the previous call, and so on, until the continuation token is null. A null continuation token indicates that all of the blobs have been retrieved. The code shows how to use the continuation token for the sake of best practices.
-
-```csharp
-// List the blobs in the container.
-Console.WriteLine("List blobs in container.");
-BlobContinuationToken blobContinuationToken = null;
-do
-{
- var results = await cloudBlobContainer.ListBlobsSegmentedAsync(null, blobContinuationToken);
- // Get the value of the continuation token returned by the listing call.
- blobContinuationToken = results.ContinuationToken;
- foreach (IListBlobItem item in results.Results)
- {
- Console.WriteLine(item.Uri);
- }
-} while (blobContinuationToken != null); // Loop while the continuation token is not null.
-
-```
-
-### Download blobs
-
-Download the blob created previously to your local file system by using the [DownloadToFileAsync](/dotnet/api/microsoft.azure.storage.blob.cloudblob.downloadtofileasync) method. The example code adds a suffix of "_DOWNLOADED" to the blob name so that you can see both files in the local file system.
-
-```csharp
-// Download the blob to a local file, using the reference created earlier.
-// Append the string "_DOWNLOADED" before the .txt extension so that you
-// can see both files in MyDocuments.
-string destinationFile = sourceFile.Replace(".txt", "_DOWNLOADED.txt");
-Console.WriteLine("Downloading blob to {0}", destinationFile);
-await cloudBlockBlob.DownloadToFileAsync(destinationFile, FileMode.Create);
-```
-
-### Delete a container
-
-The following code cleans up the resources the app created by deleting the entire container using [CloudBlobContainer.DeleteAsync](/dotnet/api/microsoft.azure.storage.blob.cloudblobcontainer.deleteasync). You can also delete the local files if you like.
-
-```csharp
-Console.WriteLine("Press the 'Enter' key to delete the example files, " +
- "example container, and exit the application.");
-Console.ReadLine();
-// Clean up resources. This includes the container and the two temp files.
-Console.WriteLine("Deleting the container");
-if (cloudBlobContainer != null)
-{
- await cloudBlobContainer.DeleteIfExistsAsync();
-}
-Console.WriteLine("Deleting the source, and downloaded files");
-File.Delete(sourceFile);
-File.Delete(destinationFile);
-```
-
-## Run the code
-
-This app creates a test file in your local *MyDocuments* folder and uploads it to Blob storage. The example then lists the blobs in the container and downloads the file with a new name so that you can compare the old and new files.
-
-Navigate to your application directory, then build and run the application.
-
-```console
-dotnet build
-```
-
-```console
-dotnet run
-```
-
-The output of the app is similar to the following example:
-
-```output
-Azure Blob Storage - .NET Quickstart example
-
-Created container 'quickstartblobs33c90d2a-eabd-4236-958b-5cc5949e731f'
-
-Temp file = C:\Users\myusername\Documents\QuickStart_c5e7f24f-a7f8-4926-a9da-96
-97c748f4db.txt
-Uploading to Blob storage as blob 'QuickStart_c5e7f24f-a7f8-4926-a9da-9697c748f
-4db.txt'
-
-Listing blobs in container.
-https://storagesamples.blob.core.windows.net/quickstartblobs33c90d2a-eabd-4236-
-958b-5cc5949e731f/QuickStart_c5e7f24f-a7f8-4926-a9da-9697c748f4db.txt
-
-Downloading blob to C:\Users\myusername\Documents\QuickStart_c5e7f24f-a7f8-4926
--a9da-9697c748f4db_DOWNLOADED.txt-
-Press any key to delete the example files and example container.
-```
-
-When you press the **Enter** key, the application deletes the storage container and the files. Before you delete them, check your *MyDocuments* folder for the two files. You can open them and observe that they are identical. Copy the blob's URL from the console window and paste it into a browser to view the contents of the blob.
-
-After you've verified the files, press the **Enter** key to finish the demo and delete the test files.
-
-## Next steps
-
-In this quickstart, you learned how to upload, download, and list blobs using .NET.
-
-To learn how to create a web app that uploads an image to Blob storage, continue to:
-
-> [!div class="nextstepaction"]
-> [Upload and process an image](storage-upload-process-images.md)
--- To learn more about .NET Core, see [Get started with .NET in 10 minutes](https://dotnet.microsoft.com/learn/dotnet/hello-world-tutorial/intro).-- To explore a sample application that you can deploy from Visual Studio for Windows, see the [.NET Photo Gallery Web Application Sample with Azure Blob Storage](https://azure.microsoft.com/resources/samples/storage-blobs-dotnet-webapp/).
storage Storage Quickstart Blobs Java Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-java-legacy.md
- Title: "Quickstart: Azure Blob storage client library v8 for Java"
-description: Create a storage account and a container in object (Blob) storage. Then use the Azure Storage client library v8 for Java to upload a blob to Azure Storage, download a blob, and list the blobs in a container.
--- Previously updated : 01/19/2021-----
-# Quickstart: Manage blobs with Java v8 SDK
-
-In this quickstart, you learn to manage blobs by using Java. Blobs are objects that can hold large amounts of text or binary data, including images, documents, streaming media, and archive data. You'll upload, download, and list blobs. You'll also create, set permissions on, and delete containers.
-
-> [!NOTE]
-> This quickstart uses a legacy version of the Azure Blob storage client library. To get started with the latest version, see [Quickstart: Manage blobs with Java v12 SDK](storage-quickstart-blobs-java.md).
-
-## Prerequisites
--- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).-- An Azure Storage account. [Create a storage account](../common/storage-account-create.md).-- An IDE that has Maven integration. This guide uses [Eclipse](https://www.eclipse.org/downloads/) with the "Eclipse IDE for Java Developers" configuration.-
-## Download the sample application
-
-The [sample application](https://github.com/Azure-Samples/storage-blobs-java-quickstart) is a basic console application.
-
-Use [git](https://git-scm.com/) to download a copy of the application to your development environment.
-
-```bash
-git clone https://github.com/Azure-Samples/storage-blobs-java-quickstart.git
-```
-
-This command clones the repository to your local git folder. To open the project, launch Eclipse and close the Welcome screen. Select **File** then **Open Projects from File System**. Make sure **Detect and configure project natures** is checked. Select **Directory** then navigate to where you stored the cloned repository. Inside the cloned repository, select the **blobAzureApp** folder. Make sure the **blobAzureApp** project appears as an Eclipse project, then select **Finish**.
-
-Once the project completes importing, open **AzureApp.java** (located in **blobQuickstart.blobAzureApp** inside of **src/main/java**), and replace the `accountname` and `accountkey` inside of the `storageConnectionString` string. Then run the application. Specific instructions for completing these tasks are described in the following sections.
--
-## Configure your storage connection string
-
-In the application, you must provide the connection string for your storage account. Open the **AzureApp.java** file. Find the `storageConnectionString` variable and paste the connection string value that you copied in the previous section. Your `storageConnectionString` variable should look similar to the following code example:
-
-```java
-public static final String storageConnectionString =
-"DefaultEndpointsProtocol=https;" +
-"AccountName=<account-name>;" +
-"AccountKey=<account-key>";
-```
-
-## Run the sample
-
-This sample application creates a test file in your default directory (*C:\Users\<user>\AppData\Local\Temp*, for Windows users), uploads it to Blob storage, lists the blobs in the container, then downloads the file with a new name so you can compare the old and new files.
-
-Run the sample using Maven at the command line. Open a shell and navigate to **blobAzureApp** inside of your cloned directory. Then enter `mvn compile exec:java`.
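For example, assuming the repository was cloned into your current working directory:

```console
cd blobAzureApp
mvn compile exec:java
```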
-
-The following example shows the output if you were to run the application on Windows.
-
-```
-Azure Blob storage quick start sample
-Creating container: quickstartcontainer
-Creating a sample file at: C:\Users\<user>\AppData\Local\Temp\sampleFile514658495642546986.txt
-Uploading the sample file
-URI of blob is: https://myexamplesacct.blob.core.windows.net/quickstartcontainer/sampleFile514658495642546986.txt
-The program has completed successfully.
-Press the 'Enter' key while in the console to delete the sample files, example container, and exit the application.
-
-Deleting the container
-Deleting the source, and downloaded files
-```
-
-Before you continue, check your default directory (*C:\Users\<user>\AppData\Local\Temp*, for Windows users) for the sample file. Copy the URL for the blob out of the console window and paste it into a browser to view the contents of the file in Blob storage. If you compare the sample file in your directory with the contents stored in Blob storage, you will see that they are the same.
-
- > [!NOTE]
- > You can also use a tool such as the [Azure Storage Explorer](https://storageexplorer.com/?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) to view the files in Blob storage. Azure Storage Explorer is a free cross-platform tool that allows you to access your storage account information.
-
-After you've verified the files, press the **Enter** key to complete the demo and delete the test files. Now that you know what the sample does, open the **AzureApp.java** file to look at the code.
-
-## Understand the sample code
-
-Next, we walk through the sample code so that you can understand how it works.
-
-### Get references to the storage objects
-
-The first thing to do is create the references to the objects used to access and manage Blob storage. These objects build on each other -- each is used by the next one in the list.
--- Create an instance of the [CloudStorageAccount](/java/api/com.microsoft.azure.management.storage.storageaccount) object pointing to the storage account.-
- The **CloudStorageAccount** object is a representation of your storage account and it allows you to set and access storage account properties programmatically. Using the **CloudStorageAccount** object you can create an instance of the **CloudBlobClient**, which is necessary to access the blob service.
--- Create an instance of the **CloudBlobClient** object, which points to the [Blob service](/java/api/com.microsoft.azure.storage.blob.cloudblobclient) in your storage account.-
- The **CloudBlobClient** provides you a point of access to the blob service, allowing you to set and access Blob storage properties programmatically. Using the **CloudBlobClient** you can create an instance of the **CloudBlobContainer** object, which is necessary to create containers.
--- Create an instance of the [CloudBlobContainer](/java/api/com.microsoft.azure.storage.blob.cloudblobcontainer) object, which represents the container you are accessing. Use containers to organize your blobs like you use folders on your computer to organize your files.-
- Once you have the **CloudBlobContainer**, you can create an instance of the [CloudBlockBlob](/java/api/com.microsoft.azure.storage.blob.cloudblockblob) object that points to the specific blob you're interested in, and perform an upload, download, copy, or other operation.
-
-> [!IMPORTANT]
-> Container names must be lowercase. For more information about containers, see [Naming and Referencing Containers, Blobs, and Metadata](/rest/api/storageservices/naming-and-referencing-containers--blobs--and-metadata).
-
-### Create a container
-
-In this section, you create an instance of the objects, create a new container, and then set permissions on the container so the blobs are public and can be accessed with just a URL. The container is called **quickstartcontainer**.
-
-This example uses [CreateIfNotExists](/java/api/com.microsoft.azure.storage.blob.cloudblobcontainer.createifnotexists) because we want to create a new container each time the sample is run. In a production environment, where you use the same container throughout an application, it's better practice to only call **CreateIfNotExists** once. Alternatively, you can create the container ahead of time so you don't need to create it in the code.
-
-```java
-// Parse the connection string and create a blob client to interact with Blob storage
-storageAccount = CloudStorageAccount.parse(storageConnectionString);
-blobClient = storageAccount.createCloudBlobClient();
-container = blobClient.getContainerReference("quickstartcontainer");
-
-// Create the container if it does not exist with public access.
-System.out.println("Creating container: " + container.getName());
-container.createIfNotExists(BlobContainerPublicAccessType.CONTAINER, new BlobRequestOptions(), new OperationContext());
-```
-
-### Upload blobs to the container
-
-To upload a file to a block blob, get a reference to the blob in the target container. Once you have the blob reference, you can upload data to it by using [CloudBlockBlob.Upload](/java/api/com.microsoft.azure.storage.blob.cloudblockblob.upload). This operation creates the blob if it doesn't already exist, or overwrites the blob if it already exists.
-
-The sample code creates a local file to be used for the upload and download, storing the file to be uploaded as **source** and the name of the blob in **blob**. The following example uploads the file to your container called **quickstartcontainer**.
-
-```java
-//Creating a sample file
-sourceFile = File.createTempFile("sampleFile", ".txt");
-System.out.println("Creating a sample file at: " + sourceFile.toString());
-Writer output = new BufferedWriter(new FileWriter(sourceFile));
-output.write("Hello Azure!");
-output.close();
-
-//Getting a blob reference
-CloudBlockBlob blob = container.getBlockBlobReference(sourceFile.getName());
-
-//Creating blob and uploading file to it
-System.out.println("Uploading the sample file ");
-blob.uploadFromFile(sourceFile.getAbsolutePath());
-```
-
-There are several `upload` methods that you can use with Blob storage, including [upload](/java/api/com.microsoft.azure.storage.blob.cloudblockblob.upload), [uploadBlock](/java/api/com.microsoft.azure.storage.blob.cloudblockblob.uploadblock), [uploadFullBlob](/java/api/com.microsoft.azure.storage.blob.cloudblockblob.uploadfullblob), [uploadStandardBlobTier](/java/api/com.microsoft.azure.storage.blob.cloudblockblob.uploadstandardblobtier), and [uploadText](/java/api/com.microsoft.azure.storage.blob.cloudblockblob.uploadtext). For example, if you have a string, you can use the `uploadText` method rather than the `upload` method.
-
-Block blobs can be any type of text or binary file. Page blobs are primarily used for the VHD files that back IaaS VMs. Use append blobs for logging, such as when you want to write to a file and then keep adding more information. Most objects stored in Blob storage are block blobs.
-
-### List the blobs in a container
-
-You can get a list of files in the container using [CloudBlobContainer.ListBlobs](/java/api/com.microsoft.azure.storage.blob.cloudblobcontainer.listblobs). The following code retrieves the list of blobs, then loops through them, showing the URIs of the blobs found. You can copy the URI from the command window and paste it into a browser to view the file.
-
-```java
-//Listing contents of container
-for (ListBlobItem blobItem : container.listBlobs()) {
- System.out.println("URI of blob is: " + blobItem.getUri());
-}
-```
-
-### Download blobs
-
-Download blobs to your local disk using [CloudBlob.DownloadToFile](/java/api/com.microsoft.azure.storage.blob.cloudblob.downloadtofile).
-
-The following code downloads the blob uploaded in a previous section, adding a suffix of "_DOWNLOADED" to the blob name so you can see both files on local disk.
-
-```java
-// Download blob. In most cases, you would have to retrieve the reference
-// to cloudBlockBlob here. However, we created that reference earlier, and
-// haven't changed the blob we're interested in, so we can reuse it.
-// Here we are creating a new file to download to. Alternatively you can also pass in the path as a string into downloadToFile method: blob.downloadToFile("/path/to/new/file").
-downloadedFile = new File(sourceFile.getParentFile(), "downloadedFile.txt");
-blob.downloadToFile(downloadedFile.getAbsolutePath());
-```
-
-### Clean up resources
-
-If you no longer need the blobs that you have uploaded, you can delete the entire container using [CloudBlobContainer.DeleteIfExists](/java/api/com.microsoft.azure.storage.blob.cloudblobcontainer.deleteifexists). This method also deletes the files in the container.
-
-```java
-try {
-    if (container != null)
-        container.deleteIfExists();
-} catch (StorageException ex) {
-    System.out.println(String.format("Service error. Http code: %d and error code: %s", ex.getHttpStatusCode(), ex.getErrorCode()));
-}
-
-System.out.println("Deleting the source, and downloaded files");
-
-if (downloadedFile != null)
-    downloadedFile.deleteOnExit();
-
-if (sourceFile != null)
-    sourceFile.deleteOnExit();
-```
-
-## Next steps
-
-In this article, you learned how to transfer files between a local disk and Azure Blob storage using Java. To learn more about working with Java, continue to our GitHub source code repository.
-
-> [!div class="nextstepaction"]
-> [Java API Reference](/java/api/overview/azure/storage?view=azure-java-legacy&preserve-view=true)
-> [Code Samples for Java](../common/storage-samples-java.md)
storage Storage Quickstart Blobs Python Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-python-legacy.md
- Title: 'Quickstart: Azure Blob storage client library v2.1 for Python'
-description: In this quickstart, you create a storage account and a container in object (Blob) storage. Then you use the storage client library v2.1 for Python to upload a blob to Azure Storage, download a blob, and list the blobs in a container.
-- Previously updated : 07/24/2020------
-# Quickstart: Manage blobs with Python v2.1 SDK
-
-In this quickstart, you learn to manage blobs by using Python. Blobs are objects that can hold large amounts of text or binary data, including images, documents, streaming media, and archive data. You'll upload, download, and list blobs, and you'll create and delete containers.
-
-> [!NOTE]
-> This quickstart uses a legacy version of the Azure Blob storage client library. To get started with the latest version, see [Quickstart: Manage blobs with Python v12 SDK](storage-quickstart-blobs-python.md).
-
-## Prerequisites
--- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).-- An Azure Storage account. [Create a storage account](../common/storage-account-create.md).-- [Python](https://www.python.org/downloads/).-- [Azure Storage SDK for Python](https://github.com/Azure/azure-sdk-for-python).-
-## Download the sample application
-
-The [sample application](https://github.com/Azure-Samples/storage-blobs-python-quickstart.git) in this quickstart is a basic Python application.
-
-Use the following [git](https://git-scm.com/) command to download the application to your development environment.
-
-```bash
-git clone https://github.com/Azure-Samples/storage-blobs-python-quickstart.git
-```
-
-To review the Python program, open the *example.py* file at the root of the repository.
--
-## Configure your storage connection string
-
-In the application, provide your storage account name and account key to create a `BlockBlobService` object.
-
-1. Open the *example.py* file from the Solution Explorer in your IDE.
-
-1. Replace the `accountname` and `accountkey` values with your storage account name and key:
-
- ```python
- block_blob_service = BlockBlobService(
- account_name='accountname', account_key='accountkey')
- ```
-
-1. Save and close the file.
-
-## Run the sample
-
-The sample program creates a test file in your *Documents* folder, uploads the file to Blob storage, lists the blobs in the container, and downloads the file with a new name.
-
-1. Install the dependencies:
-
- ```console
- pip install azure-storage-blob==2.1.0
- ```
-
-1. Go to the sample application:
-
- ```console
- cd storage-blobs-python-quickstart
- ```
-
-1. Run the sample:
-
- ```console
- python example.py
- ```
-
- You'll see messages similar to the following output:
-
- ```output
- Temp file = C:\Users\azureuser\Documents\QuickStart_9f4ed0f9-22d3-43e1-98d0-8b2c05c01078.txt
-
- Uploading to Blob storage as blobQuickStart_9f4ed0f9-22d3-43e1-98d0-8b2c05c01078.txt
-
- List blobs in the container
- Blob name: QuickStart_9f4ed0f9-22d3-43e1-98d0-8b2c05c01078.txt
-
- Downloading blob to C:\Users\azureuser\Documents\QuickStart_9f4ed0f9-22d3-43e1-98d0-8b2c05c01078_DOWNLOADED.txt
- ```
-
-1. Before you continue, go to your *Documents* folder and check for the two files.
-
- - *QuickStart_\<universally-unique-identifier\>*
- - *QuickStart_\<universally-unique-identifier\>_DOWNLOADED*
-
-1. You can open them and see they're the same.
-
- You can also use a tool like the [Azure Storage Explorer](https://storageexplorer.com). It's good for viewing the files in Blob storage. Azure Storage Explorer is a free cross-platform tool that lets you access your storage account info.
-
-1. After you've looked at the files, press any key to finish the sample and delete the test files.
-
-## Learn about the sample code
-
-Now that you know what the sample does, open the *example.py* file to look at the code.
-
-### Get references to the storage objects
-
-In this section, you instantiate the objects, create a new container, and then set permissions on the container so the blobs are public. You'll call the container `quickstartblobs`.
-
-```python
-# Create the BlockBlobService that is used to call the Blob service for the storage account.
-block_blob_service = BlockBlobService(
- account_name='accountname', account_key='accountkey')
-
-# Create a container called 'quickstartblobs'.
-container_name = 'quickstartblobs'
-block_blob_service.create_container(container_name)
-
-# Set the permission so the blobs are public.
-block_blob_service.set_container_acl(
- container_name, public_access=PublicAccess.Container)
-```
-
-First, you create the references to the objects used to access and manage Blob storage. These objects build on each other, and each is used by the next one in the list.
--- Instantiate the **BlockBlobService** object, which points to the Blob service in your storage account.--- Work with a container by passing its name to **BlockBlobService** methods such as `create_container` and `set_container_acl`. The system uses containers to organize your blobs like you use folders on your computer to organize your files.-
-Once you have a container, pass the container name and a blob name to **BlockBlobService** methods such as `create_blob_from_path` and `get_blob_to_path` to upload, download, and copy the specific blob that you're interested in.
-
-> [!IMPORTANT]
-> Container names must be lowercase. For more information about container and blob names, see [Naming and Referencing Containers, Blobs, and Metadata](/rest/api/storageservices/naming-and-referencing-containers--blobs--and-metadata).
-
-### Upload blobs to the container
-
-Blob storage supports block blobs, append blobs, and page blobs. Block blobs can be as large as 4.7 TB, and can be anything from Excel spreadsheets to large video files. You can use append blobs for logging when you want to write to a file and then keep adding more information. Page blobs are primarily used for the Virtual Hard Disk (VHD) files that back infrastructure as a service virtual machines (IaaS VMs). Block blobs are the most commonly used. This quickstart uses block blobs.
-
-To upload a file to a blob, get the full file path by joining the directory name with the file name on your local drive. You can then upload the file to the specified path using the `create_blob_from_path` method.
-
-The sample code creates a local file to use for the upload and download, storing the path of the file to upload as *full_path_to_file* and the name of the blob as *local_file_name*. This example uploads the file to your container called `quickstartblobs`:
-
-```python
-# Create a file in Documents to test the upload and download.
-local_path = os.path.expanduser("~\Documents")
-local_file_name = "QuickStart_" + str(uuid.uuid4()) + ".txt"
-full_path_to_file = os.path.join(local_path, local_file_name)
-
-# Write text to the file.
-file = open(full_path_to_file, 'w')
-file.write("Hello, World!")
-file.close()
-
-print("Temp file = " + full_path_to_file)
-print("\nUploading to Blob storage as blob" + local_file_name)
-
-# Upload the created file, use local_file_name for the blob name.
-block_blob_service.create_blob_from_path(
- container_name, local_file_name, full_path_to_file)
-```
-
-There are several upload methods that you can use with Blob storage. For example, if you have a memory stream, you can use the `create_blob_from_stream` method rather than `create_blob_from_path`.
-
-### List the blobs in a container
-
-The following code creates a `generator` for the `list_blobs` method. The code loops through the list of blobs in the container and prints their names to the console.
-
-```python
-# List the blobs in the container.
-print("\nList blobs in the container")
-generator = block_blob_service.list_blobs(container_name)
-for blob in generator:
- print("\t Blob name: " + blob.name)
-```
-
-### Download the blobs
-
-Download blobs to your local disk using the `get_blob_to_path` method.
-The following code downloads the blob you uploaded previously. The system appends *_DOWNLOADED* to the blob name so you can see both files on your local disk.
-
-```python
-# Download the blob(s).
-# Add '_DOWNLOADED' as prefix to '.txt' so you can see both files in Documents.
-full_path_to_file2 = os.path.join(local_path, local_file_name.replace(
- '.txt', '_DOWNLOADED.txt'))
-print("\nDownloading blob to " + full_path_to_file2)
-block_blob_service.get_blob_to_path(
- container_name, local_file_name, full_path_to_file2)
-```
-
-### Clean up resources
-
-If you no longer need the blobs uploaded in this quickstart, you can delete the entire container using the `delete_container` method. To delete individual files instead, use the `delete_blob` method.
-
-```python
-# Clean up resources. This includes the container and the temp files.
-block_blob_service.delete_container(container_name)
-os.remove(full_path_to_file)
-os.remove(full_path_to_file2)
-```
-
-## Resources for developing Python applications with blobs
-
-For more about Python development with Blob storage, see these additional resources:
-
-### Binaries and source code
--- View, download, and install the [Python client library source code](https://github.com/Azure/azure-storage-python) for Azure Storage on GitHub.-
-### Client library reference and samples
--- For more about the Python client library, see the [Azure Storage libraries for Python](/python/api/overview/azure/storage).-- Explore [Blob storage samples](https://azure.microsoft.com/resources/samples/?sort=0&service=storage&platform=python&term=blob) written using the Python client library.-
-## Next steps
-
-In this quickstart, you learned how to transfer files between a local disk and Azure Blob storage using Python.
-
-For more about the Storage Explorer and Blobs, see [Manage Azure Blob storage resources with Storage Explorer](../../vs-azure-tools-storage-explorer-blobs.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json).
storage Storage Quickstart Blobs Xamarin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-xamarin.md
- Title: "Quickstart: Use Xamarin to manage blobs with Azure Blob Storage library v12"
-description: Learn to use Xamarin with the Azure Blob Storage client library v12 to create a container, upload or download a blob, list blobs, and delete a container.
-- Previously updated : 05/09/2022------
-# Quickstart: Use Azure Blob Storage client library v12 with Xamarin
-
-This quickstart gets you started using Xamarin with the Azure Blob Storage client library v12. The Xamarin mobile development framework creates C# apps for iOS, Android, and UWP from one .NET codebase.
-
-Blob Storage is optimized for storing massive amounts of unstructured data, like text or binary data, that doesn't fit a particular data model or definition. Blob Storage has three types of resources: a storage account, containers in the storage account, and blobs in the containers.
-
-The following diagram shows the relationship between these types of resources:
-
-![Diagram of Blob Storage architecture.](./media/storage-blobs-introduction/blob1.png)
-
-You can use the following .NET classes to interact with Blob Storage resources:
--- [BlobServiceClient](/dotnet/api/azure.storage.blobs.blobserviceclient) manipulates Storage resources and blob containers.-- [BlobContainerClient](/dotnet/api/azure.storage.blobs.blobcontainerclient) manipulates Storage containers and their blobs.-- [BlobClient](/dotnet/api/azure.storage.blobs.blobclient) manipulates Storage blobs.-- [BlobDownloadInfo](/dotnet/api/azure.storage.blobs.models.blobdownloadinfo) represents the properties and content returned from downloading a blob.-
-In this quickstart, you use Xamarin with the Azure Blob Storage client library v12 to:
--- [Create a container](#create-a-container)-- [Upload blobs to a container](#upload-blobs-to-a-container)-- [List the blobs in a container](#list-the-blobs-in-a-container)-- [Download blobs](#download-blobs)-- [Delete a container](#delete-a-container)-
-## Prerequisites
--- Azure subscription. [Create one for free](https://azure.microsoft.com/free).-- Azure Storage account. [Create a storage account](../common/storage-account-create.md).-- Visual Studio with the [Mobile Development for .NET](/xamarin/get-started/installation/?pivots=windows) workload installed, or [Visual Studio for Mac](/visualstudio/mac/installation?view=vsmac-2019&preserve-view=true)-
-## Visual Studio setup
-
-This section walks through preparing a Visual Studio Xamarin project to work with the Azure Blob Storage client library v12.
-
-1. In Visual Studio, create a Blank Forms App named *BlobQuickstartV12*.
-1. In Visual Studio **Solution Explorer**, right-click the solution and select **Manage NuGet Packages for Solution**.
-1. Search for **Azure.Storage.Blobs**, and install the latest stable version into all projects in your solution.
-1. In **Solution Explorer**, from the **BlobQuickstartV12** directory, open the *MainPage.xaml* file for editing.
-1. In the code editor, replace everything between the `<ContentPage></ContentPage>` elements with the following code:
-
- ```xaml
- <StackLayout HorizontalOptions="Center" VerticalOptions="Center">
-
- <Button x:Name="uploadButton" Text="Upload Blob" Clicked="Upload_Clicked" IsEnabled="False"/>
- <Button x:Name="listButton" Text="List Blobs" Clicked="List_Clicked" IsEnabled="False" />
- <Button x:Name="downloadButton" Text="Download Blob" Clicked="Download_Clicked" IsEnabled="False" />
- <Button x:Name="deleteButton" Text="Delete Container" Clicked="Delete_Clicked" IsEnabled="False" />
-
- <Label Text="" x:Name="resultsLabel" HorizontalTextAlignment="Center" Margin="0,20,0,0" TextColor="Red" />
-
- </StackLayout>
- ```
-
-## Azure Storage connection
-
-To authorize requests to Azure Storage, you need to add your storage account credentials to your application as a connection string.
--
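As a minimal sketch, assuming you keep the connection string in a class-level field named `storageConnectionString` inside the `MainPage` class in *MainPage.xaml.cs* (the account name and key below are placeholders, not real credentials), the declaration might look like the following:

```csharp
// Placeholder values only: copy the real connection string from your storage
// account's "Access keys" page in the Azure portal.
const string storageConnectionString =
    "DefaultEndpointsProtocol=https;" +
    "AccountName=<account-name>;" +
    "AccountKey=<account-key>;" +
    "EndpointSuffix=core.windows.net";
```

The class-level variables described in the next section go immediately after this field.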
-## Code examples
-
-The following example code snippets show you how to use the Blob Storage client library for .NET in a Xamarin.Forms app.
-
-### Create class level variables
-
-The following code declares several class-level variables that the samples use to communicate with Blob Storage. Add these lines to *MainPage.xaml.cs*, immediately after the storage account connection string you just added.
-
-```csharp
-string fileName = $"{Guid.NewGuid()}-temp.txt";
-
-BlobServiceClient client;
-BlobContainerClient containerClient;
-BlobClient blobClient;
-```
-
-### Create a container
-
-The code creates an instance of the [BlobServiceClient](/dotnet/api/azure.storage.blobs.blobserviceclient) class, then calls the [CreateBlobContainerAsync](/dotnet/api/azure.storage.blobs.blobserviceclient.createblobcontainerasync) method to create the container in your storage account.
-
-The code appends a GUID value to the container name to ensure that it's unique. For more information about naming containers and blobs, see [Name and reference containers, blobs, and metadata](/rest/api/storageservices/naming-and-referencing-containers--blobs--and-metadata).
-
-Add the following code to the *MainPage.xaml.cs* file:
-
-```csharp
-protected async override void OnAppearing()
-{
- string containerName = $"quickstartblobs{Guid.NewGuid()}";
-
- client = new BlobServiceClient(storageConnectionString);
- containerClient = await client.CreateBlobContainerAsync(containerName);
-
- resultsLabel.Text = "Container Created\n";
-
- blobClient = containerClient.GetBlobClient(fileName);
-
- uploadButton.IsEnabled = true;
-}
-```
-
-### Upload blobs to a container
-
-The following code snippet:
-
-1. Creates a `MemoryStream` of the text.
-1. Uploads the text to a blob by calling the [UploadBlobAsync](/dotnet/api/azure.storage.blobs.blobcontainerclient.uploadblobasync#Azure_Storage_Blobs_BlobContainerClient_UploadBlobAsync_System_String_System_IO_Stream_System_Threading_CancellationToken_) method of the [BlobContainerClient](/dotnet/api/azure.storage.blobs.blobcontainerclient) class. The code passes in both the filename and the `MemoryStream` of text. This method creates the blob if it doesn't already exist, and throws an exception if it does.
-
-Add the following code to the *MainPage.xaml.cs* file:
-
-```csharp
-async void Upload_Clicked(object sender, EventArgs e)
-{
- using MemoryStream memoryStream = new MemoryStream(Encoding.UTF8.GetBytes("Hello World!"));
-
- await containerClient.UploadBlobAsync(fileName, memoryStream);
-
- resultsLabel.Text += "Blob Uploaded\n";
-
- uploadButton.IsEnabled = false;
- listButton.IsEnabled = true;
-}
-```
-
-### List the blobs in a container
-
-This code lists the blobs in the container by calling the [GetBlobsAsync](/dotnet/api/azure.storage.blobs.blobcontainerclient.getblobsasync) method. You added only one blob to the container, so the listing operation returns just one blob.
-
-Add this code to the *MainPage.xaml.cs* file:
-
-```csharp
-async void List_Clicked(object sender, EventArgs e)
-{
- await foreach (BlobItem blobItem in containerClient.GetBlobsAsync())
- {
- resultsLabel.Text += blobItem.Name + "\n";
- }
-
- listButton.IsEnabled = false;
- downloadButton.IsEnabled = true;
-}
-```
-
-### Download blobs
-
-Download the blob you previously created by calling the [DownloadToAsync](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.downloadtoasync) method. The example code copies the `Stream` representation of the blob into a `MemoryStream` and then into a `StreamReader` to display the text.
-
-Add this code to the *MainPage.xaml.cs* file:
-
-```csharp
-async void Download_Clicked(object sender, EventArgs e)
-{
- BlobDownloadInfo downloadInfo = await blobClient.DownloadAsync();
-
- using MemoryStream memoryStream = new MemoryStream();
-
- await downloadInfo.Content.CopyToAsync(memoryStream);
- memoryStream.Position = 0;
-
- using StreamReader streamReader = new StreamReader(memoryStream);
-
- resultsLabel.Text += "Blob Contents: \n";
- resultsLabel.Text += await streamReader.ReadToEndAsync();
- resultsLabel.Text += "\n";
-
- downloadButton.IsEnabled = false;
- deleteButton.IsEnabled = true;
-}
-```
-
-### Delete a container
-
-The following code deletes the container and its blobs by using [DeleteAsync](/dotnet/api/azure.storage.blobs.blobcontainerclient.deleteasync).
-
-The app first prompts you to confirm the blob and container deletion. You can then verify that the resources were created correctly, before you delete them.
-
-Add this code to the *MainPage.xaml.cs* file:
-
-```csharp
-async void Delete_Clicked(object sender, EventArgs e)
-{
- var deleteContainer = await Application.Current.MainPage.DisplayAlert("Delete Container",
- "You're about to delete the container. Proceed?", "OK", "Cancel");
-
- if (deleteContainer == false)
- return;
-
- await containerClient.DeleteAsync();
-
- resultsLabel.Text += "Container Deleted";
-
- deleteButton.IsEnabled = false;
-}
-```
-
-## Run the code
-
-After you add all the code, press F5 to run the app on Windows, or press Cmd+Enter to run it on Mac. When the app starts, it first creates the container. You can then select the buttons to upload, list, and download the blobs, and delete the container.
-
-The app writes to the screen after every operation, with output similar to the following example:
-
-```output
-Container Created
-Blob Uploaded
-98d9a472-8e98-4978-ba4f-081d69d2e6f8-temp.txt
-Blob Contents:
-Hello World!
-Container Deleted
-```
-
-Before you begin the clean-up process, verify that the blob contents shown in the output match the text that you uploaded. After you verify the values, confirm the container deletion to finish the quickstart.
-
-## Next steps
-
-In this quickstart, you learned how to use Xamarin to create and delete containers, and upload, download, and list blobs, with the Azure Blob Storage client library v12.
-
-To see Blob storage sample apps, continue to:
-
-> [!div class="nextstepaction"]
-> [Azure Blob Storage SDK v12 Xamarin sample](https://github.com/Azure-Samples/storage-blobs-xamarin-quickstart)
--- For tutorials, samples, quick starts and other documentation, visit [Azure for mobile developers](/azure/mobile-apps).-- To learn more about Xamarin, see [Get started with Xamarin](/xamarin/get-started/).-
-Azure.Storage.Blobs reference links:
--- [API reference documentation](/dotnet/api/azure.storage.blobs)-- [Client library source code](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Azure.Storage.Blobs)-- [NuGet package](https://www.nuget.org/packages/Azure.Storage.Blobs)-
storage Storage Use Azcopy Blobs Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-blobs-copy.md
description: This article contains a collection of AzCopy example commands that
Previously updated : 06/13/2022 Last updated : 09/29/2022
See the [Get started with AzCopy](storage-use-azcopy-v10.md) article to download
> [!NOTE] > The examples in this article assume that you've provided authorization credentials by using Azure Active Directory (Azure AD) and that your Azure AD identity has the proper role assignments for both source and destination accounts. >
-> Alternatively, you can append a SAS token to either the source or destination URL in each AzCopy command. For example: `azcopy copy 'https://<source-storage-account-name>.blob.core.windows.net/<container-name>/<blob-path><SAS-token>' 'https://<destination-storage-account-name>.blob.core.windows.net/<container-name>/<blob-path><SAS-token>'`.
+> Alternatively you can append a SAS token to either the source or destination URL in each AzCopy command. For example: `azcopy copy 'https://<source-storage-account-name>.blob.core.windows.net/<container-name>/<blob-path><SAS-token>' 'https://<destination-storage-account-name>.blob.core.windows.net/<container-name>/<blob-path><SAS-token>'`.
## Guidelines
Apply the following guidelines to your AzCopy commands.
- If you copy to a premium block blob storage account, omit the access tier of a blob from the copy operation by setting the `s2s-preserve-access-tier` to `false` (For example: `--s2s-preserve-access-tier=false`). Premium block blob storage accounts don't support access tiers. -- If you copy to or from an account that has a hierarchical namespace, use `blob.core.windows.net` instead of `dfs.core.windows.net` in the URL syntax. [Multi-protocol access on Data Lake Storage](../blobs/data-lake-storage-multi-protocol-access.md) enables you to use `blob.core.windows.net`, and it is the only supported syntax for account to account copy scenarios.
+- If you copy to or from an account that has a hierarchical namespace, use `blob.core.windows.net` instead of `dfs.core.windows.net` in the URL syntax. [Multi-protocol access on Data Lake Storage](../blobs/data-lake-storage-multi-protocol-access.md) enables you to use `blob.core.windows.net`, and it's the only supported syntax for account to account copy scenarios.
- You can increase the throughput of copy operations by setting the value of the `AZCOPY_CONCURRENCY_VALUE` environment variable. To learn more, see [Increase Concurrency](storage-use-azcopy-optimize.md#increase-concurrency).
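As a sketch of that last guideline, you could set the variable before starting a copy job. The value `256` is only an illustrative example; see the linked article for guidance on choosing a value, and use `set` instead of `export` on Windows.

```bash
# Example only: raise AzCopy's request concurrency for this session.
export AZCOPY_CONCURRENCY_VALUE=256
azcopy copy 'https://mysourceaccount.blob.core.windows.net/mycontainer' 'https://mydestinationaccount.blob.core.windows.net/mycontainer' --recursive
```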
Copy a directory to another storage account by using the [azcopy copy](storage-r
azcopy copy 'https://mysourceaccount.blob.core.windows.net/mycontainer/myBlobDirectory' 'https://mydestinationaccount.blob.core.windows.net/mycontainer' --recursive ```
-The copy operation is synchronous so when the command returns, that indicates that all files have been copied.
+The copy operation is synchronous. All files have been copied when the command returns.
## Copy a container
Copy a container to another storage account by using the [azcopy copy](storage-r
azcopy copy 'https://mysourceaccount.blob.core.windows.net/mycontainer' 'https://mydestinationaccount.blob.core.windows.net/mycontainer' --recursive ```
-The copy operation is synchronous so when the command returns, that indicates that all files have been copied.
+The copy operation is synchronous. All files have been copied when the command returns.
## Copy containers, directories, and blobs
The copy operation is synchronous so when the command returns, that indicates th
Copy blobs to another storage account and add [blob index tags(preview)](../blobs/storage-manage-find-blobs.md) to the target blob.
-If you're using Azure AD authorization, your security principal must be assigned the [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner) role or it must be given permission to the `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write` [Azure resource provider operation](../../role-based-access-control/resource-provider-operations.md#microsoftstorage) via a custom Azure role. If you're using a Shared Access Signature (SAS) token, that token must provide access to the blob's tags via the `t` SAS permission.
+If you're using Azure AD authorization, your security principal must be assigned the [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner) role, or it must be given permission to the `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write` [Azure resource provider operation](../../role-based-access-control/resource-provider-operations.md#microsoftstorage) via a custom Azure role. If you're using a Shared Access Signature (SAS) token, that token must provide access to the blob's tags via the `t` SAS permission.
To add tags, use the `--blob-tags` option along with a URL encoded key-value pair.
azcopy copy 'https://mysourceaccount.blob.core.windows.net/mycontainer' 'https:/
azcopy copy 'https://mysourceaccount.blob.core.windows.net/' 'https://mydestinationaccount.blob.core.windows.net' --recursive --blob-tags='my%20tag=my%20tag%20value&my%20second%20tag=my%20second%20tag%20value' ```
-The copy operation is synchronous so when the command returns, that indicates that all files have been copied.
+The copy operation is synchronous. All files have been copied when the command returns.
> [!NOTE] > If you specify a directory, container, or account for the source, all the blobs that are copied to the destination will have the same tags that you specify in the command.
storage Storage Use Azcopy Blobs Upload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-blobs-upload.md
description: This article contains a collection of AzCopy example commands that
Previously updated : 04/02/2021 Last updated : 09/22/2022
For detailed reference, see the [azcopy copy](storage-ref-azcopy-copy.md) refere
You can upload a file and add [blob index tags(preview)](../blobs/storage-manage-find-blobs.md) to the target blob.
-If you're using Azure AD authorization, your security principal must be assigned the [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner) role or it must be given permission to the `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write` [Azure resource provider operation](../../role-based-access-control/resource-provider-operations.md#microsoftstorage) via a custom Azure role. If you're using a Shared Access Signature (SAS) token, that token must provide access to the blob's tags via the `t` SAS permission.
+If you're using Azure AD authorization, your security principal must be assigned the [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner) role, or it must be given permission to the `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write` [Azure resource provider operation](../../role-based-access-control/resource-provider-operations.md#microsoftstorage) via a custom Azure role. If you're using a Shared Access Signature (SAS) token, that token must provide access to the blob's tags via the `t` SAS permission.
To add tags, use the `--blob-tags` option along with a URL encoded key-value pair. For example, to add the key `my tag` and a value `my tag value`, you would add `--blob-tags='my%20tag=my%20tag%20value'` to the destination parameter.
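As a hedged end-to-end sketch of that option for an upload (the local path, storage account, and container names are placeholders in the style of the other examples, not values from your environment):

```bash
azcopy copy 'C:\myTextFile.txt' 'https://mystorageaccount.blob.core.windows.net/mycontainer/myTextFile.txt' --blob-tags='my%20tag=my%20tag%20value'
```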
storage Storage Use Azcopy Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-files.md
description: Transfer data with AzCopy and file storage. AzCopy is a command-lin
Previously updated : 04/02/2021 Last updated : 09/29/2022
To copy to a directory within the file share, just specify the name of that dire
azcopy copy 'C:\myDirectory' 'https://mystorageaccount.file.core.windows.net/myfileshare/myFileShareDirectory?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=%2FSOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B%2F3Eykf%2FJLs%3D' --recursive --preserve-smb-permissions=true --preserve-smb-info=true ```
-If you specify the name of a directory that does not exist in the file share, AzCopy creates a new directory by that name.
+If you specify the name of a directory that doesn't exist in the file share, AzCopy creates a new directory by that name.
### Upload the contents of a directory
For detailed reference, see the [azcopy copy](storage-ref-azcopy-copy.md) refere
#### Download from a share snapshot
-You can download a specific version of a file or directory by referencing the **DateTime** value of a share snapshot. To learn more about share snapshots see [Overview of share snapshots for Azure Files](../files/storage-snapshots-files.md).
+You can download a specific version of a file or directory by referencing the **DateTime** value of a share snapshot. To learn more about share snapshots, see [Overview of share snapshots for Azure Files](../files/storage-snapshots-files.md).
**Syntax**
azcopy copy 'https://mystorageaccount.file.core.windows.net/myfileshare/myFileSh
## Copy files between storage accounts
-You can use AzCopy to copy files to other storage accounts. The copy operation is synchronous so when the command returns, that indicates that all files have been copied.
+You can use AzCopy to copy files to other storage accounts. The copy operation is synchronous so all files are copied when the command returns.
AzCopy uses [server-to-server](/rest/api/storageservices/put-block-from-url) [APIs](/rest/api/storageservices/put-page-from-url), so data is copied directly between storage servers. These copy operations don't use the network bandwidth of your computer. You can increase the throughput of these operations by setting the value of the `AZCOPY_CONCURRENCY_VALUE` environment variable. To learn more, see [Increase Concurrency](storage-use-azcopy-optimize.md#increase-concurrency).
-You can also copy specific versions of a files by referencing the **DateTime** value of a share snapshot. To learn more about share snapshots see [Overview of share snapshots for Azure Files](../files/storage-snapshots-files.md).
+You can also copy specific versions of a file by referencing the **DateTime** value of a share snapshot. To learn more about share snapshots, see [Overview of share snapshots for Azure Files](../files/storage-snapshots-files.md).
This section contains the following examples:
storage Storage Use Azcopy V10 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-v10.md
description: AzCopy is a command-line utility that you can use to copy data to,
Previously updated : 06/14/2022 Last updated : 09/23/2022
First, download the AzCopy V10 executable file to any directory on your computer
These files are compressed as a zip file (Windows and Mac) or a tar file (Linux). To download and decompress the tar file on Linux, see the documentation for your Linux distribution.
-For detailed information on AzCopy releases see the [AzCopy release page](https://github.com/Azure/azure-storage-azcopy/releases).
+For detailed information on AzCopy releases, see the [AzCopy release page](https://github.com/Azure/azure-storage-azcopy/releases).
> [!NOTE] > If you want to copy data to and from your [Azure Table storage](../tables/table-storage-overview.md) service, then install [AzCopy version 7.3](https://aka.ms/downloadazcopynet).
The following table lists all AzCopy v10 commands. Each command links to a refer
|[azcopy make](storage-ref-azcopy-make.md?toc=/azure/storage/blobs/toc.json)|Creates a container or file share.| |[azcopy remove](storage-ref-azcopy-remove.md?toc=/azure/storage/blobs/toc.json)|Delete blobs or files from an Azure storage account.| |[azcopy sync](storage-ref-azcopy-sync.md?toc=/azure/storage/blobs/toc.json)|Replicates the source location to the destination location.|
+|[azcopy set-properties](storage-ref-azcopy-set-properties.md?toc=/azure/storage/blobs/toc.json)|Changes the access tier of one or more blobs, and replaces (overwrites) the metadata and index tags of one or more blobs.|
> [!NOTE] > AzCopy does not have a command to rename files.
The URL appears in the output of this command. Your script can then download AzC
#### Escape special characters in SAS tokens
-In batch files that have the `.cmd` extension, you'll have to escape the `%` characters that appear in SAS tokens. You can do that by adding an additional `%` character next to existing `%` characters in the SAS token string.
+In batch files that have the `.cmd` extension, you'll have to escape the `%` characters that appear in SAS tokens. You can do that by adding an extra `%` character next to existing `%` characters in the SAS token string.
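As a hedged illustration, suppose a SAS token contains the fragment `sig=abc%2Bdef%3D` (a made-up placeholder, not a real signature). Inside a `.cmd` batch file, each `%` is doubled so that the token reaches AzCopy unchanged:

```console
azcopy copy "C:\myDirectory" "https://mystorageaccount.blob.core.windows.net/mycontainer?sig=abc%%2Bdef%%3D" --recursive
```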
#### Run scripts by using Jenkins
If you plan to use [Jenkins](https://jenkins.io/) to run scripts, make sure to p
## Use in Azure Storage Explorer
-[Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) uses AzCopy to perform all of its data transfer operations. You can use [Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) if you want to leverage the performance advantages of AzCopy, but you prefer to use a graphical user interface rather than the command line to interact with your files.
+[Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) uses AzCopy to perform all of its data transfer operations. You can use [Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) if you want to apply the performance advantages of AzCopy, but you prefer to use a graphical user interface rather than the command line to interact with your files.
Storage Explorer uses your account key to perform operations, so after you sign into Storage Explorer, you won't need to provide additional authorization credentials.
storage Storage Files Identity Auth Active Directory Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-active-directory-enable.md
Previously updated : 08/29/2022 Last updated : 09/29/2022 # Overview - on-premises Active Directory Domain Services authentication over SMB for Azure file shares
-[Azure Files](storage-files-introduction.md) supports identity-based authentication over Server Message Block (SMB) using three different methods: on-premises Active Directory Domain Services (AD DS), Azure Active Directory Domain Services (Azure AD DS), and Azure Active Directory (Azure AD) Kerberos for hybrid identities (preview). We strongly recommend that you review the [How it works section](./storage-files-active-directory-overview.md#how-it-works) to select the right AD source for authentication. The setup is different depending on the domain service you choose. This article focuses on enabling and configuring Azure AD DS for authentication with Azure file shares.
+[Azure Files](storage-files-introduction.md) supports identity-based authentication over Server Message Block (SMB) using three different methods:
-If you're new to Azure file shares, we recommend reading our [planning guide](storage-files-planning.md) before reading the following series of articles.
+- On-premises Active Directory Domain Services (AD DS)
+- Azure Active Directory Domain Services (Azure AD DS)
+- Azure Active Directory (Azure AD) Kerberos for hybrid identities (preview)
+
+We strongly recommend that you review the [How it works section](./storage-files-active-directory-overview.md#how-it-works) to select the right AD source for authentication. The setup is different depending on the domain service you choose. This article focuses on enabling and configuring on-premises AD DS for authentication with Azure file shares.
+
+If you're new to Azure Files, we recommend reading our [planning guide](storage-files-planning.md) before reading the following series of articles.
## Applies to | File share type | SMB | NFS |
If you're new to Azure file shares, we recommend reading our [planning guide](st
- Supports single sign-on experience. - Only supported on clients running OS versions Windows 8/Windows Server 2012 or newer. - Only supported against the AD forest that the storage account is registered to. You can only access Azure file shares with the AD DS credentials from a single forest by default. If you need to access your Azure file share from a different forest, make sure that you have the proper forest trust configured, see the [FAQ](storage-files-faq.md#ad-ds--azure-ad-ds-authentication) for details.-- Does not support authentication against computer accounts created in AD DS.-- Does not support authentication against Network File System (NFS) file shares.-- Does not support using CNAME to mount file shares.
+- Doesn't support authentication against computer accounts created in AD DS.
+- Doesn't support authentication against Network File System (NFS) file shares.
+- Doesn't support using CNAME to mount file shares.
-When you enable AD DS for Azure file shares over SMB, your AD DS-joined machines can mount Azure file shares using your existing AD DS credentials. This capability can be enabled with an AD DS environment hosted either in on-premises machines or hosted in Azure.
+When you enable AD DS for Azure file shares over SMB, your AD DS-joined machines can mount Azure file shares using your existing AD DS credentials. This capability can be enabled with an AD DS environment hosted either in on-premises machines or hosted on a virtual machine (VM) in Azure.
## Videos
-To help you setup Azure Files AD authentication for some common use cases, we published two videos with step by step guidance for the following scenarios:
+To help you set up identity-based authentication for some common use cases, we published two videos with step-by-step guidance for the following scenarios:
| Replacing on-premises file servers with Azure Files (including setup on private link for files and AD authentication) | Using Azure Files as the profile container for Azure Virtual Desktop (including setup on AD authentication and FSLogix configuration) | |-|-|
Before you enable AD DS authentication for Azure file shares, make sure you've c
- Domain-join an on-premises machine or an Azure VM to on-premises AD DS. For information about how to domain-join, refer to [Join a Computer to a Domain](/windows-server/identity/ad-fs/deployment/join-a-computer-to-a-domain).
- If your machine is not domain joined to an AD DS, you may still be able to leverage AD credentials for authentication if your machine has line of sight to the AD domain controller.
+ If your machine isn't domain joined to an AD DS, you may still be able to leverage AD credentials for authentication if your machine has line of sight to the AD domain controller.
- Select or create an Azure storage account. For optimal performance, we recommend that you deploy the storage account in the same region as the client from which you plan to access the share. Then, [mount the Azure file share](storage-how-to-use-files-windows.md) with your storage account key. Mounting with the storage account key verifies connectivity.
- Make sure that the storage account containing your file shares isn't already configured for Azure AD DS Authentication. If Azure Files Azure AD DS authentication is enabled on the storage account, it needs to be disabled before changing to use on-premises AD DS. This implies that existing ACLs configured in Azure AD DS environment will need to be reconfigured for proper permission enforcement.
-
+ Make sure that the storage account containing your file shares isn't already configured for identity-based authentication. If an AD source is already enabled on the storage account, you must disable it before enabling on-premises AD DS.
If you experience issues in connecting to Azure Files, refer to [the troubleshooting tool we published for Azure Files mounting errors on Windows](https://azure.microsoft.com/blog/new-troubleshooting-diagnostics-for-azure-files-mounting-errors-on-windows/). - - Make any relevant networking configuration prior to enabling and configuring AD DS authentication to your Azure file shares. See [Azure Files networking considerations](storage-files-networking-overview.md) for more information. ## Regional availability
Azure Files authentication with AD DS is available in [all Azure Public, China a
If you plan to enable any networking configurations on your file share, we recommend you read the [networking considerations](./storage-files-networking-overview.md) article and complete the related configuration before enabling AD DS authentication.
-Enabling AD DS authentication for your Azure file shares allows you to authenticate to your Azure file shares with your on-premises AD DS credentials. Further, it allows you to better manage your permissions to allow granular access control. Doing this requires synching identities from on-premises AD DS to Azure AD with AD Connect. You control the share level access with identities synced to Azure AD while managing file/share level access with on-premises AD DS credentials.
+Enabling AD DS authentication for your Azure file shares allows you to authenticate to your Azure file shares with your on-premises AD DS credentials. Further, it allows you to better manage your permissions to allow granular access control. Doing this requires synching identities from on-premises AD DS to Azure AD with AD Connect. You assign share-level permissions to hybrid identities synced to Azure AD while managing file/directory level access using Windows ACLs.
-Next, follow the steps below to set up Azure Files for AD DS Authentication:
+Follow these steps to set up Azure Files for AD DS authentication:
1. [Part one: enable AD DS authentication on your storage account](storage-files-identity-ad-ds-enable.md)
-1. [Part two: assign access permissions for a share to the Azure AD identity (a user, group, or service principal) that is in sync with the target AD identity](storage-files-identity-ad-ds-assign-permissions.md)
+1. [Part two: assign share-level permissions to the Azure AD identity (a user, group, or service principal) that is in sync with the target AD identity](storage-files-identity-ad-ds-assign-permissions.md)
1. [Part three: configure Windows ACLs over SMB for directories and files](storage-files-identity-ad-ds-configure-permissions.md)
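For part two of the steps above, a minimal Azure PowerShell sketch of a share-level role assignment is shown below. The account, share, and group names are placeholders, and the scope format and built-in role name follow the common Azure Files SMB data roles; adjust both to your environment.

```powershell
# Hedged sketch: grant a hybrid identity (an Azure AD group synced from on-premises AD DS)
# share-level access by using one of the built-in SMB data roles.
# All names and the subscription ID are placeholders.
$scope = "/subscriptions/<subscription-id>/resourceGroups/<rg-name>" +
         "/providers/Microsoft.Storage/storageAccounts/<storage-account>" +
         "/fileServices/default/fileshares/<share-name>"

New-AzRoleAssignment -ObjectId "<azure-ad-group-object-id>" `
    -RoleDefinitionName "Storage File Data SMB Share Contributor" `
    -Scope $scope
```

Part three (Windows ACLs) is then done over SMB with standard tools such as icacls or File Explorer, as the linked article describes.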
Next, follow the steps below to set up Azure Files for AD DS Authentication:
1. [Update the password of your storage account identity in AD DS](storage-files-identity-ad-ds-update-password.md)
-The following diagram illustrates the end-to-end workflow for enabling Azure AD authentication over SMB for Azure file shares.
+The following diagram illustrates the end-to-end workflow for enabling AD DS authentication over SMB for Azure file shares.
![Files AD workflow diagram](media/storage-files-active-directory-domain-services-enable/diagram-files-ad.png)
stream-analytics Stream Analytics User Assigned Managed Identity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-user-assigned-managed-identity-overview.md
Previously updated : 03/22/2022 Last updated : 09/29/2022 # User-assigned managed identities for Azure Stream Analytics (preview)
After creating your user-assigned identity and configuring your input and output
2. Under **connection status** click on **try regranting access** to switch from system-assigned to user-assigned. 3. Wait for a few minutes for the input/output to be granted access to the job.
-You can select each input and output on the endpoint management to manually configure an adapter to the job.
+> [!NOTE]
+> You can select each input and output on the endpoint management to manually configure an adapter to the job.
++
+## Other scenarios and limitations
+With support for both system-assigned and user-assigned identities, here are some scenarios and limitations to be aware of when configuring your Azure Stream Analytics job:
+
+1. You can switch from a system-assigned identity to a user-assigned identity and vice versa. When you switch from a user-assigned identity to another identity, the user-assigned identity isn't deleted, because you created it. You have to manually remove it from your storage access control list (a PowerShell sketch follows this list).
+2. You can switch from an existing user-assigned identity to a newly created user-assigned identity. The previous identity isn't removed from the storage access control list.
+3. You can't add multiple identities to your Stream Analytics job.
+4. Deleting an identity from a Stream Analytics job isn't currently supported. You can replace it with another user-assigned or system-assigned identity.
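As mentioned in the first point, cleaning up a replaced user-assigned identity is a manual step. The sketch below assumes the identity was granted Storage Blob Data Contributor on a storage account; swap in whatever role and scope you originally assigned.

```powershell
# Hedged sketch: remove a replaced user-assigned identity's role assignment from the
# storage account's access control list, then delete the identity if nothing else uses it.
# Names, the subscription ID, and the role are placeholders/assumptions.
$identity = Get-AzUserAssignedIdentity -ResourceGroupName "<rg-name>" -Name "<identity-name>"

Remove-AzRoleAssignment -ObjectId $identity.PrincipalId `
    -RoleDefinitionName "Storage Blob Data Contributor" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.Storage/storageAccounts/<storage-account>"

Remove-AzUserAssignedIdentity -ResourceGroupName "<rg-name>" -Name "<identity-name>"
```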
## Next steps
synapse-analytics Continuous Integration Delivery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/cicd/continuous-integration-delivery.md
If you're using Git integration with your Azure Synapse workspace and you have a
## Troubleshoot artifacts deployment
-### Publish failed: workspace arm file is more then 20mb
-
-There is a file size limitation in git provider, for example, in Azure DevOps the maximum file size is 20Mb. Once the workspace template file size exceeds 20Mb, this error happens when you publish changes in Synapse studio, in which the workspace template file is generated and synced to git. To solve the issue, you can use the Synapse deployment task with **validate** or **validate and deploy** operation to save the workspace template file directly into the pipeline agent and without manual publish in synapse studio.
-
- ### Use the Synapse workspace deployment task to deploy Synapse artifacts
-In Azure Synapse, unlike in Data Factory, artifacts aren't Resource Manager resources. You can't use the ARM template deployment task to deploy Azure Synapse artifacts. Instead, use the Synapse workspace deployment task to deploy the artifacts, and use ARM deployment task for ARM resources (pools and workspace) deployment. Meanwhile this extension only supports Synapse templates where resources have type Microsoft.Synapse
+In Azure Synapse, unlike in Data Factory, artifacts aren't Resource Manager resources. You can't use the ARM template deployment task to deploy Azure Synapse artifacts. Instead, use the Synapse workspace deployment task to deploy the artifacts, and use the ARM deployment task for ARM resource (pools and workspace) deployment. This task only supports Synapse templates where resources have the type Microsoft.Synapse. With this task, users can deploy changes from any branch automatically, without manually selecting **Publish** in Synapse Studio. The following are some frequently raised issues.
+
+#### 1. Publish failed: workspace ARM file is more than 20 MB
Git providers have a file size limitation; in Azure DevOps, for example, the maximum file size is 20 MB. This error occurs when the workspace template file exceeds 20 MB and you publish changes in Synapse Studio, which generates the workspace template file and syncs it to Git. To solve the issue, use the Synapse deployment task with the **validate** or **validate and deploy** operation to save the workspace template file directly to the pipeline agent, without manually publishing in Synapse Studio.
-### Unexpected token error in release
+#### 2. Unexpected token error in release
If your parameter file has parameter values that aren't escaped, the release pipeline fails to parse the file and generates an `unexpected token` error. We suggest that you override parameters or use Key Vault to retrieve parameter values. You also can use double escape characters to resolve the issue.
-### Integration runtime deployment failed
+#### 3. Integration runtime deployment failed
This error occurs if the workspace template was generated from a workspace with managed virtual network enabled and you try to deploy it to a regular workspace, or vice versa.
-### Unexpected character encountered while parsing value
+#### 4. Unexpected character encountered while parsing value
The template file can't be parsed. Try escaping the backslashes, for example, \\\\Test01\\Test
-### Failed to fetch workspace info, Not found.
+#### 5. Failed to fetch workspace info, Not found
The target workspace info isn't correctly configured. Make sure the service connection you created is scoped to the resource group that contains the workspace.
-### Artifact deletion failed.
+#### 6. Artifact deletion failed
The extension compares the artifacts present in the publish branch with the template and deletes artifacts based on the difference. Make sure you aren't trying to delete an artifact that's present in the publish branch while another artifact has a reference or dependency on it.
-### Deployment failed with error: json position 0
+#### 7. Deployment failed with error: json position 0
This error occurs if the template was manually edited. Make sure that you haven't manually edited the template file.
-### The document creation or update failed because of invalid reference.
+#### 8. The document creation or update failed because of invalid reference
An artifact in Synapse can be referenced by another artifact. If you have parameterized an attribute that is referenced in an artifact, make sure to provide a correct, non-null value for it.
-### Failed to fetch the deployment status in notebook deployment
+#### 9. Failed to fetch the deployment status in notebook deployment
The notebook you're trying to deploy is attached to a Spark pool in the workspace template file, but that pool doesn't exist in the target workspace. If you don't parameterize the pool name, make sure the pools have the same name across environments.
synapse-analytics Connect Synapse Link Sql Database Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/connect-synapse-link-sql-database-vnet.md
+
+ Title: Configure Synapse link for Azure SQL Database with network security (Preview)
+description: Learn how to configure Synapse link for Azure SQL Database with network security (Preview).
++++ Last updated : 09/28/2022++++
+# Configure Synapse link for Azure SQL Database with network security (Preview)
+
+This article is a guide to configuring Azure Synapse Link for Azure SQL Database with network security. Before you read it, you should already know how to create and start Azure Synapse Link for Azure SQL Database. See [Get started with Azure Synapse Link for Azure SQL Database](connect-synapse-link-sql-database.md).
+
+> [!IMPORTANT]
+> Azure Synapse Link for SQL is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Managed workspace Virtual Network without data exfiltration
+
+1. Create a Synapse workspace with a managed virtual network enabled. Enable **Managed virtual network** and select **No** to allow outbound traffic from the workspace to any target (a scripted sketch follows these steps). For more information, see [managed workspace virtual networks](../security/synapse-workspace-managed-vnet.md).
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/create-synapse-workspace-allow-outbound-traffic.png" alt-text="Screenshot of creating synapse workspace allow outbound traffic.":::
+
+1. Navigate to your Synapse workspace in the Azure portal, go to the **Networking** tab, and enable **Allow Azure Synapse Link for Azure SQL Database to bypass firewall rules**.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/enable-bypass-firewall-rules.png" alt-text="Screenshot of enabling bypass firewall rules.":::
+
+1. Launch Synapse Studio, navigate to **Manage**, select **Integration runtimes**, and then select **AutoResolvingIntegrationRuntime**. In the pane that opens, select the **Virtual network** tab and enable **Interactive authoring**.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/enable-interactive-authoring.png" alt-text="Screenshot of enabling interactive authoring.":::
+
+1. Now you can create a link connection from the **Integrate** tab to replicate data from your Azure SQL database to a Synapse SQL pool.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/create-link.png" alt-text="Screenshot of creating a link.":::
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/create-link-sql-db.png" alt-text="Screenshot of creating link sql db.":::
+
+1. Start your link connection
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/start-link.png" alt-text="Screenshot of starting link.":::
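If you prefer to script step 1, the following is a hedged Azure PowerShell sketch using the Az.Synapse module. The cmdlet and parameter names (`New-AzSynapseManagedVirtualNetworkConfig`, `-ManagedVirtualNetwork`) reflect the managed virtual network documentation but should be verified against your installed module version; all other names are placeholders.

```powershell
# Hedged sketch: create a Synapse workspace with a managed virtual network enabled and
# outbound traffic allowed (the "No" selection above). Verify parameter names against
# your Az.Synapse version; names below are placeholders.
$sqlAdminCreds = Get-Credential   # SQL administrator login for the workspace

# No -PreventDataExfiltration switch: outbound traffic from the workspace is allowed.
$managedVnet = New-AzSynapseManagedVirtualNetworkConfig

New-AzSynapseWorkspace -ResourceGroupName "<rg-name>" `
    -Name "<workspace-name>" `
    -Location "<region>" `
    -DefaultDataLakeStorageAccountName "<adls-gen2-account>" `
    -DefaultDataLakeStorageFilesystem "<filesystem>" `
    -SqlAdministratorLoginCredential $sqlAdminCreds `
    -ManagedVirtualNetwork $managedVnet
```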
++
+## Managed workspace Virtual Network with data exfiltration
+
+1. Create a Synapse workspace with a managed virtual network enabled. Enable **Managed virtual network** and select **Yes** to limit outbound traffic from the managed workspace virtual network to targets through managed private endpoints. For more information, see [managed workspace virtual networks](../security/synapse-workspace-managed-vnet.md).
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/create-synapse-workspace-disallow-outbound-traffic.png" alt-text="Screenshot of creating synapse workspace disallow outbound traffic.":::
+
+1. Navigate to your Synapse workspace in the Azure portal, go to the **Networking** tab, and enable **Allow Azure Synapse Link for Azure SQL Database to bypass firewall rules**.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/enable-bypass-firewall-rules.png" alt-text="Screenshot of enabling bypass firewall rules.":::
+
+1. Launch Synapse Studio, navigate to **Manage**, select **Integration runtimes**, and then select **AutoResolvingIntegrationRuntime**. In the pane that opens, select the **Virtual network** tab and enable **Interactive authoring**.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/enable-interactive-authoring.png" alt-text="Screenshot of enabling interactive authoring.":::
+
+1. Create a linked service connecting to Azure SQL DB with managed private endpoint enabled.
+
+ * Create a linked service connecting to Azure SQL DB.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/new-sql-db-linked-service-pe.png" alt-text="Screenshot of new sql db linked service pe.":::
+
+ * Create a managed private endpoint in the linked service for Azure SQL DB (a scripted sketch follows these steps).
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/new-sql-db-linked-service-pe1.png" alt-text="Screenshot of new sql db linked service pe1.":::
+
+ * Complete the managed private endpoint creation in the linked service for Azure SQL DB.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/new-sql-db-linked-service-pe2.png" alt-text="Screenshot of new sql db linked service pe2.":::
+
+ * Go to the Azure portal for the SQL server that hosts your Azure SQL database source, and approve the private endpoint connections.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/new-sql-db-linked-service-pe3.png" alt-text="Screenshot of new sql db linked service pe3.":::
+
+1. Now you can create a link connection from the **Integrate** tab to replicate data from your Azure SQL database to a Synapse SQL pool.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/create-link.png" alt-text="Screenshot of creating a link.":::
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/create-link-sql-db.png" alt-text="Screenshot of creating link sqldb.":::
+
+1. Start your link connection
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/start-link.png" alt-text="Screenshot of starting link.":::
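The managed private endpoint created in the linked-service steps above can also be scripted. The sketch below is an assumption-heavy example: it uses `New-AzSynapseManagedPrivateEndpoint` with a JSON definition file, and both the cmdlet parameters and the definition schema (`privateLinkResourceId`, `groupId`) should be validated against the current Az.Synapse module before use.

```powershell
# Hedged sketch: create a managed private endpoint from the Synapse workspace to the
# logical SQL server that hosts the source database. The definition-file schema is an
# assumption; names and the subscription ID are placeholders.
$definition = @"
{
  "name": "<endpoint-name>",
  "properties": {
    "privateLinkResourceId": "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.Sql/servers/<sql-server-name>",
    "groupId": "sqlServer"
  }
}
"@
Set-Content -Path ./managed-pe.json -Value $definition

New-AzSynapseManagedPrivateEndpoint -WorkspaceName "<workspace-name>" `
    -Name "<endpoint-name>" `
    -DefinitionFile ./managed-pe.json

# The connection still has to be approved on the SQL server's Private endpoint connections page.
```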
+
++
+## Next steps
+
+If you are using a different type of database, see how to:
+
+* [Configure Azure Synapse Link for Azure Cosmos DB](../../cosmos-db/configure-synapse-link.md?context=/azure/synapse-analytics/context/context)
+* [Configure Azure Synapse Link for Dataverse](/powerapps/maker/data-platform/azure-synapse-link-synapse?context=/azure/synapse-analytics/context/context)
+* [Get started with Azure Synapse Link for SQL Server 2022](connect-synapse-link-sql-server-2022.md)
synapse-analytics Connect Synapse Link Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/connect-synapse-link-sql-database.md
This article provides a step-by-step guide for getting started with Azure Synaps
## Prerequisites
-* [Create a new Synapse workspace](https://portal.azure.com/#create/Microsoft.Synapse) to get Azure Synapse Link for SQL. Make sure to check "Disable Managed virtual network" and "Allow connections from all IP address" when creating Synapse workspace.
+* [Create a new Synapse workspace](https://portal.azure.com/#create/Microsoft.Synapse) to get Azure Synapse Link for SQL. This tutorial creates Azure Synapse Link for SQL in a public network. It assumes that you selected "Disable Managed virtual network" and "Allow connections from all IP addresses" when you created the Synapse workspace. If you want to configure Azure Synapse Link for Azure SQL Database with network security, see [Configure Synapse link for Azure SQL Database with network security (Preview)](connect-synapse-link-sql-database-vnet.md).
* For DTU-based provisioning, make sure your Azure SQL Database service is at least Standard tier with a minimum of 100 DTUs. Free, Basic, or Standard tiers with fewer than 100 DTUs provisioned are not supported.
synapse-analytics Connect Synapse Link Sql Server 2022 Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/connect-synapse-link-sql-server-2022-vnet.md
+
+ Title: Configure Synapse link for SQL Server 2022 with network security (Preview)
+description: Learn how to configure Synapse link for SQL Server 2022 with network security (Preview).
++++ Last updated : 09/28/2022++++
+# Configure Synapse link for SQL Server 2022 with network security (Preview)
+
+This article is a guide to configuring Azure Synapse Link for SQL Server 2022 with network security. Before you read it, you should already know how to create and start Azure Synapse Link for SQL Server 2022. See [Get started with Azure Synapse Link for SQL Server 2022](connect-synapse-link-sql-server-2022.md).
+
+> [!IMPORTANT]
+> Azure Synapse Link for SQL is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Managed workspace Virtual Network without data exfiltration
+
+1. Create a Synapse workspace with a managed virtual network enabled. Enable **Managed virtual network** and select **No** to allow outbound traffic from the workspace to any target. For more information, see [managed workspace virtual networks](../security/synapse-workspace-managed-vnet.md).
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/create-synapse-workspace-allow-outbound-traffic.png" alt-text="Screenshot of creating synapse workspace allow outbound traffic.":::
+
+1. Navigate to your Synapse workspace in the Azure portal, go to the **Networking** tab, and enable **Allow Azure Synapse Link for Azure SQL Database to bypass firewall rules**.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/enable-bypass-firewall-rules.png" alt-text="Screenshot of enabling bypass firewall rules.":::
+
+1. Launch Synapse Studio, navigate to **Manage**, select **Integration runtimes**, and then select **AutoResolvingIntegrationRuntime**. In the pane that opens, select the **Virtual network** tab and enable **Interactive authoring**.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/enable-interactive-authoring.png" alt-text="Screenshot of enabling interactive authoring.":::
+
+1. Now you can create a link connection from the **Integrate** tab to replicate data from SQL Server 2022 to a Synapse SQL pool.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/create-link.png" alt-text="Screenshot of creating a link.":::
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/create-link-sql-server.png" alt-text="Screenshot of creating link sql server.":::
+
+1. Start your link connection
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/start-link.png" alt-text="Screenshot of starting a link.":::
++
+## Managed workspace Virtual Network with data exfiltration
+
+1. Create a Synapse workspace with a managed virtual network enabled. Enable **Managed virtual network** and select **Yes** to limit outbound traffic from the managed workspace virtual network to targets through managed private endpoints. For more information, see [managed workspace virtual networks](../security/synapse-workspace-managed-vnet.md).
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/create-synapse-workspace-disallow-outbound-traffic.png" alt-text="Screenshot of creating synapse workspace disallow outbound traffic.":::
+
+1. Navigate to your Synapse workspace in the Azure portal, go to the **Networking** tab, and enable **Allow Azure Synapse Link for Azure SQL Database to bypass firewall rules**.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/enable-bypass-firewall-rules.png" alt-text="Screenshot of enabling bypass firewall rules.":::
+
+1. Launch Synapse Studio, navigate to **Manage**, select **Integration runtimes**, and then select **AutoResolvingIntegrationRuntime**. In the pane that opens, select the **Virtual network** tab and enable **Interactive authoring**.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/enable-interactive-authoring.png" alt-text="Screenshot of enabling interactive authoring.":::
+
+1. Create a linked service connecting to SQL Server 2022. For more information, see [Create a linked service for your source SQL Server 2022](connect-synapse-link-sql-server-2022.md#create-linked-service-for-your-source-sql-server-2022).
+
+1. Add a role assignment to make sure that you've granted your Synapse workspace's managed identity permissions to the ADLS Gen2 storage account used as the landing zone (a PowerShell sketch follows these steps). For more information, see [Create a linked service to connect to your landing zone on Azure Data Lake Storage Gen2](connect-synapse-link-sql-server-2022.md#create-linked-service-to-connect-to-your-landing-zone-on-azure-data-lake-storage-gen2).
+
+1. Create a linked service connecting to ADLS Gen2 storage (the landing zone) with a managed private endpoint enabled.
+
+ * Create a managed private endpoint in linked service for ADLS Gen2 storage.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/new-sql-server-linked-service-pe1.png" alt-text="Screenshot of new sql db linked service pe1.":::
+
+ * Complete the managed private endpoint creation in the linked service for ADLS Gen2 storage.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/new-sql-server-linked-service-pe2.png" alt-text="Screenshot of new sql db linked service pe2.":::
+
+ * Go to the Azure portal for the ADLS Gen2 storage account that serves as the landing zone, and approve the private endpoint connections.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/new-sql-server-linked-service-pe3.png" alt-text="Screenshot of new sql db linked service pe3.":::
+
+ * Complete the creation of linked service for ADLS Gen2 storage.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/new-sql-server-linked-service-pe4.png" alt-text="Screenshot of new sql db linked service pe4.":::
+
+1. Now you can create a link connection from the **Integrate** tab to replicate data from SQL Server 2022 to a Synapse SQL pool.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/create-link.png" alt-text="Screenshot of creating a link.":::
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/create-link-sql-server.png" alt-text="Screenshot of creating link sqldb.":::
+
+1. Start your link connection
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/start-link.png" alt-text="Screenshot of starting link.":::
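For the role-assignment step above, a minimal Azure PowerShell sketch is shown below. It assumes the workspace's system-assigned managed identity and the Storage Blob Data Contributor role on the landing-zone account; the property path for the identity and all names are placeholders to verify in your environment.

```powershell
# Hedged sketch: grant the Synapse workspace's managed identity access to the ADLS Gen2
# landing zone. Names and the subscription ID are placeholders; the Identity property
# path is an assumption to confirm against your Az.Synapse version.
$workspace = Get-AzSynapseWorkspace -ResourceGroupName "<rg-name>" -Name "<workspace-name>"

New-AzRoleAssignment -ObjectId $workspace.Identity.PrincipalId `
    -RoleDefinitionName "Storage Blob Data Contributor" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.Storage/storageAccounts/<landing-zone-account>"
```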
+
++
+## Next steps
+
+If you are using a different type of database, see how to:
+
+* [Configure Azure Synapse Link for Azure Cosmos DB](../../cosmos-db/configure-synapse-link.md?context=/azure/synapse-analytics/context/context)
+* [Configure Azure Synapse Link for Dataverse](/powerapps/maker/data-platform/azure-synapse-link-synapse?context=/azure/synapse-analytics/context/context)
+* [Get started with Azure Synapse Link for SQL Server 2022](connect-synapse-link-sql-server-2022.md)
synapse-analytics Connect Synapse Link Sql Server 2022 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/connect-synapse-link-sql-server-2022.md
# Get started with Azure Synapse Link for SQL Server 2022 (Preview)
-This article provides a step-by-step guide for getting started with Azure Synapse Link for SQL Server 2022. For more information, see [Get started with Azure Synapse Link for SQL Server 2022 (Preview)](sql-server-2022-synapse-link.md).
+This article provides a step-by-step guide for getting started with Azure Synapse Link for SQL Server 2022. For more information, see [Azure Synapse Link for SQL Server 2022](sql-server-2022-synapse-link.md).
> [!IMPORTANT] > Azure Synapse Link for SQL is currently in PREVIEW.
This article provides a step-by-step guide for getting started with Azure Synaps
## Prerequisites
-* [Create a new Synapse workspace](https://portal.azure.com/#create/Microsoft.Synapse) to get Azure Synapse Link for SQL. Ensure to check "Disable Managed virtual network" and "Allow connections from all IP address" when creating Synapse workspace. If you have a workspace created after May 24, 2022, you do not need to create a new workspace.
+* [Create a new Synapse workspace](https://portal.azure.com/#create/Microsoft.Synapse) to get Azure Synapse Link for SQL. This tutorial creates Azure Synapse Link for SQL in a public network. It assumes that you selected "Disable Managed virtual network" and "Allow connections from all IP addresses" when you created the Synapse workspace. If you want to configure Azure Synapse Link for SQL Server 2022 with network security, see [Configure Synapse link for SQL Server 2022 with network security (Preview)](connect-synapse-link-sql-server-2022-vnet.md).
+ * Create an Azure Data Lake Storage Gen2 account (different from the account created with the Azure Synapse Analytics workspace) to use as the landing zone to stage the data submitted by SQL Server 2022. For more details, see [how to create an Azure Data Lake Storage Gen2 account](../../storage/blobs/create-data-lake-storage-account.md).
virtual-desktop Manage App Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/manage-app-groups.md
If you've already created a host pool and session host VMs using the Azure porta
- Under **Application source**, select **Start menu** from the drop-down menu. Next, under **Application**, choose the application from the drop-down menu. > [!div class="mx-imgBorder"]
- > ![A screenshot of the add application screen with the Start menu selected.](media/add-app-start.png)
+ > ![A screenshot of the add application screen. The user has selected the Character Map as the application source and entered Character Map in the display name field.](media/add-app-start.png)
- In **Display name**, enter the name for the application that will be shown to the user on their client.
If you've already created a host pool and session host VMs using the Azure porta
- Select **Save**. > [!div class="mx-imgBorder"]
- > ![A screenshot of the add application page with file path selected.](media/add-app-file.png)
+ > ![A screenshot of the add application page. The user has entered the file path to the 7-Zip File Manager app.](media/add-app-file.png)
14. Repeat this process for every application you want to add to the application group.
virtual-desktop Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/prerequisites.md
There are different automation and deployment options available depending on whi
|--|::|::|::|::|
|Windows 11 Enterprise multi-session|Yes|Yes|Yes|Yes|
|Windows 11 Enterprise|Yes|Yes|No|No|
-|Windows 10 Enterprise multi-session, version 1909 and later|Yes|Yes|Yes|Yes|
-|Windows 10 Enterprise, version 1909 and later|Yes|Yes|No|No|
+|Windows 10 Enterprise multi-session, version 20H2 and later|Yes|Yes|Yes|Yes|
+|Windows 10 Enterprise, version 20H2 and later|Yes|Yes|No|No|
|Windows 7 Enterprise|Yes|Yes|No|No|
|Windows Server 2022|Yes|Yes|No|No|
|Windows Server 2019|Yes|Yes|Yes|Yes|
virtual-machine-scale-sets Virtual Machine Scale Sets Health Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-health-extension.md
# Using Application Health extension with virtual machine scale sets
-Monitoring your application health is an important signal for managing and upgrading your deployment. Azure virtual machine scale sets provide support for [rolling upgrades](virtual-machine-scale-sets-upgrade-scale-set.md#how-to-bring-vms-up-to-date-with-the-latest-scale-set-model) including [automatic OS-image upgrades](virtual-machine-scale-sets-automatic-upgrade.md), which rely on health monitoring of the individual instances to upgrade your deployment. You can also use health extension to monitor the application health of each instance in your scale set and perform instance repairs using [automatic instance repairs](virtual-machine-scale-sets-automatic-instance-repairs.md).
+Monitoring your application health is an important signal for managing and upgrading your deployment. Azure Virtual Machine Scale Sets provide support for [Rolling Upgrades](virtual-machine-scale-sets-upgrade-scale-set.md#how-to-bring-vms-up-to-date-with-the-latest-scale-set-model) including [Automatic OS-Image Upgrades](virtual-machine-scale-sets-automatic-upgrade.md) and [Automatic VM Guest Patching](https://learn.microsoft.com/azure/virtual-machines/automatic-vm-guest-patching), which rely on health monitoring of the individual instances to upgrade your deployment. You can also use Application Health Extension to monitor the application health of each instance in your scale set and perform instance repairs using [Automatic Instance Repairs](virtual-machine-scale-sets-automatic-instance-repairs.md).
This article describes how you can use the Application Health extension to monitor the health of your applications deployed on virtual machine scale sets.
virtual-machines Disks Cross Tenant Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-cross-tenant-customer-managed-keys.md
If you have questions about cross-tenant customer-managed keys with managed disk
## Limitations
-Currently this feature is only available in the North Central US, West Central US, and West US regions. Managed Disks and the customer's Key Vault must be in the same Azure region, but they can be in different subscriptions. This feature doesn't support Ultra Disks or Azure Premium SSD v2 managed disks.
+- Currently this feature is only available in the North Central US, West Central US, and West US regions.
+- Managed Disks and the customer's Key Vault must be in the same Azure region, but they can be in different subscriptions.
+- This feature doesn't support Ultra Disks or Azure Premium SSD v2 managed disks.
[!INCLUDE [active-directory-msi-cross-tenant-cmk-overview](../../includes/active-directory-msi-cross-tenant-cmk-overview.md)]
virtual-machines Azure Hybrid Benefit Byos Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/azure-hybrid-benefit-byos-linux.md
Title: Azure Hybrid Benefit for BYOS Linux virtual machines
+ Title: Azure Hybrid Benefit BYOS to PAYG capability
description: Learn how Azure Hybrid Benefit can provide updates and support for Linux virtual machines.
-# Explore Azure Hybrid Benefit for bring-your-own-subscription Linux virtual machines
+# Explore Azure Hybrid Benefit bring-your-own-subscription to pay-as-you-go conversion for Linux virtual machines
-Azure Hybrid Benefit provides software updates and integrated support directly from Azure for Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) virtual machines. Azure Hybrid Benefit for bring-your-own-subscription (BYOS) virtual machines is a licensing benefit that lets you switch RHEL and SLES BYOS virtual machines generated from custom on-premises images or from Azure Marketplace to pay-as-you-go billing.
+Azure Hybrid Benefit now provides software updates and integrated support directly from Azure infrastructure for Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) virtual machines. Azure Hybrid Benefit for bring-your-own-subscription (BYOS) virtual machines is a licensing benefit that lets you switch RHEL and SLES BYOS virtual machines generated from custom on-premises images or from Azure Marketplace to pay-as-you-go billing.
>[!IMPORTANT] > To do the reverse and switch from a RHEL pay-as-you-go virtual machine or SLES pay-as-you-go virtual machine to a BYOS virtual machine, see [Explore Azure Hybrid Benefit for pay-as-you-go Linux virtual machines](./azure-hybrid-benefit-linux.md).
Azure Hybrid Benefit converts BYOS billing to pay-as-you-go, so that you pay onl
## Which Linux virtual machines qualify for Azure Hybrid Benefit?
-Azure Hybrid Benefit for BYOS virtual machines is available to all RHEL and SLES virtual machines that come from a custom image. It's also available to all RHEL and SLES BYOS virtual machines that come from an Azure Marketplace image.
+Azure Hybrid Benefit BYOS to PAYG capability is available to all RHEL and SLES virtual machines that come from a custom image. It's also available to all RHEL and SLES BYOS virtual machines that come from an Azure Marketplace image.
-Azure dedicated host instances and SQL hybrid benefits are not eligible for Azure Hybrid Benefit if you already use Azure Hybrid Benefit with Linux virtual machines. Azure Hybrid Benefit for BYOS virtual machines does not support virtual machine scale sets and reserved instances (RIs).
+Azure dedicated host instances and SQL hybrid benefits are not eligible for Azure Hybrid Benefit if you already use Azure Hybrid Benefit with Linux virtual machines. Azure Hybrid Benefit BYOS to PAYG capability does not support virtual machine scale sets and reserved instances (RIs).
## Get started
After you successfully install the `AHBForSLES` extension, you can use the `az v
### Red Hat compliance
-Customers who use Azure Hybrid Benefit for BYOS virtual machines for RHEL agree to the standard [legal terms](http://www.redhat.com/licenses/cloud_CSSA/Red_Hat_Cloud_Software_Subscription_Agreement_for_Microsoft_Azure.pdf) and [privacy statement](http://www.redhat.com/licenses/cloud_CSSA/Red_Hat_Privacy_Statement_for_Microsoft_Azure.pdf) associated with the Azure Marketplace RHEL offerings.
+Customers who use Azure Hybrid Benefit BYOS to PAYG capability for RHEL agree to the standard [legal terms](http://www.redhat.com/licenses/cloud_CSSA/Red_Hat_Cloud_Software_Subscription_Agreement_for_Microsoft_Azure.pdf) and [privacy statement](http://www.redhat.com/licenses/cloud_CSSA/Red_Hat_Privacy_Statement_for_Microsoft_Azure.pdf) associated with the Azure Marketplace RHEL offerings.
### SUSE compliance
-If you use Azure Hybrid Benefit for BYOS virtual machines for SLES and want more information about moving from SLES pay-as-you-go to BYOS, or moving from SLES BYOS to pay-as-you-go, see [Azure Hybrid Benefit Support](https://aka.ms/suse-ahb) on the SUSE website.
+If you use Azure Hybrid Benefit BYOS to PAYG capability for SLES and want more information about moving from SLES pay-as-you-go to BYOS, or moving from SLES BYOS to pay-as-you-go, see [Azure Hybrid Benefit Support](https://aka.ms/suse-ahb) on the SUSE website.
## Frequently asked questions
-*Q: What is the licensing cost I pay with Azure Hybrid Benefit for BYOS virtual machines?*
+*Q: What is the licensing cost I pay with Azure Hybrid Benefit BYOS to PAYG capability?*
A: When you start using Azure Hybrid Benefit for BYOS virtual machines, you'll essentially convert the bring-your-own-subscription billing model to a pay-as-you-go billing model. What you pay will be similar to a software subscription cost for pay-as-you-go virtual machines.
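As an illustration of that conversion, switching a BYOS virtual machine to pay-as-you-go billing is done by setting a license type on the VM. The sketch below uses Azure PowerShell; `RHEL_BASE` is only an example value, and the exact license types accepted by this capability should be confirmed in the table that follows and in the linked documentation.

```powershell
# Hedged sketch: convert a RHEL BYOS VM to pay-as-you-go billing by setting a license type.
# "RHEL_BASE" is an example/assumed value; confirm the correct license type for your image.
$vm = Get-AzVM -ResourceGroupName "<rg-name>" -Name "<vm-name>"
$vm.LicenseType = "RHEL_BASE"
Update-AzVM -ResourceGroupName "<rg-name>" -VM $vm
```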
-The following table maps the pay-as-you-go options on Azure and links to pricing information to help you understand the cost associated with Azure Hybrid Benefit for BYOS virtual machines. When you go to the pricing pages, keep the Azure Hybrid Benefit for pay-as-you-go filter off.
+The following table maps the pay-as-you-go options on Azure and links to pricing information to help you understand the cost associated with Azure Hybrid Benefit BYOS to PAYG capability. When you go to the pricing pages, keep the Azure Hybrid Benefit for pay-as-you-go filter off.
| License type | Relevant pay-as-you-go virtual machine image and pricing link | ||||
A: No, you can't. Trying to enter a license type that incorrectly matches the di
If you accidentally enter the wrong license type, remove the billing by changing the license type to empty. Then update your virtual machine to the correct license type to enable Azure Hybrid Benefit.
-*Q: What are the supported versions for RHEL with Azure Hybrid Benefit for BYOS virtual machines?*
+*Q: What are the supported versions for RHEL with Azure Hybrid Benefit BYOS to PAYG capability?*
-A: Azure Hybrid Benefit for BYOS virtual machines supports RHEL versions later than 7.4.
+A: Azure Hybrid Benefit BYOS to PAYG capability supports RHEL versions later than 7.4.
*Q: I've uploaded my own RHEL or SLES image from on-premises (via Azure Migrate, Azure Site Recovery, or otherwise) to Azure. Can I convert the billing on these images from BYOS to pay-as-you-go?* A: Yes, this capability supports images uploaded from on-premises to Azure. Follow the steps in the [Get started](#get-started) section earlier in this article.
-*Q: Can I use Azure Hybrid Benefit for BYOS virtual machines on RHEL and SLES pay-as-you-go Azure Marketplace virtual machines?*
+*Q: Can I use Azure Hybrid Benefit BYOS to PAYG capability on RHEL and SLES pay-as-you-go Azure Marketplace virtual machines?*
A: No, because these virtual machines are already pay-as-you-go. However, with Azure Hybrid Benefit, you can use the license type of `RHEL_BYOS` for RHEL virtual machines and `SLES_BYOS` for conversions of RHEL and SLES pay-as-you-go Azure Marketplace virtual machines. For more information, see [Explore Azure Hybrid Benefit for pay-as-you-go Linux virtual machines](./azure-hybrid-benefit-linux.md).
-*Q: Can I use Azure Hybrid Benefit for BYOS virtual machines on virtual machine scale sets for RHEL and SLES?*
+*Q: Can I use Azure Hybrid Benefit BYOS to PAYG capability on virtual machine scale sets for RHEL and SLES?*
-A: No. Hybrid Benefit for BYOS virtual machines isn't currently available for virtual machine scale sets.
+A: No. Azure Hybrid Benefit BYOS to PAYG capability isn't currently available for virtual machine scale sets.
-*Q: Can I use Azure Hybrid Benefit for BYOS virtual machines on a virtual machine deployed for SQL Server on RHEL images?*
+*Q: Can I use Azure Hybrid Benefit BYOS to PAYG capability on a virtual machine deployed for SQL Server on RHEL images?*
A: No, you can't. There's no plan for supporting these virtual machines.
-*Q: Can I use Azure Hybrid Benefit for BYOS virtual machines on my RHEL for Virtual Datacenters subscription?*
+*Q: Can I use Azure Hybrid Benefit BYOS to PAYG capability on my RHEL for Virtual Datacenters subscription?*
A: No. RHEL for Virtual Datacenters isn't supported on Azure at all, including Azure Hybrid Benefit.
virtual-machines Maintenance And Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-and-updates.md
Most platform updates don't affect customer VMs. When a no-impact update isn't p
Most nonzero-impact maintenance pauses the VM for less than 10 seconds. In certain cases, Azure uses memory-preserving maintenance mechanisms. These mechanisms pause the VM, typically for about 30 seconds, and preserve the memory in RAM. The VM is then resumed, and its clock is automatically synchronized.
-Memory-preserving maintenance works for more than 90 percent of Azure VMs. It doesn't work for G, L, M, N, and H series. Azure increasingly uses live-migration technologies and improves memory-preserving maintenance mechanisms to reduce the pause durations.
+Memory-preserving maintenance works for more than 90 percent of Azure VMs. It doesn't work for G, M, N, and H series. Azure increasingly uses live-migration technologies and improves memory-preserving maintenance mechanisms to reduce the pause durations.
These maintenance operations that don't require a reboot are applied one fault domain at a time. They stop if they receive any warning health signals from platform monitoring tools. Maintenance operations that do not require a reboot may occur simultaneously in paired regions or Availability Zones. For a given change, the deployment is mostly sequenced across Availability Zones and across Region pairs, but there can be overlap at the tail.
For greater control on all maintenance activities including zero-impact and rebo
### Live migration
-Live migration is an operation that doesn't require a reboot and that preserves memory for the VM. It causes a pause or freeze, typically lasting no more than 5 seconds. Except for G, M, N, and H series, all infrastructure as a service (IaaS) VMs, are eligible for live migration. Eligible VMs represent more than 90 percent of the IaaS VMs that are deployed to the Azure fleet.
+Live migration is an operation that doesn't require a reboot and that preserves memory for the VM. It causes a pause or freeze, typically lasting no more than 5 seconds. Except for G, L, M, N, and H series, all infrastructure as a service (IaaS) VMs, are eligible for live migration. Eligible VMs represent more than 90 percent of the IaaS VMs that are deployed to the Azure fleet.
> [!NOTE] > You won't receive a notification in the Azure portal for live migration operations that don't require a reboot. To see a list of live migrations that don't require a reboot, [query for scheduled events](./windows/scheduled-events.md#query-for-events).
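From inside a VM, upcoming maintenance (including the short freezes used for live migration) can be listed by querying the Scheduled Events endpoint of the Azure Instance Metadata Service. A minimal PowerShell sketch:

```powershell
# Query Scheduled Events from inside the VM via the Instance Metadata Service (IMDS).
# The non-routable IMDS address requires the Metadata header and must not go through a proxy.
$uri = "http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01"
$response = Invoke-RestMethod -Uri $uri -Headers @{ Metadata = "true" } -Method Get

# Each event lists an EventType (for example, Freeze or Reboot), the affected Resources,
# and a NotBefore time.
$response.Events | Format-Table EventId, EventType, ResourceType, Resources, NotBefore
```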
virtual-machines Hybrid Use Benefit Licensing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/hybrid-use-benefit-licensing.md
Title: Azure Hybrid Benefit for Windows Server
+ Title: Explore Azure Hybrid Benefit for Windows VMs
description: Learn how to maximize your Windows Software Assurance benefits to bring on-premises licenses to Azure. Previously updated : 4/22/2018 Last updated : 9/28/2022 ms.devlang: azurecli
-# Azure Hybrid Benefit for Windows Server
+# Explore Azure Hybrid Benefit for Windows VMs
For customers with Software Assurance, Azure Hybrid Benefit for Windows Server allows you to use your on-premises Windows Server licenses and run Windows virtual machines on Azure at a reduced cost. You can use Azure Hybrid Benefit for Windows Server to deploy new virtual machines with Windows OS. This article goes over the steps on how to deploy new VMs with Azure Hybrid Benefit for Windows Server and how you can update existing running VMs. For more information about Azure Hybrid Benefit for Windows Server licensing and cost savings, see the [Azure Hybrid Benefit for Windows Server licensing page](https://azure.microsoft.com/pricing/hybrid-use-benefit/).
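For example, an existing Windows VM can be switched to Azure Hybrid Benefit by setting its license type to `Windows_Server`. A minimal Azure PowerShell sketch with placeholder names:

```powershell
# Enable Azure Hybrid Benefit on an existing Windows VM by setting its license type.
$vm = Get-AzVM -ResourceGroupName "<rg-name>" -Name "<vm-name>"
$vm.LicenseType = "Windows_Server"
Update-AzVM -ResourceGroupName "<rg-name>" -VM $vm

# Verify the change; LicenseType should now report Windows_Server.
(Get-AzVM -ResourceGroupName "<rg-name>" -Name "<vm-name>").LicenseType
```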
virtual-machines Image Builder Virtual Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/image-builder-virtual-desktop.md
New-AzResourceGroup -Name $imageResourceGroup -Location $location
'Az.ImageBuilder', 'Az.ManagedServiceIdentity' | ForEach-Object {Install-Module -Name $_ -AllowPrerelease} # Create the identity
- New-AzUserAssignedIdentity -ResourceGroupName $imageResourceGroup -Name $identityName
+ New-AzUserAssignedIdentity -ResourceGroupName $imageResourceGroup -Name $identityName -Location $location
$identityNameResourceId=$(Get-AzUserAssignedIdentity -ResourceGroupName $imageResourceGroup -Name $identityName).Id $identityNamePrincipalId=$(Get-AzUserAssignedIdentity -ResourceGroupName $imageResourceGroup -Name $identityName).PrincipalId
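After creating the identity, the underlying guide grants it rights to manage images in the resource group. The sketch below is a simplified assumption: it uses the built-in Contributor role scoped to the image resource group, whereas the full article defines a narrower custom image-creation role; the subscription ID is a placeholder.

```powershell
# Hedged sketch: grant the new user-assigned identity rights on the image resource group.
# "Contributor" is a simplified placeholder; the article defines a narrower custom role.
New-AzRoleAssignment -ObjectId $identityNamePrincipalId `
    -RoleDefinitionName "Contributor" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/$imageResourceGroup"
```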
virtual-machines Hb Hc Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/hpc/hb-hc-known-issues.md
This article attempts to list recent common issues and their solutions when using the [H-series](../../sizes-hpc.md) and [N-series](../../sizes-gpu.md) HPC and GPU VMs.
-## Memory Capacity on Standard_HB120rs_v2
-As of the week of December 6, 2021 we've temporarily reducing the amount of memory (RAM) exposed to the Standard_HB120rs_v2 VM size, otherwise known as [HBv2](../../hbv2-series.md). We've reducing the memory footprint to 432 GB from its current value of 456 GB (a 5.2% reduction). This reduction is temporary and the full memory capacity should be restored in early 2022. We've made this change to ensure to address an issue that can result in long VM deployment times or VM deployments for which not all devices function correctly. The reduction in memory capacity doesn't affect VM performance.
 - ## Cache topology on Standard_HB120rs_v3 `lstopo` displays incorrect cache topology on the Standard_HB120rs_v3 VM size. It may display that there's only 32 MB L3 per NUMA. However in practice, there is indeed 120 MB L3 per NUMA as expected since the same 480 MB of L3 to the entire VM is available as with the other constrained-core HBv3 VM sizes. This is a cosmetic error in displaying the correct value, which should not impact workloads.
virtual-machines Expose Sap Process Orchestration On Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/expose-sap-process-orchestration-on-azure.md
Title: Exposing SAP legacy middleware with Azure PaaS securely
+ Title: Expose SAP legacy middleware securely with Azure PaaS
description: Learn about securely exposing SAP Process Orchestration on Azure.
Last updated 07/19/2022
-# Exposing SAP legacy middleware with Azure PaaS securely
+# Expose SAP legacy middleware securely with Azure PaaS
-Enabling internal systems and external partners to interact with SAP backends is a common requirement. Existing SAP landscapes often rely on the legacy middleware [SAP Process Orchestration](https://help.sap.com/docs/SAP_NETWEAVER_750/bbd7c67c5eb14835843976b790024ec6/8e995afa7a8d467f95a473afafafa07e.html)(PO) or [Process Integration](https://help.sap.com/docs/SAP_NETWEAVER_750/bbd7c67c5eb14835843976b790024ec6/8e995afa7a8d467f95a473afafafa07e.html)(PI) for their integration and transformation needs. For simplicity the term "SAP Process Orchestration" will be used in this article but associated with both offerings.
+Enabling internal systems and external partners to interact with SAP back ends is a common requirement. Existing SAP landscapes often rely on the legacy middleware [SAP Process Orchestration (PO)](https://help.sap.com/docs/SAP_NETWEAVER_750/bbd7c67c5eb14835843976b790024ec6/8e995afa7a8d467f95a473afafafa07e.html) or [Process Integration (PI)](https://help.sap.com/docs/SAP_NETWEAVER_750/bbd7c67c5eb14835843976b790024ec6/8e995afa7a8d467f95a473afafafa07e.html) for their integration and transformation needs. For simplicity, this article uses the term *SAP Process Orchestration* to refer to both offerings.
-This article describes configuration options on Azure with emphasis on Internet-facing implementations.
+This article describes configuration options on Azure, with emphasis on internet-facing implementations.
> [!NOTE]
-> SAP mentions [SAP IntegrationSuite](https://discovery-center.cloud.sap/serviceCatalog/integration-suite?region=all) - specifically [SAP CloudIntegration](https://help.sap.com/docs/CLOUD_INTEGRATION/368c481cd6954bdfa5d0435479fd4eaf/9af2f05c7eb04457aee5906fd8553e00.html) - running on [Business TechnologyPlatform](https://www.sap.com/products/business-technology-platform.html)(BTP) as the successor for SAP PO/PI. Both the BTP platform and the services are available on Azure. For more information, see [SAP DiscoveryCenter](https://discovery-center.cloud.sap/serviceCatalog/integration-suite?region=all&tab=service_plan&provider=azure). See SAP OSS note [1648480](https://launchpad.support.sap.com/#/notes/1648480) for more info about the maintenance support timeline for the legacy component.
+> SAP mentions [SAP Integration Suite](https://discovery-center.cloud.sap/serviceCatalog/integration-suite?region=all)--specifically, [SAP Cloud Integration](https://help.sap.com/docs/CLOUD_INTEGRATION/368c481cd6954bdfa5d0435479fd4eaf/9af2f05c7eb04457aee5906fd8553e00.html)--running on [Business Technology Platform (BTP)](https://www.sap.com/products/business-technology-platform.html) as the successor for SAP PO and PI. Both the BTP platform and the services are available on Azure. For more information, see [SAP Discovery Center](https://discovery-center.cloud.sap/serviceCatalog/integration-suite?region=all&tab=service_plan&provider=azure). For more info about the maintenance support timeline for the legacy components, see SAP OSS note [1648480](https://launchpad.support.sap.com/#/notes/1648480).
## Overview
-Existing implementations based on SAP middleware often relied on SAP's proprietary dispatching technology called [SAP WebDispatcher](https://help.sap.com/docs/ABAP_PLATFORM_NEW/683d6a1797a34730a6e005d1e8de6f22/488fe37933114e6fe10000000a421937.html). It operates on layer 7 of the [OSI model](https://en.wikipedia.org/wiki/OSI_model), acts as a reverse-proxy and addresses load balancing needs for the downstream SAP application workloads like SAP ERP, SAP Gateway, or SAP Process Orchestration.
+Existing implementations based on SAP middleware have often relied on SAP's proprietary dispatching technology called [SAP Web Dispatcher](https://help.sap.com/docs/ABAP_PLATFORM_NEW/683d6a1797a34730a6e005d1e8de6f22/488fe37933114e6fe10000000a421937.html). This technology operates on layer 7 of the [OSI model](https://en.wikipedia.org/wiki/OSI_model). It acts as a reverse proxy and addresses load-balancing needs for downstream SAP application workloads like SAP Enterprise Resource Planning (ERP), SAP Gateway, or SAP Process Orchestration.
-Dispatching approaches range from traditional reverse proxies like Apache, to Platform-as-a-Service (PaaS) options like the [Azure Load Balancer](../../../load-balancer/load-balancer-overview.md), or the opinionated SAP WebDispatcher. The overall concepts described in this article apply to the options mentioned. Have a look at SAP's [wiki](https://wiki.scn.sap.com/wiki/display/SI/Can+I+use+a+different+load+balancer+instead+of+SAP+Web+Dispatcher) for their guidance on using non-SAP load balancers.
+Dispatching approaches include traditional reverse proxies like Apache, platform as a service (PaaS) options like [Azure Load Balancer](../../../load-balancer/load-balancer-overview.md), and the opinionated SAP Web Dispatcher. The overall concepts described in this article apply to the options mentioned. For guidance on using non-SAP load balancers, see SAP's [wiki](https://wiki.scn.sap.com/wiki/display/SI/Can+I+use+a+different+load+balancer+instead+of+SAP+Web+Dispatcher).
> [!NOTE]
-> All described setups in this article assume a hub-spoke networking topology, where shared services are deployed into the hub. Given the criticality of SAP, even more isolation may be desirable. For more information, see our SAP perimeter-network design (also known as DMZ) [guide](/azure/architecture/guide/sap/sap-internet-inbound-outbound#network-design).
+> All described setups in this article assume a hub-and-spoke network topology, where shared services are deployed into the hub. Based on the criticality of SAP, you might need even more isolation. For more information, see the SAP [design guide for perimeter networks](/azure/architecture/guide/sap/sap-internet-inbound-outbound#network-design).
-## Primary Azure services used
+## Primary Azure services
-[Azure Application Gateway](../../../application-gateway/how-application-gateway-works.md) handles public [internet-based](../../../application-gateway/configuration-front-end-ip.md) and/or [internal private](../../../application-gateway/configuration-front-end-ip.md) http routing and [encrypted tunneling across Azure subscriptions](../../../application-gateway/private-link.md), [security](../../../application-gateway/features.md), and [auto-scaling](../../../application-gateway/application-gateway-autoscaling-zone-redundant.md) for instance. Azure Application Gateway is focused on exposing web applications, hence offers a Web Application Firewall. Workloads in other virtual networks (VNet) that shall communicate with SAP through the Azure Application Gateway can be connected via [private links](../../../application-gateway/private-link-configure.md) even cross-tenant.
+[Azure Application Gateway](../../../application-gateway/how-application-gateway-works.md) handles public [internet-based](../../../application-gateway/configuration-front-end-ip.md) and [internal private](../../../application-gateway/configuration-front-end-ip.md) HTTP routing, along with [encrypted tunneling across Azure subscriptions](../../../application-gateway/private-link.md). Examples include [security](../../../application-gateway/features.md) and [autoscaling](../../../application-gateway/application-gateway-autoscaling-zone-redundant.md).
+Azure Application Gateway is focused on exposing web applications, so it offers a web application firewall (WAF). Workloads in other virtual networks that will communicate with SAP through Azure Application Gateway can be connected via [private links](../../../application-gateway/private-link-configure.md), even across tenants.
-[Azure Firewall](../../../firewall/overview.md) handles public internet-based and/or internal private routing for traffic types on Layer 4-7 of the OSI model. It offers filtering and threat intelligence, which feeds directly from Microsoft Cyber Security.
-[Azure API Management](../../../api-management/api-management-key-concepts.md) handles public internet-based and/or internal private routing specifically for APIs. It offers request throttling, usage quota and limits, governance features like policies, and API keys to slice and dice services per client.
+[Azure Firewall](../../../firewall/overview.md) handles public internet-based and internal private routing for traffic types on layers 4 to 7 of the OSI model. It offers filtering and threat intelligence that feed directly from Microsoft Security.
-[VPN Gateway](../../../vpn-gateway/vpn-gateway-about-vpngateways.md) and [Azure ExpressRoute](../../../expressroute/expressroute-introduction.md) serve as entry points to on-premises networks. Both components are abbreviated on the diagrams as VPN and XR.
+[Azure API Management](../../../api-management/api-management-key-concepts.md) handles public internet-based and internal private routing specifically for APIs. It offers request throttling, usage quota and limits, governance features like policies, and API keys to break down services per client.
+
+[Azure VPN Gateway](../../../vpn-gateway/vpn-gateway-about-vpngateways.md) and [Azure ExpressRoute](../../../expressroute/expressroute-introduction.md) serve as entry points to on-premises networks. They're abbreviated in the diagrams as VPN and XR.
## Setup considerations
-Integration architecture needs differ depending on the interface used. SAP-proprietary technologies like [intermediate Document framework](https://help.sap.com/docs/SAP_DATA_SERVICES/e54136ab6a4a43e6a370265bf0a2d744/577710e16d6d1014b3fc9283b0e91070.html) (ALE/iDoc), [Business Application Programming Interface](https://help.sap.com/docs/SAP_ERP/c5a8d544836649a1af6eaef358d08e3f/4dc89000ebfc5a9ee10000000a42189b.html) (BAPI), [transactional Remote Function Calls](https://help.sap.com/docs/SAP_NETWEAVER_700/108f625f6c53101491e88dc4cf51a6cc/4899b963ee2b73e7e10000000a42189b.html) (tRFC), or plain [RFC](https://help.sap.com/docs/SAP_ERP/be79bfef64c049f88262cf6cb5de1c1f/0502cbfa1c2f184eaa6ba151d1aaf4fe.html) require a specific runtime environment and operate on layer 4-7 of the OSI model, unlike modern APIs that typically rely on http-based communication (layer 7 of the OSI model). Because of that the interfaces can't be treated the same way.
+Integration architecture needs differ, depending on the interface that an organization uses. SAP-proprietary technologies like [Intermediate Document (IDoc) framework](https://help.sap.com/docs/SAP_DATA_SERVICES/e54136ab6a4a43e6a370265bf0a2d744/577710e16d6d1014b3fc9283b0e91070.html), [Business Application Programming Interface (BAPI)](https://help.sap.com/docs/SAP_ERP/c5a8d544836649a1af6eaef358d08e3f/4dc89000ebfc5a9ee10000000a42189b.html), [transactional Remote Function Calls (tRFCs)](https://help.sap.com/docs/SAP_NETWEAVER_700/108f625f6c53101491e88dc4cf51a6cc/4899b963ee2b73e7e10000000a42189b.html), or plain [RFCs](https://help.sap.com/docs/SAP_ERP/be79bfef64c049f88262cf6cb5de1c1f/0502cbfa1c2f184eaa6ba151d1aaf4fe.html) require a specific runtime environment. They operate on layers 4 to 7 of the OSI model, unlike modern APIs that typically rely on HTTP-based communication (layer 7 of the OSI model). Because of that, the interfaces can't be treated the same way.
-This article focuses on modern APIs and http (that includes integration scenarios like [AS2](https://wikipedia.org/wiki/AS2)). [FTP](https://wikipedia.org/wiki/File_Transfer_Protocol) will serve as an example to handle `non-http` integration needs. For more information about the different Microsoft load balancing solutions, see [this article](/azure/architecture/guide/technology-choices/load-balancing-overview).
+This article focuses on modern APIs and HTTP, including integration scenarios like [Applicability Statement 2 (AS2)](https://wikipedia.org/wiki/AS2). [File Transfer Protocol (FTP)](https://wikipedia.org/wiki/File_Transfer_Protocol) serves as an example to handle non-HTTP integration needs. For more information about Microsoft load-balancing solutions, see [Load-balancing options](/azure/architecture/guide/technology-choices/load-balancing-overview).
> [!NOTE]
-> SAP publishes dedicated [connectors](https://support.sap.com/en/product/connectors.html) for their proprietary interfaces. Check SAP's documentation for [Java](https://support.sap.com/en/product/connectors/jco.html), and [.NET](https://support.sap.com/en/product/connectors/msnet.html) for example. They are supported by [Microsoft Gateways](../../../data-factory/connector-sap-table.md?tabs=data-factory#prerequisites) too. Be aware that iDocs can also be posted via [http](https://blogs.sap.com/2012/01/14/post-idoc-to-sap-erp-over-http-from-any-application/).
+> SAP publishes dedicated [connectors](https://support.sap.com/en/product/connectors.html) for its proprietary interfaces. Check SAP's documentation for [Java](https://support.sap.com/en/product/connectors/jco.html) and [.NET](https://support.sap.com/en/product/connectors/msnet.html), for example. They're supported by [Microsoft gateways](../../../data-factory/connector-sap-table.md?tabs=data-factory#prerequisites) too. Be aware that IDocs can also be posted via [HTTP](https://blogs.sap.com/2012/01/14/post-idoc-to-sap-erp-over-http-from-any-application/).
-Security concerns require the usage of [Firewalls](../../../firewall/features.md) for lower-level protocols and [Web Application Firewalls](../../../web-application-firewall/overview.md) (WAF) to address http-based traffic with [Transport Layer Security](https://wikipedia.org/wiki/Transport_Layer_Security) (TLS). To be effective, TLS sessions need to be terminated at the WAF level. Supporting zero-trust approaches, it's advisable to [re-encrypt](../../../application-gateway/ssl-overview.md) again afterwards to ensure end-to-encryption.
+Security concerns require the use of [firewalls](../../../firewall/features.md) for lower-level protocols and [WAFs](../../../web-application-firewall/overview.md) to address HTTP-based traffic with [Transport Layer Security (TLS)](https://wikipedia.org/wiki/Transport_Layer_Security). To be effective, TLS sessions need to be terminated at the WAF level. To support zero-trust approaches, we recommend that you [re-encrypt](../../../application-gateway/ssl-overview.md) the traffic afterward to provide end-to-end encryption.
-Integration protocols such as AS2 may raise alerts by standard WAF rules. We recommend using our [Application Gateway WAF triage workbook](https://github.com/Azure/Azure-Network-Security/tree/master/Azure%20WAF/Workbook%20-%20AppGw%20WAF%20Triage%20Workbook) to identify and better understand why the rule is triggered, so you can remediate effectively and securely. The standard rules are provided by Open Web Application Security Project (OWASP). For more information, see the [SAP on Azure webcast](https://www.youtube.com/watch?v=kAnWTqKlGGo) for a detailed video session on this topic with emphasis on SAP Fiori exposure.
+Integration protocols such as AS2 can trigger alerts from standard WAF rules. We recommend using the [Application Gateway WAF triage workbook](https://github.com/Azure/Azure-Network-Security/tree/master/Azure%20WAF/Workbook%20-%20AppGw%20WAF%20Triage%20Workbook) to identify and better understand why a rule is triggered, so you can remediate effectively and securely. The Open Web Application Security Project (OWASP) provides the standard rules. For a detailed video session on this topic, with emphasis on SAP Fiori exposure, see the [SAP on Azure webcast](https://www.youtube.com/watch?v=kAnWTqKlGGo).
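The following sketch, provided for illustration only, shows one way to keep the OWASP rules in Detection mode while you triage such alerts. It assumes an existing Application Gateway that uses the legacy WAF configuration; the gateway and resource group names are placeholders.

```powershell
# Minimal sketch, assuming an existing WAF-enabled Application Gateway named 'appgw-sap-po'
# in resource group 'rg-sap-integration' (placeholder names) that uses the legacy WAF
# configuration rather than a WAF policy.
$appGw = Get-AzApplicationGateway -Name 'appgw-sap-po' -ResourceGroupName 'rg-sap-integration'

# Detection mode logs OWASP rule matches for AS2 or SAP Fiori traffic without blocking it,
# which helps while you triage alerts with the workbook. Switch back to Prevention when done.
Set-AzApplicationGatewayWebApplicationFirewallConfiguration -ApplicationGateway $appGw `
    -Enabled $true -FirewallMode 'Detection' -RuleSetType 'OWASP' -RuleSetVersion '3.1'

Set-AzApplicationGateway -ApplicationGateway $appGw
```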
-In addition, security can be further enhanced with [mutual TLS](../../../application-gateway/mutual-authentication-overview.md) (mTLS) - also referred to as mutual authentication. Unlike normal TLS, it also verifies the client identity.
+You can further enhance security by using [mutual TLS (mTLS)](../../../application-gateway/mutual-authentication-overview.md), which is also called mutual authentication. Unlike normal TLS, it verifies the client identity.
> [!NOTE]
-> VM pools require a load balancer. For better readability it is not shown explicitly on the diagrams below.
+> Virtual machine (VM) pools require a load balancer. For better readability, the diagrams in this article don't show a load balancer.
> [!NOTE]
-> In case SAP specific balancing features provided by the SAP WebDispatcher aren't required, they can be replaced by an Azure Load Balancer giving the benefit of a managed PaaS offering compared to an Infrastructure-as-a-Service setup.
+> If you don't need SAP-specific balancing features that SAP Web Dispatcher provides, you can replace them with Azure Load Balancer. This replacement gives the benefit of a managed PaaS offering instead of an infrastructure as a service (IaaS) setup.
-## Scenario 1.A: Inbound http connectivity focused
+## Scenario: Inbound HTTP connectivity focused
-The SAP WebDispatcher **doesn't** offer a Web Application Firewall. Because of that Azure Application Gateway is recommended for a more secure setup. The WebDispatcher and "Process Orchestration" remain in charge to protect the SAP backend from request overload with [sizing guidance](https://help.sap.com/docs/ABAP_PLATFORM_NEW/683d6a1797a34730a6e005d1e8de6f22/489ab14248c673e8e10000000a42189b.html) and [concurrent request limits](https://help.sap.com/docs/ABAP_PLATFORM/683d6a1797a34730a6e005d1e8de6f22/3a450194bf9c4797afb6e21b4b22ad2a.html). There's **no** throttling capability available in the SAP workloads.
+SAP Web Dispatcher doesn't offer a WAF. Because of that, we recommend Azure Application Gateway for a more secure setup. SAP Web Dispatcher and Process Orchestration remain responsible for protecting the SAP back end from request overload, based on the [sizing guidance](https://help.sap.com/docs/ABAP_PLATFORM_NEW/683d6a1797a34730a6e005d1e8de6f22/489ab14248c673e8e10000000a42189b.html) and [concurrent request limits](https://help.sap.com/docs/ABAP_PLATFORM/683d6a1797a34730a6e005d1e8de6f22/3a450194bf9c4797afb6e21b4b22ad2a.html). No throttling capability is available in the SAP workloads.
-Unintentional access can be avoided through [Access Control Lists](https://help.sap.com/docs/ABAP_PLATFORM_NEW/683d6a1797a34730a6e005d1e8de6f22/0c39b84c3afe4d2d9f9f887a32914ecd.html) on the SAP WebDispatcher.
+You can avoid unintentional access through [access control lists](https://help.sap.com/docs/ABAP_PLATFORM_NEW/683d6a1797a34730a6e005d1e8de6f22/0c39b84c3afe4d2d9f9f887a32914ecd.html) on SAP Web Dispatcher.
-One of the scenarios for SAP Process Orchestration communication is inbound flow. Traffic may originate from On-premises, external apps/users or an internal system. See below an example with focus on https.
+One of the scenarios for SAP Process Orchestration communication is inbound flow. Traffic might originate from on-premises, external apps or users, or an internal system. The following example focuses on HTTPS.
-## Scenario 1.B: Outbound http/ftp connectivity focused
+## Scenario: Outbound HTTP/FTP connectivity focused
-For the reverse communication direction "Process Orchestration" may leverage the VNet routing to reach workloads on-premises or Internet-based targets via the Internet breakout. Azure Application Gateway acts as a reverse proxy in such scenarios. For `non-http` communication, consider adding Azure Firewall. For more information, see [Scenario 4](#scenario-4-file-based) and [Comparing Gateway components](#comparing-gateway-setups).
+For the reverse communication direction, SAP Process Orchestration can use virtual network routing to reach on-premises workloads or internet-based targets via the internet breakout. Azure Application Gateway acts as a reverse proxy in such scenarios. For non-HTTP communication, consider adding Azure Firewall. For more information, see [Scenario: File based](#scenario-file-based) and [Comparison of Gateway components](#comparison-of-gateway-setups) later in this article.
-The outbound scenario below shows two possible methods. One using HTTPS via the Azure Application Gateway calling a Webservice (for example SOAP adapter) and the other using SFTP (FTP over SSH) via the Azure Firewall transferring files to a business partner's S/FTP server.
+The following outbound scenario shows two possible methods. One uses HTTPS via Azure Application Gateway to call a web service (for example, a SOAP adapter). The other uses FTP over SSH (SFTP) via Azure Firewall to transfer files to a business partner's SFTP server.
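As a hedged illustration of the second path, the following sketch adds a classic network rule that lets the integration subnet open outbound SFTP sessions through Azure Firewall. The firewall name, resource group, address ranges, and partner IP address are assumptions.

```powershell
# Minimal sketch, assuming a classic-rules Azure Firewall named 'azfw-hub' in resource group
# 'rg-network'. The source range 10.10.2.0/24 stands in for the Process Orchestration subnet,
# and 203.0.113.10 for the partner's SFTP server; all of these are placeholders.
$azFw = Get-AzFirewall -Name 'azfw-hub' -ResourceGroupName 'rg-network'

$sftpRule = New-AzFirewallNetworkRule -Name 'allow-outbound-sftp' -Protocol 'TCP' `
    -SourceAddress '10.10.2.0/24' -DestinationAddress '203.0.113.10' -DestinationPort '22'

$ruleCollection = New-AzFirewallNetworkRuleCollection -Name 'sap-po-outbound' `
    -Priority 200 -Rule $sftpRule -ActionType 'Allow'

# Attach the collection to the firewall and push the change.
$azFw.AddNetworkRuleCollection($ruleCollection)
Set-AzFirewall -AzureFirewall $azFw
```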
:::image type="content" source="media/expose-sap-process-orchestration-on-azure/outbound-1b.png" alt-text="Diagram that shows an outbound scenario with SAP Process Orchestration on Azure.":::
-## Scenario 2: API Management focused
+## Scenario: API Management focused
-Compared to scenario 1, the introduction of [Azure API Management (APIM) in internal mode](../../../api-management/api-management-using-with-internal-vnet.md) (private IP only and VNet integration) adds built-in capabilities like:
+Compared to the scenarios for inbound and outbound connectivity, the introduction of [Azure API Management in internal mode](../../../api-management/api-management-using-with-internal-vnet.md) (private IP only and virtual network integration) adds built-in capabilities like:
-- [Throttling](../../../api-management/api-management-sample-flexible-throttling.md),-- [API governance](/azure/architecture/example-scenario/devops/automated-api-deployments-apiops),-- Additional security options like [modern authentication flows](../../../api-management/api-management-howto-protect-backend-with-aad.md),-- [Azure Active Directory](../../../active-directory/develop/active-directory-v2-protocols.md) integration and-- The opportunity to add the SAP APIs to a central company-wide API solution.
+- [Throttling](../../../api-management/api-management-sample-flexible-throttling.md).
+- [API governance](/azure/architecture/example-scenario/devops/automated-api-deployments-apiops).
+- Additional security options like [modern authentication flows](../../../api-management/api-management-howto-protect-backend-with-aad.md).
+- [Azure Active Directory](../../../active-directory/develop/active-directory-v2-protocols.md) integration.
+- The opportunity to add SAP APIs to a central API solution across the company.
:::image type="content" source="media/expose-sap-process-orchestration-on-azure/inbound-api-management-2.png" alt-text="Diagram that shows an inbound scenario with Azure API Management and SAP Process Orchestration on Azure.":::
-When a web application firewall isn't required, Azure API Management can be deployed in external mode (using a public IP). That simplifies the setup, while keeping the throttling and API governance capabilities. [Basic protection](/azure/cloud-services/cloud-services-configuration-and-management-faq#what-are-the-features-and-capabilities-that-azure-basic-ips-ids-and-ddos-provides-) is implemented for all Azure PaaS offerings.
+When you don't need a WAF, you can deploy Azure API Management in external mode by using a public IP address. That deployment simplifies the setup while keeping the throttling and API governance capabilities. [Basic protection](/azure/cloud-services/cloud-services-configuration-and-management-faq#what-are-the-features-and-capabilities-that-azure-basic-ips-ids-and-ddos-provides-) is implemented for all Azure PaaS offerings.
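To make the throttling capability concrete, the following sketch applies a simple rate-limit policy to an imported SAP API. The API Management instance name, resource group, API ID, and limits are placeholder assumptions.

```powershell
# Minimal sketch, assuming an API Management instance 'apim-sap' in resource group
# 'rg-sap-integration' and an imported SAP API with the ID 'sap-odata' (placeholder names).
$apimContext = New-AzApiManagementContext -ResourceGroupName 'rg-sap-integration' -ServiceName 'apim-sap'

# Cap each subscription at 100 calls per 60 seconds before requests reach SAP.
$policy = @'
<policies>
  <inbound>
    <base />
    <rate-limit calls="100" renewal-period="60" />
  </inbound>
  <backend><base /></backend>
  <outbound><base /></outbound>
  <on-error><base /></on-error>
</policies>
'@

Set-AzApiManagementPolicy -Context $apimContext -ApiId 'sap-odata' -Policy $policy
```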
:::image type="content" source="media/expose-sap-process-orchestration-on-azure/inbound-api-management-ext-2.png" alt-text="Diagram that shows an inbound scenario with Azure API Management in external mode and SAP Process Orchestration.":::
-## Scenario 3: Global reach
+## Scenario: Global reach
-Azure Application Gateway is a region-bound service. Compared to the above scenarios [Azure Front Door](../../../frontdoor/front-door-overview.md) ensures cross-region global routing including a web application firewall. Look at [this comparison](/azure/architecture/guide/technology-choices/load-balancing-overview) for more details about the differences.
+Azure Application Gateway is a region-bound service. Compared to the preceding scenarios, [Azure Front Door](../../../frontdoor/front-door-overview.md) ensures cross-region global routing, including a web application firewall. For details about the differences, see [this comparison](/azure/architecture/guide/technology-choices/load-balancing-overview).
-> [!NOTE]
-> Condensed SAP WebDispatcher, Process Orchestration, and backend into single image for better readability.
+The following diagram condenses SAP Web Dispatcher, SAP Process Orchestration, and the back end into a single image for better readability.
:::image type="content" source="media/expose-sap-process-orchestration-on-azure/inbound-global-3.png" alt-text="Diagram that shows a global reach scenario with SAP Process Orchestration on Azure.":::
-## Scenario 4: File-based
+## Scenario: File-based
-`Non-http` protocols like FTP can't be addressed with Azure API Management, Application Gateway, or Front Door like shown in scenarios beforehand. Instead the managed Azure Firewall or equivalent Network Virtual Appliance (NVA) takes over the role of securing inbound requests.
+Non-HTTP protocols like FTP can't be addressed with Azure API Management, Application Gateway, or Azure Front Door as shown in the preceding scenarios. Instead, the managed Azure Firewall instance or the equivalent network virtual appliance (NVA) takes over the role of securing inbound requests.
-Files need to be stored before they can be processed by SAP. It's recommended to use [SFTP](../../../storage/blobs/secure-file-transfer-protocol-support.md). Azure Blob Storage supports SFTP natively.
+Files need to be stored before SAP can process them. We recommend that you use [SFTP](../../../storage/blobs/secure-file-transfer-protocol-support.md). Azure Blob Storage supports SFTP natively.
> [!NOTE]
-> At the time of writing this article [the feature](../../../storage/blobs/secure-file-transfer-protocol-support.md) is still in preview.
+> [The Azure Blob Storage SFTP feature](../../../storage/blobs/secure-file-transfer-protocol-support.md) is currently in preview.
-There are alternative SFTP options available on the Azure Marketplace if necessary.
+Alternative SFTP options are available in Azure Marketplace if necessary.
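As a minimal sketch of the recommended approach, the following command creates a Blob Storage account with the SFTP preview enabled. The names and region are placeholders, and the `-EnableSftp` switch assumes an Az.Storage module version that already exposes the preview capability.

```powershell
# Minimal sketch with placeholder names. A hierarchical namespace is required before SFTP can
# be enabled on the Blob Storage account; -EnableSftp assumes a recent Az.Storage module that
# supports the preview feature.
New-AzStorageAccount -ResourceGroupName 'rg-sap-integration' `
    -Name 'stsapfiletransfer01' `
    -Location 'westeurope' `
    -SkuName 'Standard_LRS' `
    -Kind 'StorageV2' `
    -EnableHierarchicalNamespace $true `
    -EnableSftp $true
```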
-See below a variation with integration targets externally and on-premises. Different flavors of secure FTP illustrate the communication path.
+The following diagram shows a variation of this scenario with integration targets externally and on-premises. Different types of secure FTP illustrate the communication path.
:::image type="content" source="media/expose-sap-process-orchestration-on-azure/file-azure-firewall-4.png" alt-text="Diagram that shows a file-based scenario with on-premises file share and external party using SAP Process Orchestration on Azure.":::
-For more information, see the [Azure Files docs](../../../storage/files/files-nfs-protocol.md) for insights into NFS file shares as alternative to Blob Storage.
+For insights into Network File System (NFS) file shares as an alternative to Blob Storage, see [NFS file shares in Azure Files](../../../storage/files/files-nfs-protocol.md).
-## Scenario 5: SAP RISE specific
+## Scenario: SAP RISE specific
-SAP RISE deployments are technically identical to the scenarios described before with the exception that the target SAP workload is managed by SAP itself. The concepts described can be applied here as well.
+SAP RISE deployments are technically identical to the scenarios described earlier, with the exception that SAP itself manages the target SAP workload. The described concepts can be applied here.
-Below diagrams describe two different setups as examples. For more information, see our [SAP RISE reference guide](../../../virtual-machines/workloads/sap/sap-rise-integration.md#virtual-network-peering-with-sap-riseecs).
+The following diagrams show two setups as examples. For more information, see the [SAP RISE reference guide](../../../virtual-machines/workloads/sap/sap-rise-integration.md#virtual-network-peering-with-sap-riseecs).
> [!IMPORTANT]
-> Contact SAP to ensure communications ports for your scenario are allowed and opened in Network Security Groups.
+> Contact SAP to ensure that communication ports for your scenario are allowed and opened in network security groups (NSGs). The sketch after this note shows how to review and adjust the rules on your own side of the peering.
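The following sketch illustrates adding such a rule on the customer side of the peering. The NSG name, resource group, source range, and port are assumptions; the ports that actually need to be open depend on your scenario and on SAP's guidance.

```powershell
# Minimal sketch, assuming an NSG named 'nsg-sap-po' in resource group 'rg-sap-integration'
# on your side of the peering (placeholder names). The rule allows HTTPS from an assumed
# integration subnet; adjust ports and ranges to what SAP confirms for your scenario.
$nsg = Get-AzNetworkSecurityGroup -Name 'nsg-sap-po' -ResourceGroupName 'rg-sap-integration'

Add-AzNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg `
    -Name 'allow-https-from-integration' `
    -Access Allow -Direction Inbound -Priority 210 -Protocol Tcp `
    -SourceAddressPrefix '10.10.2.0/24' -SourcePortRange '*' `
    -DestinationAddressPrefix '*' -DestinationPortRange '443'

Set-AzNetworkSecurityGroup -NetworkSecurityGroup $nsg
```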
-### Scenario 5.A: Http inbound
+### HTTP inbound
-In the first setup, the integration layer including "SAP Process Orchestration" and the complete inbound path is governed by the customer. Only the final SAP target runs on the RISE subscription. Communication to the RISE hosted workload is configured through virtual network peering - typically over the hub. A potential integration could be iDocs posted to the SAP ERP Webservice `/sap/bc/idoc_xml` by an external party.
+In the first setup, the customer governs the integration layer, including SAP Process Orchestration and the complete inbound path. Only the final SAP target runs on the RISE subscription. Communication to the RISE-hosted workload is configured through virtual network peering, typically over the hub. A potential integration could be IDocs posted to the SAP ERP web service `/sap/bc/idoc_xml` by an external party.
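A hedged sketch of such an IDoc post follows. The host name, SAP client number, file name, and authentication details are placeholders; the service user needs authorization for inbound IDoc processing.

```powershell
# Minimal sketch of posting an IDoc XML payload to the SAP ICF service /sap/bc/idoc_xml
# through the published inbound endpoint. Host name, client number, and file name are placeholders.
$idocXml    = Get-Content -Path '.\ORDERS05.xml' -Raw
$credential = Get-Credential    # basic authentication for the SAP service user (assumption)

Invoke-RestMethod -Method Post `
    -Uri 'https://api.contoso.com/sap/bc/idoc_xml?sap-client=100' `
    -Credential $credential `
    -ContentType 'application/xml' `
    -Body $idocXml
```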
:::image type="content" source="media/expose-sap-process-orchestration-on-azure/rise-5a.png" alt-text="Diagram that shows an inbound scenario with Azure API Management and self-hosted SAP Process Orchestration on Azure in the RISE context.":::
-This second example shows a setup, where SAP RISE runs the whole integration chain except for the API Management layer.
+This second example shows a setup where SAP RISE runs the whole integration chain, except for the API Management layer.
:::image type="content" source="media/expose-sap-process-orchestration-on-azure/rise-api-management-5a.png" alt-text="Diagram that shows an inbound scenario with Azure API Management and SAP-hosted SAP Process Orchestration on Azure in the RISE context.":::
-### Scenario 5.B: File outbound
+### File outbound
-In this scenario, the SAP-managed "Process Orchestration" instance writes files to the customer managed file share on Azure or to a workload sitting on-premises. The breakout needs to be handled by the customer.
+In this scenario, the SAP-managed Process Orchestration instance writes files to the customer-managed file share on Azure or to a workload sitting on-premises. The customer handles the breakout.
> [!NOTE]
-> At the time of writing this article the [Azure Blob Storage SFTP feature](../../../storage/blobs/secure-file-transfer-protocol-support.md) is still in preview.
+> The [Azure Blob Storage SFTP feature](../../../storage/blobs/secure-file-transfer-protocol-support.md) is currently in preview.
:::image type="content" source="media/expose-sap-process-orchestration-on-azure/rise-5b.png" alt-text="Diagram that shows a file share scenario with SAP Process Orchestration on Azure in the RISE context.":::
-## Comparing gateway setups
+## Comparison of gateway setups
> [!NOTE]
-> Performance and cost metrics assume production grade tiers. For more information, see the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/) and Azure docs for [Azure Firewall](../../../firewall/firewall-performance.md), [Azure Application Gateway (incl. Web Application Firewall - WAF)](../../../application-gateway/high-traffic-support.md), and [Azure API Management](../../../api-management/api-management-capacity.md).
+> Performance and cost metrics assume production-grade tiers. For more information, see the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/). Also see the following articles: [Azure Firewall performance](../../../firewall/firewall-performance.md), [Application Gateway high-traffic support](../../../application-gateway/high-traffic-support.md), and [Capacity of an Azure API Management instance](../../../api-management/api-management-capacity.md).
-Depending on the integration protocols required you may need multiple components. Find more details about the benefits of the various combinations of chaining Azure Application Gateway with Azure Firewall [here](/azure/architecture/example-scenario/gateway/firewall-application-gateway#application-gateway-before-firewall).
+Depending on the integration protocols you're using, you might need multiple components. For more information about the benefits of the various combinations of chaining Azure Application Gateway with Azure Firewall, see [Azure Firewall and Application Gateway for virtual networks](/azure/architecture/example-scenario/gateway/firewall-application-gateway#application-gateway-before-firewall).
## Integration rule of thumb
-Which integration flavor described in this article fits your requirements best, needs to be evaluated on a case-by-case basis. Consider enabling the following capabilities:
+To determine which integration scenarios described in this article best fit your requirements, evaluate them on a case-by-case basis. Consider enabling the following capabilities:
-- [Request throttling](../../../api-management/api-management-sample-flexible-throttling.md) using API Management
+- [Request throttling](../../../api-management/api-management-sample-flexible-throttling.md) by using API Management
-- [Concurrent request limits](https://help.sap.com/docs/ABAP_PLATFORM/683d6a1797a34730a6e005d1e8de6f22/3a450194bf9c4797afb6e21b4b22ad2a.html) on the SAP WebDispatcher
+- [Concurrent request limits](https://help.sap.com/docs/ABAP_PLATFORM/683d6a1797a34730a6e005d1e8de6f22/3a450194bf9c4797afb6e21b4b22ad2a.html) on SAP Web Dispatcher
-- [Mutual TLS](../../../application-gateway/mutual-authentication-overview.md) to verify client and receiver
+- [Mutual TLS](../../../application-gateway/mutual-authentication-overview.md) to verify the client and the receiver
-- Web Application Firewall and [re-encrypt after TLS-termination](../../../application-gateway/ssl-overview.md)
+- WAF and [re-encryption after TLS termination](../../../application-gateway/ssl-overview.md)
-- A [Firewall](../../../firewall/features.md) for `non-http` integrations
+- [Azure Firewall](../../../firewall/features.md) for non-HTTP integrations
-- [High-availability](../../../virtual-machines/workloads/sap/sap-high-availability-architecture-scenarios.md) and [disaster recovery](/azure/cloud-adoption-framework/scenarios/sap/eslz-business-continuity-and-disaster-recovery) for the VM-based SAP integration workloads
+- [High availability](../../../virtual-machines/workloads/sap/sap-high-availability-architecture-scenarios.md) and [disaster recovery](/azure/cloud-adoption-framework/scenarios/sap/eslz-business-continuity-and-disaster-recovery) for VM-based SAP integration workloads
-- Modern [authentication mechanisms like OAuth2](../../../api-management/sap-api.md#production-considerations) where applicable
+- Modern [authentication mechanisms like OAuth2](../../../api-management/sap-api.md#production-considerations), where applicable
-- Utilize a managed key store like [Azure Key Vault](../../../key-vault/general/overview.md) for all involved credentials, certificates, and keys
+- A managed key store like [Azure Key Vault](../../../key-vault/general/overview.md) for all involved credentials, certificates, and keys (see the sketch after this list)
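The following sketch illustrates the last item in the preceding list. The key vault and secret names are placeholders.

```powershell
# Minimal sketch, assuming a key vault named 'kv-sap-integration' (placeholder). Store the
# partner SFTP password once and read it back at deployment or runtime instead of keeping it
# in interface configuration files.
$secretValue = Read-Host -Prompt 'SFTP partner password' -AsSecureString
Set-AzKeyVaultSecret -VaultName 'kv-sap-integration' -Name 'partner-sftp-password' -SecretValue $secretValue

# Retrieve the secret later, for example from a deployment script.
$password = Get-AzKeyVaultSecret -VaultName 'kv-sap-integration' -Name 'partner-sftp-password' -AsPlainText
```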
## Alternatives to SAP Process Orchestration with Azure Integration Services
-The integration scenarios covered by SAP Process Orchestration can be natively addressed with the [Azure Integration Service portfolio](https://azure.microsoft.com/product-categories/integration/). Have a look at the [Azure Logic Apps connectors](../../../logic-apps/logic-apps-using-sap-connector.md) for your desired SAP interfaces to get started. The connector guide contains more details for [AS2](../../../logic-apps/logic-apps-enterprise-integration-as2.md), [EDIFACT](../../../logic-apps/logic-apps-enterprise-integration-edifact.md) etc. too. See [this blog series](https://blogs.sap.com/2022/08/30/port-your-legacy-sap-middleware-flows-to-cloud-native-paas-solutions/) for insights on how to design SAP iFlow patterns with cloud-native means.
+With the [Azure Integration Services portfolio](https://azure.microsoft.com/product-categories/integration/), you can natively address the integration scenarios that SAP Process Orchestration covers. For insights on how to design SAP IFlow patterns through cloud-native means, see [this blog series](https://blogs.sap.com/2022/08/30/port-your-legacy-sap-middleware-flows-to-cloud-native-paas-solutions/). The connector guide contains more details about [AS2](../../../logic-apps/logic-apps-enterprise-integration-as2.md) and [EDIFACT](../../../logic-apps/logic-apps-enterprise-integration-edifact.md).
+
+For more information, view the [Azure Logic Apps connectors](../../../logic-apps/logic-apps-using-sap-connector.md) for your desired SAP interfaces.
## Next steps
The integration scenarios covered by SAP Process Orchestration can be natively a
[Integrate API Management in an internal virtual network with Application Gateway](../../../api-management/api-management-howto-integrate-internal-vnet-appgateway.md)
-[Deploy the Application Gateway WAF triage workbook to better understand SAP related WAF alerts](https://github.com/Azure/Azure-Network-Security/tree/master/Azure%20WAF/Workbook%20-%20AppGw%20WAF%20Triage%20Workbook)
+[Deploy the Application Gateway WAF triage workbook to better understand SAP-related WAF alerts](https://github.com/Azure/Azure-Network-Security/tree/master/Azure%20WAF/Workbook%20-%20AppGw%20WAF%20Triage%20Workbook)
-[Understand Azure Application Gateway and Web Application Firewall for SAP](https://blogs.sap.com/2020/12/03/sap-on-azure-application-gateway-web-application-firewall-waf-v2-setup-for-internet-facing-sap-fiori-apps/)
+[Understand the Application Gateway WAF for SAP](https://blogs.sap.com/2020/12/03/sap-on-azure-application-gateway-web-application-firewall-waf-v2-setup-for-internet-facing-sap-fiori-apps/)
-[Understand implication of combining Azure Firewall and Azure Application Gateway](/azure/architecture/example-scenario/gateway/firewall-application-gateway#application-gateway-before-firewall)
+[Understand implications of combining Azure Firewall and Azure Application Gateway](/azure/architecture/example-scenario/gateway/firewall-application-gateway#application-gateway-before-firewall)
-[Work with SAP OData APIs in Azure API Management](../../../api-management/sap-api.md)
+[Work with SAP OData APIs in Azure API Management](../../../api-management/sap-api.md)
virtual-machines Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/get-started.md
ms.assetid: ad8e5c75-0cf6-4564-ae62-ea1246b4e5f2
vm-linux Previously updated : 09/27/2022 Last updated : 09/29/2022
Changes to documents in the SAP on Azure workload section are listed at the [end
If you have specific questions, we are going to point you to specific documents or flows in this section of the start page. You want to know: -- What Azure VMs and HANA Large Instance units are supported for which SAP software releases and which operating system versions. Read the document [What SAP software is supported for Azure deployment](./sap-supported-product-on-azure.md) for answers and the process to find the information-- What SAP deployment scenarios are supported with Azure VMs and HANA Large Instances. Information about the supported scenarios can be found in the documents:
- - [SAP workload on Azure virtual machine supported scenarios](./sap-planning-supported-configurations.md)
- - [Supported scenarios for HANA Large Instance](./hana-supported-scenario.md)
+- Is Azure accepting new customers for HANA Large Instances? HANA Large Instance service is in sunset mode and doesn't accept new customers anymore. Providing units for existing HANA Large Instance customers is still possible. For alternatives, check the offers of HANA certified Azure VMs in the [HANA Hardware Directory](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=iaas;ve:24).
+- Can Azure Active Directory accounts be used to run the SAP ABAP stack in the Windows guest OS? No. Due to shortcomings in the Azure Active Directory feature set, Azure AD accounts can't be used for running the ABAP stack within the Windows guest OS.
- What Azure Services, Azure VM types and Azure storage services are available in the different Azure regions, check the site [Products available by region](https://azure.microsoft.com/global-infrastructure/services/) - Are third-party HA frameworks, besides Windows and Pacemaker supported? Check bottom part of [SAP support note #1928533](https://launchpad.support.sap.com/#/notes/1928533) - What Azure storage is best for my scenario? Read [Azure Storage types for SAP workload](./planning-guide-storage.md)
In the SAP workload documentation space, you can find the following areas:
## Change Log
+- September 29, 2022: Announcing that HANA Large Instances is in sunset mode in [SAP workload on Azure virtual machine supported scenarios](./sap-planning-supported-configurations.md) and [What is SAP HANA on Azure (Large Instances)?](./hana-overview-architecture.md). Adding statements about the Azure VMware and Azure Active Directory support status in [SAP workload on Azure virtual machine supported scenarios](./sap-planning-supported-configurations.md)
- September 27, 2022: Minor changes in [HA for SAP ASCS/ERS with NFS simple mount](./high-availability-guide-suse-nfs-simple-mount.md) on SLES 15 for SAP Applications to adjust mount instructions - September 14, 2022 Release of updated SAP on Oracle guide with new and updated content [Azure Virtual Machines Oracle DBMS deployment for SAP workload](./dbms_guide_oracle.md) - September 8, 2022: Change in [SAP HANA scale-out HSR with Pacemaker on Azure VMs on SLES](./sap-hana-high-availability-scale-out-hsr-suse.md) to add instructions for deploying /hana/shared (only) on NFS on Azure Files
virtual-machines Hana Overview Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-overview-architecture.md
vm-linux Previously updated : 01/04/2021 Last updated : 09/28/2022 # What is SAP HANA on Azure (Large Instances)?
+> [!NOTE]
+> HANA Large Instance service is in sunset mode and does not accept new customers anymore. Providing units for existing HANA Large Instance customers is still possible. For alternatives, please check the offers of HANA certified Azure VMs in the [HANA Hardware Directory](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=iaas;ve:24).
+ SAP HANA on Azure (Large Instances) is a unique solution to Azure. In addition to providing virtual machines for deploying and running SAP HANA, Azure offers you the possibility to run and deploy SAP HANA on bare-metal servers that are dedicated to you. The SAP HANA on Azure (Large Instances) solution builds on non-shared host/server bare-metal hardware that is assigned to you. The server hardware is embedded in larger stamps that contain compute/server, networking, and storage infrastructure. SAP HANA on Azure (Large Instances) offers different server SKUs or sizes. Units can have 36 Intel CPU cores and 768 GB of memory and go up to units that have up to 480 Intel CPU cores and up to 24 TB of memory. The customer isolation within the infrastructure stamp is performed in tenants, which looks like:
virtual-machines Sap Planning Supported Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-planning-supported-configurations.md
ms.assetid: d7c59cc1-b2d0-4d90-9126-628f9c7a5538
Previously updated : 02/11/2022 Last updated : 09/28/2022 # SAP workload on Azure virtual machine supported scenarios
-Designing SAP NetWeaver, Business one, `Hybris` or S/4HANA systems architecture in Azure opens many different opportunities for various architectures and tools to use to get to a scalable, efficient, and highly available deployment. Though dependent on the operating system or DBMS used, there are restrictions. Also, not all scenarios that are supported on-premises are supported in the same way in Azure. This document will lead through the supported non-high-availability configurations and high-availability configurations and architectures using Azure VMs exclusively. For scenarios supported with [HANA Large Instances](./hana-overview-architecture.md), check the article [Supported scenarios for HANA Large Instances](./hana-supported-scenario.md).
+Designing SAP NetWeaver, Business One, `Hybris`, or S/4HANA systems architecture in Azure opens many different opportunities for various architectures and tools to use to get to a scalable, efficient, and highly available deployment. Though dependent on the operating system or DBMS used, there are restrictions. Also, not all scenarios that are supported on-premises are supported in the same way in Azure. This document leads you through the supported non-high-availability configurations and high-availability configurations and architectures that use Azure VMs exclusively.
+
+> [!NOTE]
+> HANA Large Instance service is in sunset mode and doesn't accept new customers anymore. Providing units for existing HANA Large Instance customers is still possible. For alternatives, check the offers of HANA certified Azure VMs in the [HANA Hardware Directory](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=iaas;ve:24). For scenarios that were and still are supported for existing HANA Large Instance customers with [HANA Large Instances](./hana-overview-architecture.md), check the article [Supported scenarios for HANA Large Instances](./hana-supported-scenario.md).
+
+## General platform restrictions
+Azure has various platforms, besides so-called native Azure VMs, that are offered as first-party services. [HANA Large Instances](./hana-overview-architecture.md), which is in sunset mode, is one of those platforms. [Azure VMware Solution](https://azure.microsoft.com/products/azure-VMware/) is another of these first-party services. At this point in time, Azure VMware Solution in general isn't supported by SAP for hosting SAP workload. For more details about VMware support on different platforms, refer to [SAP support note #2138865 - SAP Applications on VMware Cloud: Supported Products and VM configurations](https://launchpad.support.sap.com/#/notes/2138865).
+
+Besides the on-premises Active Directory, Azure offers a managed Active Directory SaaS service with [Azure Active Directory Domain Services](https://learn.microsoft.com/azure/active-directory-domain-services/overview) and [Azure Active Directory](https://learn.microsoft.com/azure/active-directory/fundamentals/active-directory-whatis). SAP components hosted on the Windows OS that are supposed to use Active Directory rely solely on traditional Active Directory, either hosted on-premises by you or provided through Azure Active Directory Domain Services. These SAP components can't function with the native Azure Active Directory, because there are still larger gaps in functionality between Active Directory in its on-premises form or its SaaS form (Azure Active Directory Domain Services) and the native Azure Active Directory. For this reason, Azure Active Directory accounts aren't supported for running SAP components, like the ABAP stack or Java stack, on the Windows OS. Traditional Active Directory accounts need to be used in such scenarios.
## 2-Tier configuration
-An SAP 2-Tier configuration is considered to be built up out of a combined layer of the SAP DBMS and application layer that run on the same server or VM unit. The second tier is considered to be the user interface layer. In the case of a 2-Tier configuration, the DBMS, and SAP application layer share the resources of the Azure VM. As a result, you need to configure the different components in a way that these components don't compete for resources. You also need to be careful to not oversubscribe the resources of the VM. Such a configuration does not provide any high availability, beyond the [Azure Service Level agreements](https://azure.microsoft.com/support/legal/sla/) of the different Azure components involved.
+An SAP 2-Tier configuration is considered to be built up out of a combined layer of the SAP DBMS and application layer that run on the same server or VM unit. The second tier is considered to be the user interface layer. In the case of a 2-Tier configuration, the DBMS, and SAP application layer share the resources of the Azure VM. As a result, you need to configure the different components in a way that these components don't compete for resources. You also need to be careful to not oversubscribe the resources of the VM. Such a configuration doesn't provide any high availability, beyond the [Azure Service Level agreements](https://azure.microsoft.com/support/legal/sla/) of the different Azure components involved.
A graphical representation of such a configuration can look like: ![Simple 2-Tier configuration](./media/sap-planning-supported-configurations/two-tier-simple-configuration.png)
-Such configurations are supported with Windows, Red Hat, SUSE, and Oracle Linux for the DBMS systems of SQL Server, Oracle, Db2, maxDB, and SAP ASE for production and non-production cases. For SAP HANA as DBMS, such type of configurations is supported for non-production cases only. This includes the deployment case of [Azure HANA Large Instances](./hana-overview-architecture.md) as well.
-For all OS/DBMS combinations supported on Azure, this type of configuration is supported. However, it is mandatory that you set the configuration of the DBMS and the SAP components in a way that DBMS and SAP components don't compete for memory and CPU resources and thereby exceed the physical available resources. This needs to be done by restricting the memory the DBMS is allowed to allocate. You also need to limit the SAP Extended Memory on application instances. You also need to monitor CPU consumption of the VM overall to make sure that the components are not maximizing the CPU resources.
+Such configurations are supported with Windows, Red Hat, SUSE, and Oracle Linux for the DBMS systems of SQL Server, Oracle, Db2, maxDB, and SAP ASE for production and non-production cases. For SAP HANA as the DBMS, this type of configuration is supported for non-production cases only. This restriction includes the deployment case of [Azure HANA Large Instances](./hana-overview-architecture.md) as well.
+For all OS/DBMS combinations supported on Azure, this type of configuration is supported. However, it's mandatory that you set the configuration of the DBMS and the SAP components in a way that DBMS and SAP components don't compete for memory and CPU resources and thereby exceed the physically available resources. You need to restrict the memory that the DBMS is allowed to allocate. You also need to limit the SAP Extended Memory on application instances, and monitor the overall CPU consumption of the VM to make sure that the components aren't maximizing the CPU resources.
> [!NOTE] > For production SAP systems, we recommend additional high availability and eventual disaster recovery configurations as described later in this document ## 3-Tier configuration
-In such configurations, you separate the SAP application layer and the DBMS layer into different VMs. You usually do that for larger systems and out of reasons of being more flexible on the resources of the SAP application layer. In the most simple setup, there is no high availability beyond the [Azure Service Level agreements](https://azure.microsoft.com/support/legal/sla/) of the different Azure components involved.
+In such configurations, you separate the SAP application layer and the DBMS layer into different VMs. You usually do that for larger systems, and because you want more flexibility on the resources of the SAP application layer. In the simplest setup, there's no high availability beyond the [Azure Service Level agreements](https://azure.microsoft.com/support/legal/sla/) of the different Azure components involved.
The graphical representation looks like: ![Diagram that shows a simple 3-Tier configuration.](./media/sap-planning-supported-configurations/three-tier-simple-configuration.png)
-This type of configuration is supported on Windows, Red Hat, SUSE, and Oracle Linux for the DBMS systems of SQL Server, Oracle, Db2, SAP HANA, maxDB, and SAP ASE for production and non-production cases. This is the default deployment configuration for [Azure HANA Large Instances](./hana-overview-architecture.md). For simplification, we did not distinguish between SAP Central Services and SAP dialog instances in the SAP application layer. In this simple 3-Tier configuration, there would be no high availability protection for SAP Central Services.
+This type of configuration is supported on Windows, Red Hat, SUSE, and Oracle Linux for the DBMS systems of SQL Server, Oracle, Db2, SAP HANA, maxDB, and SAP ASE for production and non-production cases. For simplification, we didn't distinguish between SAP Central Services and SAP dialog instances in the SAP application layer. In this simple 3-Tier configuration, there would be no high availability protection for SAP Central Services.
> [!NOTE] > For production SAP systems, we recommend additional high availability and eventual disaster recovery configurations as described later in this document
This type of DBMS deployment is supported for:
- For SAP HANA, multiple instances on one VM, SAP calls this deployment method MCOS, is supported. For details see the SAP article [Multiple SAP HANA Systems on One Host (MCOS)](https://help.sap.com/viewer/eb3777d5495d46c5b2fa773206bbfb46/2.0.02/ - /b2751fd43bec41a9a14e01913f1edf18.html)
-Running multiple database instances on one host, you need to make sure that the different instances are not competing for resources and thereby exceed the physical resource limits of the VM. This is especially true for memory where you need to cap the memory anyone of the instances sharing the VM can allocate. That also might be true for the CPU resources the different database instances can consume. All the DBMS mentioned have configurations that allow limiting memory allocation and CPU resources on an instance level.
-In order to have support for such a configuration for Azure VMs, it is expected that the disks or volumes that are used for the data and log/redo log files of the databases managed by the different instances are separate. Or in other words data or log/redo log files of databases managed by different DBMS instance are not supposed to share the same disks or volumes.
+When you run multiple database instances on one host, you need to make sure that the different instances aren't competing for resources and thereby exceeding the physical resource limits of the VM. This is especially true for memory, where you need to cap the memory that any one of the instances sharing the VM can allocate. That also might be true for the CPU resources the different database instances can consume. All the DBMS mentioned have configurations that allow limiting memory allocation and CPU resources on an instance level.
+In order to have support for such a configuration for Azure VMs, it's expected that the disks or volumes that are used for the data and log/redo log files of the databases managed by the different instances are separate. In other words, data or log/redo log files of databases managed by different DBMS instances aren't supposed to share the same disks or volumes.
The disk configuration for HANA Large Instances is delivered configured and is detailed in [Supported scenarios for HANA Large Instances](./hana-supported-scenario.md#single-node-mcos). > [!NOTE]
-> For production SAP systems, we recommend additional high availability and eventual disaster recovery configurations as described later in this document. VMs with multiple DBMS instances are not supported with the high availability configurations described later in this document.
+> For production SAP systems, we recommend additional high availability and eventual disaster recovery configurations as described later in this document. VMs with multiple DBMS instances aren't supported with the high availability configurations described later in this document.
## Multiple SAP Dialog instances in one VM
-In many cases, multiple dialog instances got deployed on bare metal servers or even in VMs running in private clouds. Reason for such configurations was to tailor certain SAP dialog instances to certain workload, business functionality, or workload types. Reason for not isolating those instances into separate VMs was the effort of operating system maintenance and operations. Or in numerous cases the costs in case the hoster or operator of the VM is asking for a monthly fee per VM operated and administrated. In Azure, a scenario of hosting multiple SAP dialog instances within a single VM us supported for production and non-production purposes on the operating systems of Windows, Red Hat, SUSE, and Oracle Linux. The SAP kernel parameter PHYS_MEMSIZE, available on Windows and modern Linux kernels, should be set if multiple SAP Application Server instances are running on a single VM. It is also advised to limit the expansion of SAP Extended Memory on operating systems, like Windows where automatic growth of the SAP extended Memory is implemented. This can be done with the SAP profile parameter `em/max_size_MB`.
+In many cases, multiple dialog instances got deployed on bare-metal servers or even in VMs running in private clouds. The reason for such configurations was to tailor certain SAP dialog instances to certain workloads, business functionality, or workload types. The reason for not isolating those instances into separate VMs was the effort of operating system maintenance and operations. Or, in numerous cases, the costs if the hoster or operator of the VM asks for a monthly fee per VM operated and administrated. In Azure, a scenario of hosting multiple SAP dialog instances within a single VM is supported for production and non-production purposes on the operating systems of Windows, Red Hat, SUSE, and Oracle Linux. The SAP kernel parameter PHYS_MEMSIZE, available on Windows and modern Linux kernels, should be set if multiple SAP Application Server instances are running on a single VM. It's also advised to limit the expansion of SAP Extended Memory on operating systems, like Windows, where automatic growth of the SAP Extended Memory is implemented. This can be done with the SAP profile parameter `em/max_size_MB`.
A 3-Tier configuration where multiple SAP dialog instances run within Azure VMs can look like: ![Diagram that shows a 3-Tier configuration where multiple SAP dialog instances are run within Azure VMs.](./media/sap-planning-supported-configurations/multiple-dialog-instances.png)
-For simplification, we did not distinguish between SAP Central Services and SAP dialog instances in the SAP application layer. In this simple 3-Tier configuration, there would be no high availability protection for SAP Central Services. For production systems, it is not recommended to leave SAP Central Services unprotected. For specifics on so called multi-SID configurations around SAP Central Instances and high-availability of such multi-SID configurations, see later sections of this document.
+For simplification, we didn't distinguish between SAP Central Services and SAP dialog instances in the SAP application layer. In this simple 3-Tier configuration, there would be no high availability protection for SAP Central Services. For production systems, it's not recommended to leave SAP Central Services unprotected. For specifics on so-called multi-SID configurations around SAP Central Instances and high availability of such multi-SID configurations, see later sections of this document.
## High Availability protection for the SAP DBMS layer
-As you look to deploy SAP production systems, you need to consider hot standby type of high availability configurations. Especially with SAP HANA, where data needs to be loaded into memory before being able to get the full performance and scalability back, Azure service healing is not an ideal measure for high availability.
+As you look to deploy SAP production systems, you need to consider hot standby type of high availability configurations. Especially with SAP HANA, where data needs to be loaded into memory before being able to get the full performance and scalability back, Azure service healing isn't an ideal measure for high availability.
-In general, Microsoft supports only high availability configurations and software packages that are described in the [SAP workload scenarios](./get-started.md). You can read the same statement in SAP note [#1928533](https://launchpad.support.sap.com/#/notes/1928533). Microsoft will not provide support for other high availability third-party software frameworks that are not documented by Microsoft with SAP workload. In such cases, the third-party supplier of the high availability framework is the supporting party for the high availability configuration who needs to be engaged by you as a customer into the support process. Exceptions are going to be mentioned in this article.
+In general, Microsoft supports only high availability configurations and software packages that are described in the [SAP workload scenarios](./get-started.md). You can read the same statement in SAP note [#1928533](https://launchpad.support.sap.com/#/notes/1928533). Microsoft will not provide support for other high availability third-party software frameworks that aren't documented by Microsoft with SAP workload. In such cases, the third-party supplier of the high availability framework is the supporting party for the high availability configuration who needs to be engaged by you as a customer into the support process. Exceptions are going to be mentioned in this article.
-In general Microsoft supports a limited set of high availability configurations on Azure VMs or HANA Large Instances units. For the supported scenarios of HANA Large Instances, read the document [Supported scenarios for HANA Large Instances](./hana-supported-scenario.md).
+In general, Microsoft supports a limited set of high availability configurations on Azure VMs or HANA Large Instances units. For the supported scenarios of HANA Large Instances, read the document [Supported scenarios for HANA Large Instances](./hana-supported-scenario.md).
For Azure VMs, the following high availability configurations are supported on DBMS level:
For Azure VMs, the following high availability configurations are supported on D
- [Deploy a SAP HANA scale-out system with standby node on Azure VMs by using Azure NetApp Files on Red Hat Enterprise Linux](./sap-hana-scale-out-standby-netapp-files-rhel.md) - SQL Server Failover cluster based on Windows Scale-Out File Services. Though recommendation for production systems is to use SQL Server Always On instead of clustering. SQL Server Always On provides better availability using separate storage. Details are described in this article: - [Configure a SQL Server failover cluster instance on Azure virtual machines](/azure/azure-sql/virtual-machines/windows/failover-cluster-instance-storage-spaces-direct-manually-configure)-- SQL Server Always On is supported with the Windows operating system for SQL Server on Azure. This is the default recommendation for production SQL Server instances on Azure. Details are described in these articles:
+- SQL Server Always On is supported with the Windows operating system for SQL Server on Azure. This configuration is the default recommendation for production SQL Server instances on Azure. Details are described in these articles:
- [Introducing SQL Server Always On availability groups on Azure virtual machines](/azure/azure-sql/virtual-machines/windows/availability-group-overview). - [Configure an Always On availability group on Azure virtual machines in different regions](/azure/azure-sql/virtual-machines/windows/availability-group-manually-configure-multiple-regions). - [Configure a load balancer for an Always On availability group in Azure](/azure/azure-sql/virtual-machines/windows/availability-group-load-balancer-portal-configure).
For Azure VMs, the following high availability configurations are supported on D
> [!IMPORTANT] > For none of the scenarios described above do we support configurations of multiple DBMS instances in one VM. This means that in each case, only one database instance can be deployed per VM and protected with the described high availability methods. Protecting multiple DBMS instances under the same Windows or Pacemaker failover cluster is **NOT** supported at this point in time. Also, Oracle Data Guard is supported for single-instance-per-VM deployment cases only.
-Various database systems allow hosting multiple databases under one DBMS instance. As in the case of SAP HANA, multiple databases can be hosted in multiple database containers (MDC). For cases where these multi-database configurations are working within one failover cluster resource, these configurations are supported. Configurations that are not supported are cases where multiple cluster resources would be required. As for configurations where you would define multiple SQL Server Availability Groups, under one SQL Server instance.
+Various database systems allow hosting multiple databases under one DBMS instance. In the case of SAP HANA, multiple databases can be hosted in multiple database containers (MDC). For cases where these multi-database configurations work within one failover cluster resource, these configurations are supported. Configurations that aren't supported are cases where multiple cluster resources would be required, such as configurations where you would define multiple SQL Server availability groups under one SQL Server instance.
![DBMS HA configuration](./media/sap-planning-supported-configurations/database-high-availability-configuration.png) Dependent on the DBMS an/or operating systems, components like Azure load balancer might or might not be required as part of the solution architecture.
-Specifically for maxDB, the storage configuration needs to be different. In maxDB the data and log files needs to be located on shared storage for high availability configurations. Only in the case of maxDB, shared storage is supported for high availability. For all other DBMS, separate storage stacks per node are the only supported disk configurations.
+Specifically for maxDB, the storage configuration needs to be different. With maxDB, the data and log files need to be located on shared storage for high availability configurations. Only for maxDB is shared storage supported for high availability. For all other DBMSs, separate storage stacks per node are the only supported disk configurations.
-Other high availability frameworks are known to exist and are known to run on Microsoft Azure as well. However, Microsoft did not test those frameworks. If you want to build your high availability configuration with those frameworks, you will need to work with the provider of that software to:
+Other high availability frameworks are known to exist and are known to run on Microsoft Azure as well. However, Microsoft didn't test those frameworks. If you want to build your high availability configuration with those frameworks, you will need to work with the provider of that software to:
- Develop a deployment architecture - Deployment of the architecture - Support of the architecture > [!IMPORTANT]
-> Microsoft Azure Marketplace offers a variety of soft appliances that provide storage solutions on top of Azure native storage. These soft appliances can be used to create NFS shares as well that theoretically could be used in the SAP HANA scale-out deployments where a standby node is required. Due to various reasons, none of these storage soft appliances is supported for any of the DBMS deployments by Microsoft and SAP on Azure. Deployments of DBMS on SMB shares is not supported at all at this point in time. Deployments of DBMS on NFS shares is limited to NFS 4.1 shares on [Azure NetApp Files](https://azure.microsoft.com/services/netapp/).
+> Microsoft Azure Marketplace offers a variety of soft appliances that provide storage solutions on top of Azure native storage. These soft appliances can be used to create NFS shares that theoretically could be used in SAP HANA scale-out deployments where a standby node is required. Due to various reasons, none of these storage soft appliances is supported for any of the DBMS deployments by Microsoft and SAP on Azure. Deployments of DBMS on SMB shares aren't supported at all at this point in time. Deployments of DBMS on NFS shares are limited to NFS 4.1 shares on [Azure NetApp Files](https://azure.microsoft.com/services/netapp/).
## High Availability for SAP Central Services
SAP Central Services is a second single point of failure of your SAP configurati
- Pacemaker on Red Hat operating system with NFS share hosted on [Azure NetApp Files](https://azure.microsoft.com/services/netapp/). Details are described in the article - [Azure Virtual Machines high availability for SAP NetWeaver on Red Hat Enterprise Linux with Azure NetApp Files for SAP applications](./high-availability-guide-rhel-netapp-files.md)
-Of the listed solutions, you need a support relationship with SIOS to support the `Datakeeper` product and to engage with SIOS directly in case of issues. Dependent on the way you licensed the Windows, Red Hat, and/or SUSE OS, you could also be required to have a support contract with your OS provider to have full support of the listed high availability configurations.
+Of the listed solutions, you need a support relationship with SIOS to support the `Datakeeper` product and to engage with SIOS directly if problems are encountered. Depending on the way you licensed the Windows, Red Hat, and/or SUSE OS, you could also be required to have a support contract with your OS provider to have full support of the listed high availability configurations.
The configuration can also be displayed as: ![DBMS and ASCS HA configuration](./media/sap-planning-supported-configurations/high-available-3-tier-configuration.png)
-On the right hand side of the graphics, the highly available SAP Central Services is shown. Besides having the SAP Central services protected with a failover cluster framework that can fail over in case of an issue, there is a necessity for a highly available NFS or SMB share, or a Windows shared disk to make sure the sapmnt and global transport directory are available independent of the existence of a single VM. Additional some of the solutions, like Windows Failover Cluster Server and Pacemaker are going to require an Azure load balancer to direct or re-direct traffic to a healthy node.
+On the right-hand side of the graphic, the highly available SAP Central Services is shown. Besides having the SAP Central Services protected with a failover cluster framework that can fail over in case of an issue, there's a necessity for a highly available NFS or SMB share, or a Windows shared disk, to make sure the sapmnt and global transport directory are available independent of the existence of a single VM. Additionally, some of the solutions, like Windows Failover Cluster Server and Pacemaker, are going to require an Azure load balancer to direct or redirect traffic to a healthy node.
-In the list shown, there is no mentioning of the Oracle Linux operating system. Oracle Linux does not support Pacemaker as a cluster framework. If you want to deploy your SAP system on Oracle Linux and you need a high availability framework for Oracle Linux, you need to work with third-party suppliers. One of the suppliers is SIOS with their Protection Suite for Linux that is supported by SAP on Azure. For more information read SAP note [#1662610 - Support details for SIOS Protection Suite for Linux](https://launchpad.support.sap.com/#/notes/1662610) for more details.
+In the list shown, there's no mention of the Oracle Linux operating system. Oracle Linux doesn't support Pacemaker as a cluster framework. If you want to deploy your SAP system on Oracle Linux and you need a high availability framework for Oracle Linux, you need to work with third-party suppliers. One of the suppliers is SIOS with their Protection Suite for Linux that is supported by SAP on Azure. For more information, see SAP note [#1662610 - Support details for SIOS Protection Suite for Linux](https://launchpad.support.sap.com/#/notes/1662610).
Since only a subset of Azure storage types provides highly available NFS or SMB shares that qualify for the usage in our SAP Central Services cluster scenarios, here's a list of supported storage types:

- Windows Failover Cluster Server with Windows Scale-out File Server can be deployed on all native Azure storage types, except Azure NetApp Files. However, the recommendation is to use Premium Storage due to superior service level agreements in throughput and IOPS.
-- Windows Failover Cluster Server with SMB on Azure NetApp Files is supported on Azure NetApp Files. SMB shares on Azure File services are **NOT** supported at this point in time.
+- Windows Failover Cluster Server with SMB on Azure NetApp Files is supported on Azure NetApp Files. SMB shares hosted on Azure Premium File services are supported for this scenario as well. Azure Standard Files isn't supported.
- Windows Failover Cluster Server with Windows shared disk based on SIOS `Datakeeper` can be deployed on all native Azure storage types, except Azure NetApp Files. However, the recommendation is to use Premium Storage due to superior service level agreements in throughput and IOPS.
-- SUSE or Red Hat Pacemaker using NFS shares on Azure NetApp Files is supported on Azure NetApp Files.
-- SUSE Pacemaker using a `drdb` configuration between two VMs is supported using native Azure storage types, except Azure NetApp Files. However, recommendation is to use Premium Storage due to superior service level agreements in throughput and IOPS.
-- Red Hat Pacemaker using `glusterfs` for providing NFS share is supported using native Azure storage types, except Azure NetApp Files. However, recommendation is to use Premium Storage due to superior service level agreements in throughput and IOPS.
+- SUSE or Red Hat Pacemaker using NFS shares on Azure NetApp Files is supported.
+- SUSE or Red Hat Pacemaker using NFS shares on Azure Premium Files with LRS or ZRS is supported; a provisioning sketch follows this list. Azure Standard Files isn't supported.
+- SUSE Pacemaker using a `drdb` configuration between two VMs is supported using native Azure storage types, except Azure NetApp Files. However, we recommend using one of the first-party services, Azure Premium Files or Azure NetApp Files, instead.
+- Red Hat Pacemaker using `glusterfs` to provide the NFS share is supported using native Azure storage types, except Azure NetApp Files. However, we recommend using one of the first-party services, Azure Premium Files or Azure NetApp Files, instead.
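For the NFS on Azure Premium Files option mentioned in this list, the share can be provisioned before the cluster build. This is a minimal sketch under stated assumptions: the storage account name, share name, quota, and redundancy choice are hypothetical, NFS shares require a premium FileStorage account with secure transfer (HTTPS-only) disabled, and network access should be restricted to the virtual network.

```powershell
# Illustrative only: premium FileStorage account plus an NFS 4.1 share that could host /sapmnt.
# Account name, share name, quota, and redundancy (ZRS) are hypothetical placeholders.
New-AzStorageAccount -ResourceGroupName "rg-sap-prod" -Name "sapnfsprod001" -Location "westeurope" `
  -SkuName Premium_ZRS -Kind FileStorage -EnableHttpsTrafficOnly $false

New-AzRmStorageShare -ResourceGroupName "rg-sap-prod" -StorageAccountName "sapnfsprod001" `
  -Name "sapmnt" -EnabledProtocol NFS -RootSquash NoRootSquash -QuotaGiB 256
```

The share would then be mounted on both cluster nodes before the SAP Central Services instances are configured; ZRS availability for premium file shares varies by region.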
> [!IMPORTANT]
-> Microsoft Azure Marketplace offers a variety of soft appliances that provide storage solutions on top of Azure native storage. These soft appliances can be used to create NFS or SMB shares as well that theoretically could be used in the failover clustered SAP Central Services as well. These solutions are not directly supported for SAP workload by Microsoft. If you decide to use such a solution to create your NFS or SMB share, support for the SAP Central Service configuration needs to be provided by the third-party owning the software in the storage soft appliance.
+> Microsoft Azure Marketplace offers a variety of soft appliances that provide storage solutions on top of Azure native storage. These storage soft appliances can be used to create NFS or SMB shares that theoretically could be used in the failover clustered SAP Central Services as well. These solutions aren't directly supported for SAP workload by Microsoft. If you decide to use such a solution to create your NFS or SMB share, support for the SAP Central Services configuration needs to be provided by the third party owning the software in the storage soft appliance.
## Multi-SID SAP Central Services failover clusters
-To reduce the number of VMs that are needed in large SAP landscapes, SAP allows running SAP Central Services instances of multiple different SAP systems in failover cluster configuration. Imagine cases where you have 30 or more NetWeaver or S/4HANA production systems. Without multi-SID clustering, these configurations would require 60 or more VMs in 30 or more Windows or Pacemaker failover cluster configurations. Besides the DBMS failover clusters necessary. Deploying multiple SAP central services across two nodes in a failover cluster configuration can reduce the number of VMs significantly. However, deploying multiple SAP Central services instances on a single two node cluster configuration also has some disadvantages. Issues around a single VM in the cluster configuration apply to multiple SAP systems. Maintenance on the guest OS running in the cluster configuration requires more coordination since multiple production SAP systems are affected. Tools like SAP LaMa are not supporting multi-SID clustering in their system cloning process.
+To reduce the number of VMs that are needed in large SAP landscapes, SAP allows running SAP Central Services instances of multiple different SAP systems in a failover cluster configuration. Imagine cases where you have 30 or more NetWeaver or S/4HANA production systems. Without multi-SID clustering, these configurations would require 60 or more VMs in 30 or more Windows or Pacemaker failover cluster configurations. Deploying multiple SAP Central Services instances across two nodes in a failover cluster configuration can reduce the number of VMs significantly. However, deploying multiple SAP Central Services instances on a single two-node cluster configuration also has some disadvantages. Issues around a single VM in the cluster configuration apply to multiple SAP systems. Maintenance on the guest OS running in the cluster configuration requires more coordination since multiple production SAP systems are affected. Tools like SAP LaMa don't support multi-SID clustering in their system cloning process.
-On Azure, a multi-SID cluster configuration is supported for the Windows operating system with ENSA1 and ENSA2. Recommendation is not to combine the older Enqueue Replication Service architecture (ENSA1) with the new architecture (ENSA2) on one multi-SID cluster. Details about such an architecture are documented in the articles
+On Azure, a multi-SID cluster configuration is supported for the Windows operating system with ENSA1 and ENSA2. The recommendation is not to combine the older Enqueue Replication Service architecture (ENSA1) with the newer architecture (ENSA2) on one multi-SID cluster. Details about such an architecture are documented in the articles:
- [SAP ASCS/SCS instance multi-SID high availability with Windows Server Failover Clustering and shared disk on Azure](./sap-ascs-ha-multi-sid-wsfc-shared-disk.md)
- [SAP ASCS/SCS instance multi-SID high availability with Windows Server Failover Clustering and file share on Azure](./sap-ascs-ha-multi-sid-wsfc-file-share.md)
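A recurring Azure-side pattern in those multi-SID configurations is that every clustered SID gets its own frontend IP address, health probe port, and load-balancing rule on the shared internal load balancer. The sketch below adds such a set for one additional, hypothetical SID to an existing Standard load balancer; the load balancer name, resource group, subnet ID, IP address, backend pool name, and probe port are all placeholders.

```powershell
# Illustrative only: add a frontend IP and probe for one more SID to an existing internal load balancer.
$lb = Get-AzLoadBalancer -ResourceGroupName "rg-sap-prod" -Name "lb-ascs-multisid"
Add-AzLoadBalancerFrontendIpConfig -LoadBalancer $lb -Name "fe-ascs-AB2" -PrivateIpAddress "10.10.1.22" `
  -SubnetId "/subscriptions/<subscription-id>/resourceGroups/rg-sap-prod/providers/Microsoft.Network/virtualNetworks/vnet-sap/subnets/app"
Add-AzLoadBalancerProbeConfig -LoadBalancer $lb -Name "probe-ascs-AB2" -Protocol Tcp -Port 62102 -IntervalInSeconds 5 -ProbeCount 2
Set-AzLoadBalancer -LoadBalancer $lb

# Re-read the updated load balancer and wire frontend, probe, and backend pool into an HA-ports rule.
$lb    = Get-AzLoadBalancer -ResourceGroupName "rg-sap-prod" -Name "lb-ascs-multisid"
$fe    = Get-AzLoadBalancerFrontendIpConfig -LoadBalancer $lb -Name "fe-ascs-AB2"
$be    = Get-AzLoadBalancerBackendAddressPoolConfig -LoadBalancer $lb -Name "be-ascs"
$probe = Get-AzLoadBalancerProbeConfig -LoadBalancer $lb -Name "probe-ascs-AB2"
Add-AzLoadBalancerRuleConfig -LoadBalancer $lb -Name "rule-ascs-AB2" -FrontendIpConfiguration $fe -BackendAddressPool $be `
  -Probe $probe -Protocol All -FrontendPort 0 -BackendPort 0 -EnableFloatingIP -IdleTimeoutInMinutes 30
Set-AzLoadBalancer -LoadBalancer $lb
```

The cluster resources for that SID then have to answer on the matching probe port so the load balancer can find the active node.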
For details of HANA Large Instances supported HANA scale-out configurations, the
## Disaster Recovery Scenario
-There is a variety of disaster recovery scenarios that are supported. We define Disaster architectures as architectures, which should compensate for a complete Azure region going off the grid. This means we need the disaster recovery target to be a different Azure region as target to run your SAP landscape. We separate methods and configurations in DBMS layer and non-DBMS layer.
+There's a variety of disaster recovery scenarios that are supported. We define disaster recovery architectures as architectures that should compensate for a complete Azure region going off the grid. This means the disaster recovery target needs to be a different Azure region in which to run your SAP landscape. We separate methods and configurations into the DBMS layer and the non-DBMS layer.
### DBMS layer
-For the DBMS layer, configurations using the DBMS native replication mechanisms, like Always On, Oracle Data Guard, Db2 HADR, SAP ASE Always-On, or HANA System Replication are supported. It is mandatory that the replication stream in such cases is asynchronous, instead of synchronous as in typical high availability scenarios that are deployed within a single Azure region. A typical example of such a supported DBMS disaster recovery configuration is described in the article [SAP HANA availability across Azure regions](./sap-hana-availability-across-regions.md#combine-availability-within-one-region-and-across-regions). The second graphic in that section describes a scenario with HANA as an example. The main databases supported for SAP applications are all able to be deployed in such a scenario.
+For the DBMS layer, configurations using the DBMS native replication mechanisms, like Always On, Oracle Data Guard, Db2 HADR, SAP ASE Always-On, or HANA System Replication, are supported. It's mandatory that the replication stream in such cases is asynchronous, instead of synchronous as in typical high availability scenarios that are deployed within a single Azure region. A typical example of such a supported DBMS disaster recovery configuration is described in the article [SAP HANA availability across Azure regions](./sap-hana-availability-across-regions.md#combine-availability-within-one-region-and-across-regions). The second graphic in that section describes a scenario with HANA as an example. The main databases supported for SAP applications are all able to be deployed in such a scenario.
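The asynchronous replication itself is set up inside the respective DBMS. On the Azure side, the production and disaster recovery regions need network connectivity that the replication stream can use, commonly global VNet peering or a VPN/ExpressRoute design. A minimal peering sketch, with hypothetical VNet and resource group names:

```powershell
# Illustrative only: global VNet peering between the production and DR virtual networks.
$vnetProd = Get-AzVirtualNetwork -ResourceGroupName "rg-sap-prod" -Name "vnet-sap-westeurope"
$vnetDr   = Get-AzVirtualNetwork -ResourceGroupName "rg-sap-dr"   -Name "vnet-sap-northeurope"

# Peering must be created in both directions
Add-AzVirtualNetworkPeering -Name "peer-prod-to-dr" -VirtualNetwork $vnetProd -RemoteVirtualNetworkId $vnetDr.Id
Add-AzVirtualNetworkPeering -Name "peer-dr-to-prod" -VirtualNetwork $vnetDr -RemoteVirtualNetworkId $vnetProd.Id
```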
-It is supported to use a smaller VM as target instance in the disaster recovery region since that VM does not experience the full workload traffic. Doing so, you need to keep the following considerations in mind:
+It's supported to use a smaller VM as target instance in the disaster recovery region since that VM doesn't experience the full workload traffic. If you do so, you need to keep the following considerations in mind (a sizing check is sketched after the list):
-- Smaller VM types do not allow that many disks attached than smaller VMs
+- Smaller VM types don't allow as many attached disks as larger VMs
- Smaller VMs have less network and storage throughput
- Re-sizing across VM families can be a problem when the different VMs are collected in one Azure Availability Set or when the re-sizing should happen between the M-Series family and Mv2 family of VMs
- The database instance needs enough CPU and memory resources to receive the stream of changes with minimal delay and to apply these changes to the data with minimal delay

More details on limitations of different VM sizes can be found on the [VM sizes](../../sizes.md) page.
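One way to validate the first two considerations is to compare the limits of a candidate smaller DR size with the production size, and to resize the DR VM up to the production size as part of the failover procedure. The sketch below is illustrative only; the VM sizes, names, resource group, and region are hypothetical.

```powershell
# Illustrative only: compare disk and core limits of a hypothetical production size and a smaller DR size.
Get-AzVMSize -Location "northeurope" |
  Where-Object { $_.Name -in @("Standard_M128s", "Standard_E32ds_v5") } |
  Select-Object Name, NumberOfCores, MemoryInMB, MaxDataDiskCount

# As part of a failover procedure, resize the DR VM to the production size (causes a VM restart).
$vm = Get-AzVM -ResourceGroupName "rg-sap-dr" -Name "vm-hana-dr"
$vm.HardwareProfile.VmSize = "Standard_M128s"
Update-AzVM -ResourceGroupName "rg-sap-dr" -VM $vm
```

Resizing depends on the target size being available in the region and, where applicable, in the same availability set or zone.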
-Another supported method of deploying a DR target is to have a second DBMS instance installed on a VM that runs a non-production DBMS instance of a non-production SAP instance. This can be a bit more challenging since you need to figure out what on memory, CPU resources, network bandwidth, and storage bandwidth is needed for the particular target instances that should function as main instance in the DR scenario. Especially in HANA it is highly recommended that you are configuring the instance that functions as DR target on a shared host so that the data is not pre-loaded into the DR target instance.
+Another supported method of deploying a DR target is to have a second DBMS instance installed on a VM that runs a non-production DBMS instance of a non-production SAP instance. This can be a bit more challenging since you need to figure out how much memory, CPU resources, network bandwidth, and storage bandwidth is needed for the particular target instance that should function as main instance in the DR scenario. Especially with HANA, it's highly recommended that you configure the instance that functions as DR target on a shared host so that the data isn't pre-loaded into the DR target instance.
For HANA Large Instance DR scenarios check these documents:
- [Scale-out with DR using HSR](./hana-supported-scenario.md#scale-out-with-dr-using-hsr)

> [!NOTE]
-> Usage of [Azure Site Recovery](https://azure.microsoft.com/services/site-recovery/) has not been tested for DBMS deployments under SAP workload. As a result it is not supported for the DBMS layer of SAP systems at this point in time. Other methods of replications by Microsoft and SAP that are not listed are not supported. Using third party software for replicating the DBMS layer of SAP systems between different Azure Regions, needs to be supported by the vendor of the software and will not be supported through Microsoft and SAP support channels.
+> Usage of [Azure Site Recovery](https://azure.microsoft.com/services/site-recovery/) has not been tested for DBMS deployments under SAP workload. As a result, it's not supported for the DBMS layer of SAP systems at this point in time. Other methods of replication by Microsoft and SAP that aren't listed aren't supported. Using third-party software for replicating the DBMS layer of SAP systems between different Azure regions needs to be supported by the vendor of the software and will not be supported through Microsoft and SAP support channels.
## Non-DBMS layer
For the SAP application layer and eventual shares or storage locations that are needed, the two major scenarios leveraged by customers are:
-- The disaster recovery targets in the second Azure region are not being used for any production or non-production purposes. In this scenario, the VMs that function as disaster recovery target are ideally not deployed and the image and changes to the images of the production SAP application layer is replicated to the disaster recovery region. A functionality that can perform such a task is [Azure Site Recovery](../../../site-recovery/azure-to-azure-move-overview.md). Azure Site Recovery support an Azure-to-Azure replication scenario like this.
+- The disaster recovery targets in the second Azure region aren't being used for any production or non-production purposes. In this scenario, the VMs that function as disaster recovery target are ideally not deployed, and the image and changes to the images of the production SAP application layer are replicated to the disaster recovery region. A functionality that can perform such a task is [Azure Site Recovery](../../../site-recovery/azure-to-azure-move-overview.md). Azure Site Recovery supports an Azure-to-Azure replication scenario like this.
- The disaster recovery targets are VMs that are actually in use by non-production systems. The whole SAP landscape is spread across two different Azure regions with production systems usually in one region and non-production systems in another region. In many customer deployments, the customer has a non-production system that is equivalent to a production system. The customer has production application instances pre-installed on the application layer non-production systems. In case of a failover, the non-production instances would be shut down, the virtual names of the production VMs moved to the non-production VMs (after assigning new IP addresses in DNS), and the pre-installed production instances are started.

### SAP Central Services clusters
SAP Central Services clusters that are using shared disks (Windows), SMB shares
## Non-supported scenarios
-There is a list of scenarios, which are not supported for SAP workload on Azure architectures. **Not supported** means SAP and Microsoft will not be able to support these configurations and need to defer to an eventual involved third-party that provided software to establish such architectures. Two of the categories are:
+There's a list of scenarios that aren't supported for SAP workload on Azure architectures. **Not supported** means SAP and Microsoft will not be able to support these configurations and need to defer to any involved third party that provided software to establish such architectures. Two of the categories are:
-- Storage soft appliances: There is a number of storage soft appliances offered in Azure marketplace. Some of the vendors offer own documentation on how to use those storage soft appliances on Azure related to SAP software. Support of configurations or deployments involving such storage soft appliances needs to be provided by the vendor of those storage soft appliances. This fact is also manifested in [SAP support note #2015553](https://launchpad.support.sap.com/#/notes/2015553)
-- High Availability frameworks: Only Pacemaker and Windows Server Failover Cluster are supported high availability frameworks for SAP workload on Azure. As mentioned earlier, the solution of SIOS `Datakeeper` is described and documented by Microsoft. Nevertheless, the components of SIOS `Datakeeper` need to be supported through SIOS as the vendor providing those components. SAP also listed other certified high availability frameworks in various SAP notes. Some of them were certified by the third-party vendor for Azure as well. Nevertheless, support for configurations using those products need to be provided by the product vendor. Different vendors have different integration into the SAP support processes. You should clarify what support process works best for the particular vendor before deciding to use the product in SAP configurations deployed on Azure.
-- Shared disk clusters where database files are residing on the shared disks are not supported with the exception of maxDB. For all other database, the supported solution is to have separate storage locations instead of an SMB or NFS share or shared disk to configure high-availability scenarios
+- Storage soft appliances: There are a number of storage soft appliances offered in the Azure Marketplace. Some of the vendors offer their own documentation on how to use their storage soft appliances on Azure related to SAP software. Support of configurations or deployments involving such storage soft appliances needs to be provided by the vendor of those storage soft appliances. This fact is also manifested in [SAP support note #2015553](https://launchpad.support.sap.com/#/notes/2015553).
+- High Availability frameworks: Only Pacemaker and Windows Server Failover Cluster are supported high availability frameworks for SAP workload on Azure. As mentioned earlier, the solution of SIOS `Datakeeper` is described and documented by Microsoft. Nevertheless, the components of SIOS `Datakeeper` need to be supported through SIOS as the vendor providing those components. SAP also listed other certified high availability frameworks in various SAP notes. Some of them were certified by the third-party vendor for Azure as well. Nevertheless, support for configurations using those products needs to be provided by the product vendor. Different vendors have different levels of integration into the SAP support processes. You should clarify what support process works best for the particular vendor before deciding to use the product with SAP configurations deployed on Azure.
+- Shared disk clusters where database files are residing on the shared disks aren't supported, with the exception of maxDB. For all other databases, the supported solution is to have separate storage locations instead of an SMB or NFS share or shared disk to configure high-availability scenarios.
-Other scenarios, which are not supported are scenarios like:
+Other scenarios that aren't supported are scenarios like:
- Deployment scenarios that introduce a larger network latency between the SAP application tier and the SAP DBMS tier in SAP's common architecture as shown in NetWeaver, S/4HANA and e.g. `Hybris`. This includes:
  - Deploying one of the tiers on-premises whereas the other tier is deployed in Azure
  - Deploying the SAP application tier of a system in a different Azure region than the DBMS tier
- - Deploying one tier in datacenters that are co-located to Azure and the other tier in Azure, except where such an architecture pattern are provided by an Azure native service
  + Deploying one tier in datacenters that are co-located to Azure and the other tier in Azure, except where such architecture patterns are provided by an Azure native service
  - Deploying network virtual appliances between the SAP application tier and the DBMS layer
  - Leveraging storage that is hosted in datacenters co-located to Azure datacenter for the SAP DBMS tier or SAP global transport directory
  - Deploying the two layers with two different cloud vendors. For example, deploying the DBMS tier in Oracle Cloud Infrastructure and the application tier in Azure
Other scenarios, which are not supported are scenarios like:
- Deployment of SAP databases supported on Linux with database files located in NFS shares on top of ANF with the exception of SAP HANA, Oracle on Oracle Linux, and Db2 on Suse and Red Hat
- Deployment of Oracle DBMS on any other guest OS than Windows and Oracle Linux. See also [SAP support note #2039619](https://launchpad.support.sap.com/#/notes/2039619)
-Scenario(s) that we did not test and therefore have no experience with list like:
+Scenarios that we didn't test and therefore have no experience with include:
- Azure Site Recovery replicating DBMS layer VMs. As a result, we recommend leveraging the database native asynchronous replication functionality for a potential disaster recovery configuration.