Updates from: 09/30/2022 01:15:26
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-domain-services Powershell Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/powershell-create-instance.md
$replicaSetParams = @{
Location = $AzureLocation SubnetId = "/subscriptions/$AzureSubscriptionId/resourceGroups/$ResourceGroupName/providers/Microsoft.Network/virtualNetworks/$VnetName/subnets/DomainServices" }
-$replicaSet = New-AzADDomainServiceReplicaSet @replicaSetParams
+$replicaSet = New-AzADDomainServiceReplicaSetObject @replicaSetParams
$domainServiceParams = @{ Name = $ManagedDomainName
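For context, the fragment above fits into a larger script; a consolidated sketch follows. The exact `New-AzADDomainService` parameter set and the reuse of `$ResourceGroupName` and `$ManagedDomainName` are assumptions, not text from the article, so verify against the full article before running.

```powershell
# Hedged sketch only - assumes the Az.ADDomainServices module and variables
# ($AzureLocation, $AzureSubscriptionId, $ResourceGroupName, $VnetName, $ManagedDomainName)
# defined earlier in the article's script.
$replicaSetParams = @{
    Location = $AzureLocation
    SubnetId = "/subscriptions/$AzureSubscriptionId/resourceGroups/$ResourceGroupName/providers/Microsoft.Network/virtualNetworks/$VnetName/subnets/DomainServices"
}
$replicaSet = New-AzADDomainServiceReplicaSetObject @replicaSetParams

$domainServiceParams = @{
    Name              = $ManagedDomainName
    ResourceGroupName = $ResourceGroupName
    DomainName        = $ManagedDomainName
    ReplicaSet        = $replicaSet
}
New-AzADDomainService @domainServiceParams
```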
active-directory Concept Registration Mfa Sspr Combined https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-registration-mfa-sspr-combined.md
To switch the directory in the Azure portal, click the user account name in the
![External users can switch directory.](media/concept-registration-mfa-sspr-combined/switch-directory.png)
+Or, you can specify a tenant by URL to access security information.
+
+`https://mysignins.microsoft.com/security-info?tenant=<Tenant Name>`
+
+`https://mysignins.microsoft.com/security-info/?tenantId=<Tenant ID>`
+ ## Next steps To get started, see the tutorials to [enable self-service password reset](tutorial-enable-sspr.md) and [enable Azure AD Multi-Factor Authentication](tutorial-enable-azure-mfa.md).
active-directory Concept Sspr Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-sspr-policy.md
The two-gate policy requires two pieces of authentication data, such as an email
* Power BI service administrator * Privileged Authentication administrator * Privileged role administrator
- * SharePoint administrator
* Security administrator * Service support administrator
+ * SharePoint administrator
* Skype for Business administrator * User administrator
active-directory Concept Sspr Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-sspr-writeback.md
Password writeback provides the following features:
To get started with SSPR writeback, complete either one or both of the following tutorials: -- [Tutorial: Enable self-service password reset (SSPR) writeback](tutorial-enable-cloud-sync-sspr-writeback.md)
+- [Tutorial: Enable self-service password reset (SSPR) writeback](tutorial-enable-sspr-writeback.md)
- [Tutorial: Enable Azure Active Directory Connect cloud sync self-service password reset writeback to an on-premises environment (Preview)](tutorial-enable-cloud-sync-sspr-writeback.md) ## Azure AD Connect and cloud sync side-by-side deployment
active-directory How To Certificate Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-certificate-based-authentication.md
For additional details see: [Understanding the certificate revocation process](.
[!INCLUDE [Set-AzureAD](../../../includes/active-directory-authentication-set-trusted-azuread.md)]
+## Step 2: Enable CBA on the tenant
-## Step 2: Configure authentication binding policy
+To enable certificate-based authentication in the Azure portal, complete the following steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) as an Authentication Policy Administrator.
+1. Select **Azure Active Directory**, then choose **Security** from the menu on the left-hand side.
+1. Under **Manage**, select **Authentication methods** > **Certificate-based Authentication**.
+1. Under **Basics**, select **Yes** to enable CBA.
+1. CBA can be enabled for a targeted set of users.
+ 1. Click **All users** to enable all users.
+ 1. Click **Select users** to enable selected users or groups.
+ 1. Click **+ Add users**, select specific users and groups.
+ 1. Click **Select** to add them.
+
+ :::image type="content" border="true" source="./media/how-to-certificate-based-authentication/enable.png" alt-text="Screenshot of how to enable CBA.":::
+
+Once certificate-based authentication is enabled on the tenant, all users in the tenant will see the option to sign in with a certificate. Only users who are enabled for certificate-based authentication will be able to authenticate using the X.509 certificate.
+
+>[!NOTE]
+>The network administrator should allow access to the certauth endpoint for the customer's cloud environment in addition to login.microsoftonline.com. Disable TLS inspection on the certauth endpoint to make sure the client certificate request succeeds as part of the TLS handshake.
++
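The portal steps above can also be scripted. The following is a hedged Microsoft Graph PowerShell sketch, not part of the article: the request shape follows the x509Certificate authentication method configuration, the group ID is a placeholder, and the v1.0 endpoint is an assumption (your tenant may expose this configuration only on the beta endpoint).

```powershell
# Hedged sketch: enable CBA for a targeted group by patching the
# x509Certificate authentication method configuration.
Connect-MgGraph -Scopes "Policy.ReadWrite.AuthenticationMethod"

$body = @{
    "@odata.type"  = "#microsoft.graph.x509CertificateAuthenticationMethodConfiguration"
    state          = "enabled"
    includeTargets = @(
        @{
            targetType             = "group"
            id                     = "<object-id-of-targeted-group>"   # placeholder
            isRegistrationRequired = $false
        }
    )
} | ConvertTo-Json -Depth 5

Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/v1.0/policies/authenticationMethodsPolicy/authenticationMethodConfigurations/x509Certificate" `
    -Body $body -ContentType "application/json"
```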
+## Step 3: Configure authentication binding policy
The authentication binding policy helps determine the strength of authentication as either single-factor or multifactor. An admin can change the default value from single-factor to multifactor and configure custom policy rules by mapping to issuer Subject or policy OID fields in the certificate.
To enable the certificate-based authentication and configure user bindings in th
1. Click **Ok** to save any custom rule.
-## Step 3: Configure username binding policy
+## Step 4: Configure username binding policy
The username binding policy helps determine the user in the tenant. By default, we map Principal Name in the certificate to onPremisesUserPrincipalName in the user object to determine the user.
The final configuration will look like this image:
:::image type="content" border="true" source="./media/how-to-certificate-based-authentication/final.png" alt-text="Screenshot of the final configuration.":::
-## Step 4: Enable CBA on the tenant
-
-To enable the certificate-based authentication in the Azure MyApps portal, complete the following steps:
-
-1. Sign in to the [MyApps portal](https://myapps.microsoft.com/) as an Authentication Policy Administrator.
-1. Select **Azure Active Directory**, then choose **Security** from the menu on the left-hand side.
-1. Under **Manage**, select **Authentication methods** > **Certificate-based Authentication**.
-1. Under **Basics**, select **Yes** to enable CBA.
-1. CBA can be enabled for a targeted set of users.
- 1. Click **All users** to enable all users.
- 1. Click **Select users** to enable selected users or groups.
- 1. Click **+ Add users**, select specific users and groups.
- 1. Click **Select** to add them.
-
- :::image type="content" border="true" source="./media/how-to-certificate-based-authentication/enable.png" alt-text="Screenshot of how to enable CBA.":::
-
-Once certificate-based authentication is enabled on the tenant, all users in the tenant will see the option to sign in with a certificate. Only users who are enabled for certificate-based authentication will be able to authenticate using the X.509 certificate.
-
->[!NOTE]
->The network administrator should allow access to certauth endpoint for the customer's cloud environment in addition to login.microsoftonline.com. Disable TLS inspection on the certauth endpoint to make sure the client certificate request succeeds as part of the TLS handshake.
- ## Step 5: Test your configuration This section covers how to test your certificate and custom authentication binding rules.
active-directory Howto Password Smart Lockout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-password-smart-lockout.md
Smart lockout tracks the last three bad password hashes to avoid incrementing th
> [!NOTE] > Hash tracking functionality isn't available for customers with pass-through authentication enabled, as authentication happens on-premises, not in the cloud.
-Federated deployments that use AD FS 2016 and AF FS 2019 can enable similar benefits using [AD FS Extranet Lockout and Extranet Smart Lockout](/windows-server/identity/ad-fs/operations/configure-ad-fs-extranet-smart-lockout-protection).
+Federated deployments that use AD FS 2016 and AD FS 2019 can enable similar benefits using [AD FS Extranet Lockout and Extranet Smart Lockout](/windows-server/identity/ad-fs/operations/configure-ad-fs-extranet-smart-lockout-protection).
Smart lockout is always on, for all Azure AD customers, with these default settings that offer the right mix of security and usability. Customization of the smart lockout settings, with values specific to your organization, requires Azure AD Premium P1 or higher licenses for your users.
active-directory Tutorial Enable Sspr Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/tutorial-enable-sspr-writeback.md
To set up the appropriate permissions for password writeback to occur, complete
[ ![Set the appropriate permissions in Active Users and Computers for the account that is used by Azure AD Connect](media/tutorial-enable-sspr-writeback/set-ad-ds-permissions-cropped.png) ](media/tutorial-enable-sspr-writeback/set-ad-ds-permissions.png#lightbox)
+1. When ready, select **Apply / OK** to apply the changes.
+1. From the **Permissions** tab, select **Add**.
+1. For **Principal**, select the account that permissions should be applied to (the account used by Azure AD Connect).
+1. In the **Applies to** drop-down list, select **This object and all descendant objects**.
+1. Under *Permissions*, select the box for the following option:
+ * **Unexpire Password**
1. When ready, select **Apply / OK** to apply the changes and exit any open dialog boxes. When you update permissions, it might take up to an hour or more for these permissions to replicate to all the objects in your directory.
active-directory Tutorial Enable Sspr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/tutorial-enable-sspr.md
To finish this tutorial, you need the following resources and privileges:
* A working Azure AD tenant with at least an Azure AD free or trial license enabled. In the Free tier, SSPR only works for cloud users in Azure AD. Password change is supported in the Free tier, but password reset is not. * For later tutorials in this series, you'll need an Azure AD Premium P1 or trial license for on-premises password writeback. * If needed, [create an Azure account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An account with *Global Administrator* privileges.
+* An account with *Global Administrator* or *Authentication Policy Administrator* privileges.
* A non-administrator user with a password you know, like *testuser*. You'll test the end-user SSPR experience using this account in this tutorial. * If you need to create a user, see [Quickstart: Add new users to Azure Active Directory](../fundamentals/add-users-azure-active-directory.md). * A group that the non-administrator user is a member of, like *SSPR-Test-Group*. You'll enable SSPR for this group in this tutorial. A hedged script for creating this test user and group is sketched below.
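The tutorial itself doesn't include a script for this; the following is a hedged Microsoft Graph PowerShell sketch, with placeholder names, UPN domain, and password.

```powershell
# Hedged sketch (not part of the tutorial): create a test user and an SSPR test group.
# All names, the UPN domain, and the password are placeholders.
Connect-MgGraph -Scopes "User.ReadWrite.All", "Group.ReadWrite.All"

$passwordProfile = @{
    Password                      = "<initial-password>"   # placeholder
    ForceChangePasswordNextSignIn = $true
}

$user = New-MgUser -DisplayName "testuser" `
    -UserPrincipalName "testuser@contoso.onmicrosoft.com" `
    -MailNickname "testuser" -AccountEnabled -PasswordProfile $passwordProfile

$group = New-MgGroup -DisplayName "SSPR-Test-Group" -MailNickname "ssprtestgroup" `
    -MailEnabled:$false -SecurityEnabled

New-MgGroupMember -GroupId $group.Id -DirectoryObjectId $user.Id
```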
Azure AD lets you enable SSPR for *None*, *Selected*, or *All* users. This granu
In this tutorial, set up SSPR for a set of users in a test group. Use the *SSPR-Test-Group* and provide your own Azure AD group as needed:
-1. Sign in to the [Azure portal](https://portal.azure.com) using an account with *global administrator* permissions.
+1. Sign in to the [Azure portal](https://portal.azure.com) using an account with *global administrator* or *authentication policy administrator* permissions.
1. Search for and select **Azure Active Directory**, then select **Password reset** from the menu on the left side. 1. From the **Properties** page, under the option *Self service password reset enabled*, choose **Selected**. 1. If your group isn't visible, choose **No groups selected**, browse for and select your Azure AD group, like *SSPR-Test-Group*, and then choose *Select*.
active-directory Permissions Management Trial User Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/permissions-management-trial-user-guide.md
+
+ Title: Trial User Guide - Microsoft Entra Permissions Management
+description: How to get started with your Entra Permissions free trial
+Last updated : 09/01/2022
+# Trial user guide: Microsoft Entra Permissions Management
+
+Welcome to the Microsoft Entra Permissions Management trial user guide!
+
+This user guide helps you make the most of your free trial, including the Permissions Management Cloud Infrastructure Assessment, which helps you identify and remediate the most critical permission risks across your multicloud infrastructure. Using the suggested steps in this user guide from the Microsoft Identity team, you'll learn how Permissions Management can help you protect all your users and data.
+
+## What is Permissions Management?
+
+Permissions Management is a cloud infrastructure entitlement management (CIEM) solution that provides comprehensive visibility into permissions assigned to all identities including both workload and user identities, actions, and resources across multicloud infrastructures in Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP). Permissions Management detects, automatically right-sizes, and continuously monitors unused and excessive permissions.
+
+Permissions Management helps your organization tackle cloud permissions by enabling the capabilities to continuously discover, remediate and monitor the activity of every unique user and workload identity operating in the cloud, alerting security and infrastructure teams to areas of unexpected or excessive risk.
+
+- Get granular cross-cloud visibility - Get a comprehensive view of every action performed by any identity on any resource.
+- Uncover permission risk - Assess permission risk by evaluating the gap between permissions granted and permissions used.
+- Enforce least privilege - Right-size permissions based on usage and activity and enforce permissions on-demand at cloud scale.
+- Monitor and detect anomalies - Detect anomalous permission usage and generate detailed forensic reports.
+
+![Diagram, schematic Description automatically generated](media/permissions-management-trial-user-guide/microsoft-entra-permissions-management-diagram.png)
++
+## Step 1: Set up Permissions Management
+
+Before you enable Permissions Management in your organization:
+- You must have an Azure AD tenant. If you don't already have one, [create a free account](https://azure.microsoft.com/free/).
+- You must be eligible for or have an active assignment to the global administrator role as a user in that tenant.
+
+If the above points are met, continue with the following steps:
+
+1. [Enabling Permissions Management on your Azure AD tenant](../cloud-infrastructure-entitlement-management/onboard-enable-tenant.md#how-to-enable-permissions-management-on-your-azure-ad-tenant)
+2. Use the **Data Collectors** dashboard in Permissions Management to configure data collection settings for your authorization system. [Configure data collection settings](../cloud-infrastructure-entitlement-management/onboard-enable-tenant.md#configure-data-collection-settings).
+
+   Note that for each cloud platform, you will have three options for onboarding:
+
+   **Option 1 (Recommended): Automatically manage** - This option allows subscriptions to be automatically detected and monitored without additional configuration.
+
+   **Option 2**: **Enter authorization systems** - You can specify only certain subscriptions to manage and monitor with MEPM (up to 10 per collector).
+
+   **Option 3**: **Select authorization systems** - This option detects all subscriptions that are accessible by the Cloud Infrastructure Entitlement Management application.
+
+ For information on how to onboard an AWS account, Azure subscription, or GCP project into Permissions Management, select one of the following articles and follow the instructions:
+ - [Onboard an AWS account](../cloud-infrastructure-entitlement-management/onboard-aws.md)
+ - [Onboard a Microsoft Azure subscription](../cloud-infrastructure-entitlement-management/onboard-azure.md)
+ - [Onboard a GCP project](../cloud-infrastructure-entitlement-management/onboard-gcp.md)
+3. [Enable or disable the controller after onboarding is complete](../cloud-infrastructure-entitlement-management/onboard-enable-controller-after-onboarding.md)
+4. [Add an account/subscription/project after onboarding is complete](../cloud-infrastructure-entitlement-management/onboard-add-account-after-onboarding.md)
+
+ **Actions to try:**
+
+ - [View roles/policies and requests for permission](../cloud-infrastructure-entitlement-management/ui-remediation.md#view-and-create-rolespolicies)
+ - [View information about roles/ policies](../cloud-infrastructure-entitlement-management/ui-remediation.md#view-and-create-rolespolicies)
+ - [View information about active and completed tasks](../cloud-infrastructure-entitlement-management/ui-tasks.md)
+ - [Create a role/policy](../cloud-infrastructure-entitlement-management/how-to-create-role-policy.md)
+ - [Clone a role/policy](../cloud-infrastructure-entitlement-management/how-to-clone-role-policy.md)
+ - [Modify a role/policy](../cloud-infrastructure-entitlement-management/how-to-modify-role-policy.md)
+ - [Delete a role/policy](../cloud-infrastructure-entitlement-management/how-to-delete-role-policy.md)
+ - [Attach and detach policies for Amazon Web Services (AWS) identities](../cloud-infrastructure-entitlement-management/how-to-attach-detach-permissions.md)
+ - [Add and remove roles and tasks for Microsoft Azure and Google Cloud Platform (GCP) identities](../cloud-infrastructure-entitlement-management/how-to-add-remove-role-task.md)
+ - [Revoke access to high-risk and unused tasks or assign read-only status for Microsoft Azure and Google Cloud Platform (GCP) identities](../cloud-infrastructure-entitlement-management/how-to-revoke-task-readonly-status.md)
+ - [Create or approve a request for permissions](../cloud-infrastructure-entitlement-management/how-to-create-approve-privilege-request.md) Request permissions on-demand for one-time use or on a schedule. These permissions will automatically be revoked at the end of the requested period.
+
+## Step 2: Discover & assess
+
+Improve your security posture by getting comprehensive and granular visibility to enforce the principle of least privilege access across your entire multicloud environment. The Permissions Management dashboard gives you an overview of your permission profile and locates where the riskiest identities and resources are across your digital estate.
+
+The dashboard leverages the Permission Creep Index, which is a single and unified metric, ranging from 0 to 100, that calculates the gap between permissions granted and permissions used over a specific period. The higher the gap, the higher the index and the larger the potential attack surface. The Permission Creep Index only considers high-risk actions, meaning any action that can cause data leakage, service disruption or degradation, or a security posture change. Permissions Management creates unique activity profiles for each identity and resource, which are used as a baseline to detect anomalous behaviors.
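To make the granted-versus-used gap concrete, here is a toy calculation. It is purely illustrative and is not the product's actual PCI formula, which also weights high-risk actions and per-identity activity profiles.

```powershell
# Toy illustration only - NOT the real Permission Creep Index calculation.
# It just shows how a 0-100 score can express the gap between granted and used permissions.
$permissionsGranted = 1200   # made-up numbers
$permissionsUsed    = 90

$gap = [math]::Round((($permissionsGranted - $permissionsUsed) / $permissionsGranted) * 100)
"Naive permissions gap score: $gap / 100 (higher = larger potential attack surface)"
```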
+
+1. [View risk metrics in your authorization system](../cloud-infrastructure-entitlement-management/ui-dashboard.md#view-metrics-related-to-avoidable-risk) in the Permissions Management Dashboard. This information is available for Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).
+    1. View metrics related to avoidable risk - these metrics allow the Permissions Management administrator to identify areas where they can reduce risks related to the principle of least privilege. Information includes [the Permissions Creep Index (PCI)](../cloud-infrastructure-entitlement-management/ui-dashboard.md#the-pci-heat-map) and [Analytics Dashboard](../cloud-infrastructure-entitlement-management/usage-analytics-home.md).
+
+
+ 1. Understand the [components of the Permissions Management Dashboard.](../cloud-infrastructure-entitlement-management/ui-dashboard.md#components-of-the-permissions-management-dashboard)
+
+2. View data about the activity in your authorization system
+
+ 1. [View user data on the PCI heat map](../cloud-infrastructure-entitlement-management/product-dashboard.md#view-user-data-on-the-pci-heat-map).
+ > [!NOTE]
+ > The higher the PCI, the higher the risk.
+
+ 2. [View information about users, roles, resources, and PCI trends](../cloud-infrastructure-entitlement-management/product-dashboard.md#view-information-about-users-roles-resources-and-pci-trends)
+ 3. [View identity findings](../cloud-infrastructure-entitlement-management/product-dashboard.md#view-identity-findings)
+ 4. [View resource findings](../cloud-infrastructure-entitlement-management/product-dashboard.md#view-resource-findings)
+3. [Configure your settings for data collection](../cloud-infrastructure-entitlement-management/product-data-sources.md) - use the **Data Collectors** dashboard in Permissions Management to view and configure settings for collecting data from your authorization systems.
+4. [View organizational and personal information](../cloud-infrastructure-entitlement-management/product-account-settings.md) - the **Account settings** dashboard in Permissions Management allows you to view personal information, passwords, and account preferences.
+5. [Select group-based permissions settings](../cloud-infrastructure-entitlement-management/how-to-create-group-based-permissions.md)
+6. [View information about identities, resources and tasks](../cloud-infrastructure-entitlement-management/usage-analytics-home.md) - the **Analytics** dashboard displays detailed information about:
+ 1. **Users**: Tracks assigned permissions and usage by users. For more information, see View analytic information about users.
+ 2. **Groups**: Tracks assigned permissions and usage of the group and the group members. For more information, see View analytic information about groups
+ 3. **Active Resources**: Tracks resources that have been used in the last 90 days. For more information, see View analytic information about active resources
+ 4. **Active Tasks**: Tracks tasks that have been performed in the last 90 days. For more information, see View analytic information about active tasks
+ 5. **Access Keys**: Tracks the permission usage of access keys for a given user. For more information, see View analytic information about access keys
+ 6. **Serverless Functions**: Tracks assigned permissions and usage of the serverless functions for AWS only. For more information, see View analytic information about serverless functions
+
+ System administrators can use this information to make decisions about granting permissions and reducing risk on unused permissions.
+
+## Step 3: Remediate & manage
+
+Right-size excessive and/or unused permissions in only a few clicks. Avoid errors caused by manual processes and implement automatic remediation of all unused permissions for a predetermined set of identities on a regular basis. You can also grant new permissions on-demand for just-in-time access to specific cloud resources.
+
+There are two facets to removing unused permissions: least privilege policy creation (remediation) and permissions-on-demand. With remediation, an administrator can create policies that remove unused permissions (also known as right-sizing permissions) to achieve least privilege across their multicloud environment.
+
+- [Manage roles/policies and permissions requests using the Remediation dashboard](../cloud-infrastructure-entitlement-management/ui-remediation.md).
+
+ The dashboard includes six subtabs:
+
+ - **Roles/Policies**: Use this subtab to perform Create Read Update Delete (CRUD) operations on roles/policies.
+      - **Role/Policy Name** - Displays the name of the role or the AWS policy
+      - Note: An exclamation point (!) circled in red means the role or AWS policy has not been used.
+      - **Role Type** - Displays the type of role or AWS policy
+ - **Permissions**: Use this subtab to perform Read Update Delete (RUD) on granted permissions.
+    - **Role/Policy Template**: Use this subtab to create a template for roles/policies.
+ - **Requests**: Use this subtab to view approved, pending, and processed Permission on Demand (POD) requests.
+    - **My Requests**: Use this subtab to manage the lifecycle of POD requests that you created or that need your approval.
+ - **Settings**: Use this subtab to select **Request Role/Policy Filters**, **Request Settings**, and **Auto-Approve** settings.
+
+**Best Practices for Remediation:**
+
+- **Creating activity-based roles/policies:** High-risk identities will be monitored and right-sized based on their historical activity. Leaving unused high-risk permissions assigned to identities creates unnecessary risk.
+- **Removing direct role assignments:** Permissions Management generates reports based on role assignments. In cases where high-risk roles are directly assigned, the Remediation permissions tab can query those identities and remove direct role assignments.
+- **Assigning read-only permissions:** Identities that are inactive or have high-risk permissions to production environments can be assigned read-only status. Access to production environments can be governed via Permissions On-demand.
+
+**Best Practices for Permissions On-demand:**
+
+- **Requesting Delete Permissions:** No user will have delete permissions unless they request them and are approved.
+- **Requesting Privileged Access:** High-privileged access is only granted through just-enough permissions and just-in-time access.
+- **Requesting Periodic Access:** Schedule recurring daily, weekly, or monthly permissions that are time-bound and revoked at the end of the period.
+- Manage users, roles and their access levels with the User management dashboard.
+
+ **Actions to try:**
+
+ - [Manage users](../cloud-infrastructure-entitlement-management/ui-user-management.md#manage-users)
+ - [Manage groups](../cloud-infrastructure-entitlement-management/ui-user-management.md#manage-groups)
+ - [Select group-based permissions settings](../cloud-infrastructure-entitlement-management/how-to-create-group-based-permissions.md)
+
+## Step 4: Monitor & alert
+
+Prevent data breaches caused by misuse and malicious exploitation of permissions with anomaly and outlier detection that alerts on any suspicious activity. Permissions Management continuously updates your Permission Creep Index and flags any incident, then immediately informs you with alerts via email. To further support rapid investigation and remediation, you can generate context-rich forensic reports around identities, actions, and resources.
+
+- Use queries to view information about user access with the **Audit** dashboard in Permissions Management. You can get an overview of queries a Permissions Management user has created to review how users access their authorization systems and accounts. The following options display at the top of the **Audit** dashboard:
+- A tab for each existing query. Select the tab to see details about the query.
+- **New Query**: Select the tab to create a new query.
+- **New tab (+)**: Select the tab to add a **New Query** tab.
+- **Saved Queries**: Select to view a list of saved queries.
+
+ **Actions to try:**
+
+ - [Use a query to view information](../cloud-infrastructure-entitlement-management/ui-audit-trail.md)
+ - [Create a custom query](../cloud-infrastructure-entitlement-management/how-to-create-custom-queries.md)
+ - [Generate an on-demand report from a query](../cloud-infrastructure-entitlement-management/how-to-audit-trail-results.md)
+ - [Filter and query user activity](../cloud-infrastructure-entitlement-management/product-audit-trail.md)
+
+Use the **Activity triggers** dashboard to view information and set alerts and triggers.
+
+- Set activity alerts and triggers
+
+  Our customizable machine learning-powered anomaly and outlier detection alerts will notify you of any suspicious activity, such as deviations in usage profiles or abnormal access times. Alerts can cover permissions usage, access to resources, indicators of compromise, and insider threats, or help you track previous incidents.
+
+ **Actions to try**
+
+ - [View information about alerts and alert triggers](../cloud-infrastructure-entitlement-management/ui-triggers.md)
+ - [Create and view activity alerts and alert triggers](../cloud-infrastructure-entitlement-management/how-to-create-alert-trigger.md)
+ - [Create and view rule-based anomaly alerts and anomaly triggers](../cloud-infrastructure-entitlement-management/product-rule-based-anomalies.md)
+ - [Create and view statistical anomalies and anomaly triggers](../cloud-infrastructure-entitlement-management/product-statistical-anomalies.md)
+ - [Create and view permission analytics triggers](../cloud-infrastructure-entitlement-management/product-permission-analytics.md)
+
+**Best Practices for Custom Alerts:**
+
+- Permission assignments done outside of approved administrators
+ - Examples:
+
+ Example: Any activity done by root:
+
+ ![Diagram, Any activity done by root user in AWS.](media/permissions-management-trial-user-guide/custom-alerts-1.png)
+
+ Alert for monitoring any direct Azure role assignment
+
+ ![Diagram, Alert for monitoring any direct Azure role assignment done by anyone other than Admin user.](media/permissions-management-trial-user-guide/custom-alerts-2.png)
+
+- Access to critical sensitive resources
+
+ Example: Alert for monitoring any action on Azure resources
+
+ ![Diagram, Alert for monitoring any action on Azure resources.](media/permissions-management-trial-user-guide/custom-alerts-3.png)
+
+- Use of break glass accounts like root in AWS, global admin in Azure AD accessing subscriptions, etc.
+
+ Example: BreakGlass users should be used for emergency access only.
+
+ ![Diagram, Example of break glass account users used for emergency access only.](media/permissions-management-trial-user-guide/custom-alerts-4.png)
+
+- Create and view reports
+
+  To support rapid remediation, you can set up security reports to be delivered at custom intervals. Permissions Management has various system report types available that capture specific sets of data by cloud infrastructure (AWS, Azure, GCP), by account/subscription/project, and more. Reports are fully customizable and can be delivered via email at pre-configured intervals.
+
+ These reports enable you to:
+
+ - Make timely decisions.
+ - Analyze trends and system/user performance.
+ - Identify trends in data and high-risk areas so that management can address issues more quickly and improve their efficiency.
+ - Automate data analytics in an actionable way.
+  - Ensure compliance with audit requirements for periodic reviews of **who has access to what**.
+ - Look at views into **Separation of Duties** for security hygiene to determine who has admin permissions.
+  - See data for **identity governance** to ensure inactive users are decommissioned because they left the company, and to remove vendor accounts that have been left behind, old consultant accounts, or users who, as part of the Joiner/Mover/Leaver process, have moved on to another role and are no longer using their access. Consider this a fail-safe to ensure dormant accounts are removed.
+ - Identify over-permissioned access to later use the Remediation to pursue **Zero Trust and least privileges.**
+
+ **Example of** [**Permissions Management Report**](https://microsoft.sharepoint.com/:v:/t/MicrosoftEntraPermissionsManagementAssets/EQWmUsMsdkZEnFVv-M9ZoagBd4B6JUQ2o7zRTupYrfxbGA)
+
+ **Actions to try**
+ - [View system reports in the Reports dashboard](../cloud-infrastructure-entitlement-management/product-reports.md)
+ - [View a list and description of system reports](../cloud-infrastructure-entitlement-management/all-reports.md)
+ - [Generate and view a system report](../cloud-infrastructure-entitlement-management/report-view-system-report.md)
+ - [Create, view, and share a custom report](../cloud-infrastructure-entitlement-management/report-create-custom-report.md)
+ - [Generate and download the Permissions analytics report](../cloud-infrastructure-entitlement-management/product-permissions-analytics-reports.md)
+
+**Key Reports to Monitor:**
+
+- **Permissions Analytics Report:** lists the key permission risks including Super identities, Inactive identities, Over-provisioned active identities, and more
+- **Group entitlements and Usage reports:** Provides guidance on cleaning up directly assigned permissions
+- **Access Key Entitlements and Usage reports**: Identifies high-risk service principals with old secrets that haven't been rotated every 90 days (best practice) or decommissioned due to lack of use (as recommended by the Cloud Security Alliance).
+
+## Next steps
+
+For more information about Permissions Management, see:
+
+**Microsoft Learn**: [Permissions management](../cloud-infrastructure-entitlement-management/index.yml).
+
+**Datasheet:** <https://aka.ms/PermissionsManagementDataSheet>
+
+**Solution Brief:** <https://aka.ms/PermissionsManagementSolutionBrief>
+
+**White Paper:** <https://aka.ms/CIEMWhitePaper>
+
+**Infographic:** <https://aka.ms/PermissionRisksInfographic>
+
+**Security paper:** [2021 State of Cloud Permissions Risks](https://scistorageprod.azureedge.net/assets/2021%20State%20of%20Cloud%20Permission%20Risks.pdf?sv=2019-07-07&sr=b&sig=Sb17HibpUtJm2hYlp6GYlNngGiSY5GcIs8IfpKbRlWk%3D&se=2022-05-27T20%3A37%3A22Z&sp=r)
+
+**Permissions Management Glossary:** <https://aka.ms/PermissionsManagementGlossary>
active-directory Apple Sso Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/apple-sso-plugin.md
Common settings:
- `https://sts.windows.net` - `https://login.partner.microsoftonline.cn` - `https://login.chinacloudapi.cn`
- - `https://login.microsoftonline.de`
- `https://login.microsoftonline.us` - `https://login.usgovcloudapi.net` - `https://login-us.microsoftonline.com`
active-directory Multi Service Web App Access Microsoft Graph As User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/multi-service-web-app-access-microsoft-graph-as-user.md
public class IndexModel : PageModel
# [Node.js](#tab/programming-language-nodejs)
-The web app gets the user's access token from the incoming requests header, which is then passed down to Microsoft Graph client to make an authenticated request to the `/me` endpoint.
+Using the [microsoft-identity-express](https://github.com/Azure-Samples/microsoft-identity-express) package, the web app gets the user's access token from the incoming request headers. microsoft-identity-express detects that the web app is hosted on App Service and gets the access token from the App Service authentication/authorization module. The access token is then passed down to the Microsoft Graph SDK client to make an authenticated request to the `/me` endpoint.
To see this code as part of a sample application, see *graphController.js* in the [sample on GitHub](https://github.com/Azure-Samples/ms-identity-easyauth-nodejs-storage-graphapi/tree/main/2-WebApp-graphapi-on-behalf).
+> [!NOTE]
+> The microsoft-identity-express package isn't required in your web app for basic authentication/authorization or to authenticate requests with Microsoft Graph. It's possible to [securely call downstream APIs](../../app-service/tutorial-auth-aad.md#call-api-securely-from-server-code) with only the App Service authentication/authorization module enabled.
+>
+> However, the App Service authentication/authorization module is designed for more basic authentication scenarios. Later, when your web app needs to handle more complex scenarios, you can disable the App Service authentication/authorization module, and microsoft-identity-express will already be a part of your app.
+ ```nodejs const graphHelper = require('../utils/graphHelper');
If you're finished with this tutorial and no longer need the web app or associat
## Next steps > [!div class="nextstepaction"]
-> [App service accesses Microsoft Graph as the app](multi-service-web-app-access-microsoft-graph-as-app.md)
+> [App service accesses Microsoft Graph as the app](multi-service-web-app-access-microsoft-graph-as-app.md)
active-directory Azure Ad Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/azure-ad-account.md
Azure AD account is an identity provider option for your self-service sign-up us
![Azure AD account in a self-service sign-up user flow](media/azure-ad-account/azure-ad-account-user-flow.png) ## Verifying the application's publisher domain
-As of November 2020, new application registrations show up as unverified in the user consent prompt unless [the application's publisher domain is verified](../develop/howto-configure-publisher-domain.md) ***and*** the company's identity has been verified with the Microsoft Partner Network and associated with the application. ([Learn more](../develop/publisher-verification-overview.md) about this change.) Note that for Azure AD user flows, the publisher's domain appears only when using a [Microsoft account](microsoft-account.md) or other Azure AD tenant as the identity provider. To meet these new requirements, do the following:
+As of November 2020, new application registrations show up as unverified in the user consent prompt unless [the application's publisher domain is verified](../develop/howto-configure-publisher-domain.md), ***and*** the company's identity has been verified with the Microsoft Partner Network and associated with the application. ([Learn more](../develop/publisher-verification-overview.md) about this change.) For Azure AD user flows, the publisher's domain appears only when using a [Microsoft account](microsoft-account.md) or other Azure AD tenant as the identity provider. To meet these new requirements, follow these steps:
1. [Verify your company identity using your Microsoft Partner Network (MPN) account](/partner-center/verification-responses). This process verifies information about your company and your company's primary contact. 1. Complete the publisher verification process to associate your MPN account with your app registration using one of the following options:
As of November 2020, new application registrations show up as unverified in the
## Next steps - [Add Azure Active Directory B2B collaboration users](add-users-administrator.md)-- [Add self-service sign-up to an app](self-service-sign-up-user-flow.md)
+- [Add self-service sign-up to an app](self-service-sign-up-user-flow.md)
active-directory Automate Provisioning To Applications Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/automate-provisioning-to-applications-introduction.md
Thousands of organizations are running Azure AD cloud-hosted services, with its
| What | From | To | Read | | - | - | - | - | | Employees and contractors| HR systems| AD and Azure AD| [Connect identities with your system of record](automate-provisioning-to-applications-solutions.md) |
-| Existing AD users and groups| AD| Azure AD| [Synchronize identities between Azure AD and Active Directory](automate-provisioning-to-applications-solutions.md) |
+| Existing AD users and groups| AD DS| Azure AD| [Synchronize identities between Azure AD and Active Directory](automate-provisioning-to-applications-solutions.md) |
| Users, groups| Azure AD| SaaS and on-prem apps| [Automate provisioning to non-Microsoft applications](../governance/entitlement-management-organization.md) | | Access rights| Azure AD Identity Governance| SaaS and on-prem apps| [Entitlement management](../governance/entitlement-management-overview.md) | | Existing users and groups| AD, SaaS and on-prem apps| Identity governance (so I can review them)| [Azure AD Access reviews](../governance/access-reviews-overview.md) |
active-directory Automate Provisioning To Applications Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/automate-provisioning-to-applications-solutions.md
Previously updated : 09/23/2022 Last updated : 09/29/2022 - it-pro
The Azure AD provisioning service enables organizations to [bring identities fro
### On-premises HR + joining multiple data sources
-To create a full user profile for an employee identity, organizations often merge information from multiple HR systems, databases, and other user data stores. MIM provides a rich set of [connectors](https://learn.microsoft.com/microsoft-identity-manager/supported-management-agents) and integration solutions interoperating with heterogeneous platforms.
+To create a full user profile for an employee identity, organizations often merge information from multiple HR systems, databases, and other user data stores. MIM provides a rich set of [connectors](https://learn.microsoft.com/microsoft-identity-manager/supported-management-agents) and integration solutions interoperating with heterogeneous platforms both on-premises and in the cloud.
MIM offers [rule extension](/previous-versions/windows/desktop/forefront-2010/ms698810(v=vs.100)?redirectedfrom=MSDN) and [workflow capabilities](https://microsoft.github.io/MIMWAL/) features for advanced scenarios requiring data transformation and consolidation from multiple sources. These connectors, rule extensions, and workflow capabilities enable organizations to aggregate user data in the MIM metaverse to form a single identity for each user. The identity can be [provisioned into downstream systems](/microsoft-identity-manager/microsoft-identity-manager-2016-supported-platforms) such as AD DS.
The scenarios are divided by the direction of synchronization needed, and are li
Use the numbered sections in the next two section to cross reference the following table.
-**Synchronize identities from AD into Azure AD**
+**Synchronize identities from AD DS into Azure AD**
1. For users in AD that need access to Office 365 or other applications that are connected to Azure AD, Azure AD Connect cloud sync is the first solution to explore. It provides a lightweight solution to create users in Azure AD, manage password resets, and synchronize groups. Configuration and management are primarily done in the cloud, minimizing your on-premises footprint. It provides high-availability and automatic failover, ensuring password resets and synchronization continue, even if there's an issue with on-premises servers. 1. For complex, large-scale AD to Azure AD sync needs such as synchronizing groups over 50,000 and device sync, customers can use Azure AD Connect sync to meet their needs.
-**Synchronize identities from Azure AD into AD**
+**Synchronize identities from Azure AD into AD DS**
As customers transition identity management to the cloud, more users and groups are created directly in Azure AD. However, they still need a presence on-premises in AD DS to access various resources.
-3. When an external user from a partner organization is created in Azure AD using B2B, MIM can automatically provision them [into AD](/microsoft-identity-manager/microsoft-identity-manager-2016-graph-b2b-scenario) and give those guests access to on-premises Windows-Integrated Authentication or Kerberos-based applications.
+3. When an external user from a partner organization is created in Azure AD using B2B, MIM can automatically provision them [into AD DS](/microsoft-identity-manager/microsoft-identity-manager-2016-graph-b2b-scenario) and give those guests access to [on-premises Windows-Integrated Authentication or Kerberos-based applications](https://learn.microsoft.com/azure/active-directory/external-identities/hybrid-cloud-to-on-premises). Alternatively, customers can use [PowerShell scripts](https://github.com/Azure-Samples/B2B-to-AD-Sync) to automate the creation of guest accounts on-premises.
1. When a group is created in Azure AD, it can be automatically synchronized to AD DS using [Azure AD Connect sync](../hybrid/how-to-connect-group-writeback-v2.md).
As customers transition identity management to the cloud, more users and groups
| 1 |Users, groups| AD DS| Azure AD| [Azure AD Connect Cloud Sync](https://learn.microsoft.com/azure/active-directory/cloud-sync/what-is-cloud-sync) | | 2 |Users, groups, devices| AD DS| Azure AD| [Azure AD Connect Sync](https://learn.microsoft.com/azure/active-directory/hybrid/whatis-azure-ad-connect) | | 3 |Groups| Azure AD| AD DS| [Azure AD Connect Sync](../hybrid/how-to-connect-group-writeback-v2.md) |
-| 4 |Guest accounts| Azure AD| AD DS| [MIM](/microsoft-identity-manager/microsoft-identity-manager-2016-graph-b2b-scenario) |
+| 4 |Guest accounts| Azure AD| AD DS| [MIM](/microsoft-identity-manager/microsoft-identity-manager-2016-graph-b2b-scenario), [PowerShell](https://github.com/Azure-Samples/B2B-to-AD-Sync)|
| 5 |Users, groups| Azure AD| Managed AD| [Azure AD Domain Services](https://azure.microsoft.com/services/active-directory-ds/) | The table depicts common scenarios and the recommended technology.
After users are provisioned into Azure AD, use Lifecycle Workflows (LCW) to auto
### Reconcile changes made directly in the target system
-Organizations often need a complete audit trail of what users have access to applications containing data subject to regulation. To provide an audit trail, any access provided to a user directly must be traceable through the system of record. MIM provides the [reconciliation capabilities](/microsoft-identity-manager/mim-how-provision-users-adds) to detect changes made directly in a target system and roll back the changes. In addition to detecting changes in target applications, MIM can import identities from third party applications to Azure AD. These applications often augment the set of user records that originated in the HR system.
+Organizations often need a complete audit trail of what users have access to applications containing data subject to regulation. To provide an audit trail, any access provided to a user directly must be traceable through the system of record. MIM provides the reconciliation capabilities to detect changes made directly in a target system and roll back the changes. In addition to detecting changes in target applications, MIM can import identities from third party applications to Azure AD. These applications often augment the set of user records that originated in the HR system.
### Next steps
active-directory Security Operations Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-applications.md
Monitor changes to application configuration. Specifically, configuration change
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | |-|-|-|-|-|
-| Dangling URI| High| Azure AD Logs and Application Registration| Service-Core Directory, Category-ApplicationManagement<br>Activity: Update Application<br>Success - Property Name AppAddress| For example, look for dangling URIs that point to a domain name that no longer exists or one that you don't explicitly own.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/URLAddedtoApplicationfromUnknownDomain.yaml)<br><br>[Link to Sigma repo](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
+| Dangling URI| High| Azure AD Logs and Application Registration| Service-Core Directory, Category-ApplicationManagement<br>Activity: Update Application<br>Success - Property Name AppAddress| For example, look for dangling URIs that point to a domain name that no longer exists or one that you don't explicitly own.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/URLAddedtoApplicationfromUnknownDomain.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) |
| Redirect URI configuration changes| High| Azure AD logs| Service-Core Directory, Category-ApplicationManagement<br>Activity: Update Application<br>Success - Property Name AppAddress| Look for URIs not using HTTPS*, URIs with wildcards at the end or the domain of the URL, URIs that are NOT unique to the application, URIs that point to a domain you don't control.<br>[Microsoft Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/AuditLogs/ApplicationRedirectURLUpdate.yaml)<br><br>[Sigma rules](https://github.com/SigmaHQ/sigma/tree/master/rules/cloud/azure) | Alert when these changes are detected.
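As a companion to the detections in the table above, a hedged Microsoft Graph PowerShell sketch that lists app registrations with non-HTTPS web redirect URIs (one of the conditions called out for redirect URI changes) is shown below; it's illustrative and doesn't replace the linked Sentinel or Sigma content.

```powershell
# Hedged sketch: flag app registrations whose web redirect URIs don't use HTTPS.
# Requires the Microsoft Graph PowerShell SDK and the Application.Read.All permission.
Connect-MgGraph -Scopes "Application.Read.All"

Get-MgApplication -All | ForEach-Object {
    $insecureUris = @($_.Web.RedirectUris | Where-Object { $_ -and ($_ -notlike "https://*") })
    if ($insecureUris.Count -gt 0) {
        [pscustomobject]@{
            DisplayName  = $_.DisplayName
            AppId        = $_.AppId
            InsecureUris = $insecureUris -join "; "
        }
    }
}
```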
active-directory Lifecycle Workflow Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-tasks.md
Lifecycle Workflows come with many pre-configured tasks that are designed to aut
Lifecycle Workflow's built-in tasks each include an identifier, known as **taskDefinitionID**, and can be used to create either new workflows from scratch, or inserted into workflow templates so that they fit the needs of your organization. For more information on templates available for use with Lifecycle Workflows, see: [Lifecycle Workflow Templates](lifecycle-workflow-templates.md).
-Lifecycle Workflows currently support the following tasks:
-|Task |taskDefinitionID |
-|||
-|[Send welcome email to new hire](lifecycle-workflow-tasks.md#send-welcome-email-to-new-hire) | 70b29d51-b59a-4773-9280-8841dfd3f2ea |
-|[Generate Temporary Access Pass and send via email to user's manager](lifecycle-workflow-tasks.md#generate-temporary-access-pass-and-send-via-email-to-users-manager) | 1b555e50-7f65-41d5-b514-5894a026d10d |
-|[Add user to groups](lifecycle-workflow-tasks.md#add-user-to-groups) | 22085229-5809-45e8-97fd-270d28d66910 |
-|[Add user to teams](lifecycle-workflow-tasks.md#add-user-to-teams) | e440ed8d-25a1-4618-84ce-091ed5be5594 |
-|[Enable user account](lifecycle-workflow-tasks.md#enable-user-account) | 6fc52c9d-398b-4305-9763-15f42c1676fc |
-|[Run a custom task extension](lifecycle-workflow-tasks.md#run-a-custom-task-extension) | 4262b724-8dba-4fad-afc3-43fcbb497a0e |
-|[Disable user account](lifecycle-workflow-tasks.md#disable-user-account) | 1dfdfcc7-52fa-4c2e-bf3a-e3919cc12950 |
-|[Remove user from selected group](lifecycle-workflow-tasks.md#remove-user-from-selected-groups) | 1953a66c-751c-45e5-8bfe-01462c70da3c |
-|[Remove users from all groups](lifecycle-workflow-tasks.md#remove-users-from-all-groups) | b3a31406-2a15-4c9a-b25b-a658fa5f07fc |
-|[Remove user from teams](lifecycle-workflow-tasks.md#remove-user-from-teams) | 06aa7acb-01af-4824-8899-b14e5ed788d6 |
-|[Remove user from all teams](lifecycle-workflow-tasks.md#remove-users-from-all-teams) | 81f7b200-2816-4b3b-8c5d-dc556f07b024 |
-|[Remove all license assignments from user](lifecycle-workflow-tasks.md#remove-all-license-assignments-from-user) | 8fa97d28-3e52-4985-b3a9-a1126f9b8b4e |
-|[Delete user](lifecycle-workflow-tasks.md#delete-user) | 8d18588d-9ad3-4c0f-99d0-ec215f0e3dff |
-|[Send email to manager before user last day](lifecycle-workflow-tasks.md#send-email-to-manager-before-user-last-day) | 52853a3e-f4e5-4eb8-bb24-1ac09a1da935 |
-|[Send email on users last day](lifecycle-workflow-tasks.md#send-email-on-users-last-day) | 9c0a1eaf-5bda-4392-9d9e-6e155bb57411 |
-|[Send offboarding email to users manager after their last day](lifecycle-workflow-tasks.md#send-offboarding-email-to-users-manager-after-their-last-day) | 6f22ddd4-b3a5-47a4-a846-0d7c201a49ce |
+Lifecycle Workflows currently support the following tasks:
+
+|Task |taskDefinitionID |Category |
+||||
+|[Send welcome email to new hire](lifecycle-workflow-tasks.md#send-welcome-email-to-new-hire) | 70b29d51-b59a-4773-9280-8841dfd3f2ea | Joiner |
+|[Generate Temporary Access Pass and send via email to user's manager](lifecycle-workflow-tasks.md#generate-temporary-access-pass-and-send-via-email-to-users-manager) | 1b555e50-7f65-41d5-b514-5894a026d10d | Joiner |
+|[Add user to groups](lifecycle-workflow-tasks.md#add-user-to-groups) | 22085229-5809-45e8-97fd-270d28d66910 | Joiner, Leaver |
+|[Add user to teams](lifecycle-workflow-tasks.md#add-user-to-teams) | e440ed8d-25a1-4618-84ce-091ed5be5594 | Joiner, Leaver |
+|[Enable user account](lifecycle-workflow-tasks.md#enable-user-account) | 6fc52c9d-398b-4305-9763-15f42c1676fc | Joiner, Leaver |
+|[Run a custom task extension](lifecycle-workflow-tasks.md#run-a-custom-task-extension) | 4262b724-8dba-4fad-afc3-43fcbb497a0e | Joiner, Leaver |
+|[Disable user account](lifecycle-workflow-tasks.md#disable-user-account) | 1dfdfcc7-52fa-4c2e-bf3a-e3919cc12950 | Leaver |
+|[Remove user from selected group](lifecycle-workflow-tasks.md#remove-user-from-selected-groups) | 1953a66c-751c-45e5-8bfe-01462c70da3c | Leaver |
+|[Remove users from all groups](lifecycle-workflow-tasks.md#remove-users-from-all-groups) | b3a31406-2a15-4c9a-b25b-a658fa5f07fc | Leaver |
+|[Remove user from teams](lifecycle-workflow-tasks.md#remove-user-from-teams) | 06aa7acb-01af-4824-8899-b14e5ed788d6 | Leaver |
+|[Remove user from all teams](lifecycle-workflow-tasks.md#remove-users-from-all-teams) | 81f7b200-2816-4b3b-8c5d-dc556f07b024 | Leaver |
+|[Remove all license assignments from user](lifecycle-workflow-tasks.md#remove-all-license-assignments-from-user) | 8fa97d28-3e52-4985-b3a9-a1126f9b8b4e | Leaver |
+|[Delete user](lifecycle-workflow-tasks.md#delete-user) | 8d18588d-9ad3-4c0f-99d0-ec215f0e3dff | Leaver |
+|[Send email to manager before user last day](lifecycle-workflow-tasks.md#send-email-to-manager-before-user-last-day) | 52853a3e-f4e5-4eb8-bb24-1ac09a1da935 | Leaver |
+|[Send email on users last day](lifecycle-workflow-tasks.md#send-email-on-users-last-day) | 9c0a1eaf-5bda-4392-9d9e-6e155bb57411 | Leaver |
+|[Send offboarding email to users manager after their last day](lifecycle-workflow-tasks.md#send-offboarding-email-to-users-manager-after-their-last-day) | 6f22ddd4-b3a5-47a4-a846-0d7c201a49ce | Leaver |
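If you prefer to retrieve these identifiers programmatically, the following is a hedged sketch against the Lifecycle Workflows beta endpoint; the endpoint path and returned property names are assumptions based on the Lifecycle Workflows API rather than text from this article.

```powershell
# Hedged sketch: list the built-in Lifecycle Workflows task definitions,
# including the taskDefinitionID and category values shown in the table above.
Connect-MgGraph -Scopes "LifecycleWorkflows.Read.All"

$response = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/taskDefinitions"

$response.value | ForEach-Object {
    [pscustomobject]@{
        DisplayName      = $_.displayName
        TaskDefinitionId = $_.id
        Category         = $_.category
    }
}
```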
## Common task parameters (preview)
active-directory Set Employee Leave Date Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/set-employee-leave-date-time.md
In delegated scenarios, the signed-in user needs the Global Administrator role t
Updating the employeeLeaveDateTime requires the User-LifeCycleInfo.ReadWrite.All application permission.
->[!NOTE]
-> The User-LifeCycleInfo.ReadWrite.All permissions is currently hidden and cannot be configured in Graph Explorer or the API permission blade of app registrations.
- ## Set employeeLeaveDateTime via PowerShell To set the employeeLeaveDateTime for a user using PowerShell enter the following information:
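The article's own snippet isn't reproduced in this excerpt; as one hedged way to set the property, you can patch the beta users endpoint with Microsoft Graph PowerShell. The user ID and date below are placeholders.

```powershell
# Hedged sketch: set employeeLeaveDateTime on the beta user resource.
Connect-MgGraph -Scopes "User-LifeCycleInfo.ReadWrite.All"

$userId = "00000000-0000-0000-0000-000000000000"                         # placeholder object ID
$body   = @{ employeeLeaveDateTime = "2022-10-31T23:59:59Z" } | ConvertTo-Json

Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/beta/users/$userId" `
    -Body $body -ContentType "application/json"
```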
active-directory Understanding Lifecycle Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/understanding-lifecycle-workflows.md
The following reference document provides an overview of a workflow created usin
|Parameter |Display String |Description |Admin Consent Required | |||||
-|LifecycleWorkflows.Read.All | Read all Lifecycle workflows, tasks, user states| Allows the app to list and read all workflows, tasks, user states related to lifecycle workflows on behalf of the signed-in user.| Yes
-|LifecycleWorkflows.ReadWrite.All | Read and write all lifecycle workflows, tasks, user states.| Allows the app to create, update, list, read and delete all workflows, tasks, user states related to lifecycle workflows on behalf of the signed-in user.| Yes
+|LifecycleWorkflows.Read.All | Read all lifecycle workflows and tasks.| Allows the app to list and read all workflows and tasks related to lifecycle workflows on behalf of the signed-in user.| Yes
+|LifecycleWorkflows.ReadWrite.All | Read and write all lifecycle workflows and tasks.| Allows the app to create, update, list, read and delete all workflows and tasks related to lifecycle workflows on behalf of the signed-in user.| Yes
## Parts of a workflow A workflow can be broken down into the following three main parts.
active-directory Add Application Portal Setup Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal-setup-sso.md
Title: 'Quickstart: Enable single sign-on for an enterprise application'
+ Title: Enable single sign-on for an enterprise application
description: Enable single sign-on for an enterprise application in Azure Active Directory. Previously updated : 09/21/2021 Last updated : 09/29/2022 #Customer intent: As an administrator of an Azure AD tenant, I want to enable single sign-on for an enterprise application.
-# Quickstart: Enable single sign-on for an enterprise application
+# Enable single sign-on for an enterprise application
-In this quickstart, you use the Azure Active Directory Admin Center to enable single sign-on (SSO) for an enterprise application that you added to your Azure Active Directory (Azure AD) tenant. After you configure SSO, your users can sign in by using their Azure AD credentials.
+In this article, you use the Azure Active Directory Admin Center to enable single sign-on (SSO) for an enterprise application that you added to your Azure Active Directory (Azure AD) tenant. After you configure SSO, your users can sign in by using their Azure AD credentials.
-Azure AD has a gallery that contains thousands of pre-integrated applications that use SSO. This quickstart uses an enterprise application named **Azure AD SAML Toolkit** as an example, but the concepts apply for most pre-configured enterprise applications in the gallery.
+Azure AD has a gallery that contains thousands of pre-integrated applications that use SSO. This article uses an enterprise application named **Azure AD SAML Toolkit 1** as an example, but the concepts apply for most pre-configured enterprise applications in the gallery.
-It is recommended that you use a non-production environment to test the steps in this quickstart.
+It is recommended that you use a non-production environment to test the steps in this article.
## Prerequisites
To enable SSO for an application:
1. In the left menu, select **Enterprise applications**. The **All applications** pane opens and displays a list of the applications in your Azure AD tenant. Search for and select the application that you want to use. For example, **Azure AD SAML Toolkit 1**. 1. In the **Manage** section of the left menu, select **Single sign-on** to open the **Single sign-on** pane for editing. 1. Select **SAML** to open the SSO configuration page. After the application is configured, users can sign in to it by using their credentials from the Azure AD tenant.
-1. The process of configuring an application to use Azure AD for SAML-based SSO varies depending on the application. For any of the enterprise applications in the gallery, use the link to find information about the steps needed to configure the application. The steps for the **Azure AD SAML Toolkit** are listed in this quickstart.
+1. The process of configuring an application to use Azure AD for SAML-based SSO varies depending on the application. For any of the enterprise applications in the gallery, use the **configuration guide** link to find information about the steps needed to configure the application. The steps for the **Azure AD SAML Toolkit 1** are listed in this article.
:::image type="content" source="media/add-application-portal-setup-sso/saml-configuration.png" alt-text="Configure single sign-on for an enterprise application.":::
To configure SSO in Azure AD:
1. For **Reply URL (Assertion Consumer Service URL)**, enter `https://samltoolkit.azurewebsites.net/SAML/Consume`. 1. For **Sign on URL**, enter `https://samltoolkit.azurewebsites.net/`. 1. Select **Save**.
-1. In the **SAML Signing Certificate** section, select **Download** for **Certificate (Raw)** to download the SAML signing certificate and save it to be used later.
+1. In the **SAML Certificates** section, select **Download** for **Certificate (Raw)** to download the SAML signing certificate and save it to be used later.
## Configure single sign-on in the application
To register a user account with the application:
:::image type="content" source="media/add-application-portal-setup-sso/toolkit-register.png" alt-text="Register a user account in the Azure AD SAML Toolkit application.":::
-1. For **Email**, enter the email address of the user that will access the application. For example, in a previous quickstart, the user account was created that uses the address of `contosouser1@contoso.com`. Be sure to change `contoso.com` to the domain of your tenant.
+1. For **Email**, enter the email address of the user that will access the application. Ensure that the user account is already assigned to the application.
1. Enter a **Password** and confirm it. 1. Select **Register**. ### Configure SAML settings
-To configure SAML setting for the application:
+To configure SAML settings for the application:
-1. Signed in with the credentials of the user account that you created, select **SAML Configuration** at the upper-left corner of the page.
+1. While signed in with the credentials of the user account that you already assigned to the application, select **SAML Configuration** in the upper-left corner of the page.
1. Select **Create** in the middle of the page. 1. For **Login URL**, **Azure AD Identifier**, and **Logout URL**, enter the values that you recorded earlier. 1. Select **Choose file** to upload the certificate that you previously downloaded.
You can test the single sign-on configuration from the **Set up single sign-on**
To test SSO:
-1. In the **Test single sign-on with Azure AD SAML Toolkit 1** section, on the **Set up single sign-on** pane, select **Test**.
+1. In the **Test single sign-on with Azure AD SAML Toolkit 1** section, on the **Set up single sign-on with SAML** pane, select **Test**.
1. Sign in to the application using the Azure AD credentials of the user account that you assigned to the application.
-## Clean up resources
-
-If you are planning to complete the next quickstart, keep the enterprise application that you created. Otherwise, you can consider deleting it to clean up your tenant.
## Next steps
-Learn how to configure the properties of an enterprise application.
-> [!div class="nextstepaction"]
-> [Configure an application](add-application-portal-configure.md)
+- [Manage self service access](manage-self-service-access.md)
+- [Configure user consent](configure-user-consent.md)
+- [Grant tenant-wide admin consent](grant-admin-consent.md)
active-directory Plan Sso Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/plan-sso-deployment.md
The following SSO protocols are available to use:
## Next steps -- Consider completing the single sign-on training in [Enable single sign-on for applications by using Azure Active Directory](/training/modules/enable-single-sign-on).
+- [Enable single sign-on for applications by using Azure Active Directory](add-application-portal-setup-sso.md).
active-directory What Is Application Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/what-is-application-management.md
To [manage access](what-is-access-management.md) for an application, you want to
You can [manage user consent settings](configure-user-consent.md) to choose whether users can allow an application or service to access user profiles and organizational data. When applications are granted access, users can sign in to applications integrated with Azure AD, and the application can access your organization's data to deliver rich data-driven experiences.
-Users often are unable to consent to the permissions an application is requesting. Configure the admin consent workflow to allow users to provide a justification and request an administrator's review and approval of an application. For training on how to configure admin consent workflow in your Azure AD tenant, see [Configure admin consent workflow](/training/modules/configure-admin-consent-workflow).
+Users often are unable to consent to the permissions an application is requesting. Configure the admin consent workflow to allow users to provide a justification and request an administrator's review and approval of an application. To learn how to configure admin consent workflow in your Azure AD tenant, see [Configure admin consent workflow](configure-admin-consent-workflow.md).
As an administrator, you can [grant tenant-wide admin consent](grant-admin-consent.md) to an application. Tenant-wide admin consent is necessary when an application requires permissions that regular users aren't allowed to grant, and allows organizations to implement their own review processes. Always carefully review the permissions the application is requesting before granting consent. When an application has been granted tenant-wide admin consent, all users are able to sign into the application unless it has been configured to require user assignment. ### Single sign-on
-Consider implementing SSO in your application. You can manually configure most applications for SSO. The most popular options in Azure AD are [SAML-based SSO and OpenID Connect-based SSO](../develop/active-directory-v2-protocols.md). Before you start, make sure that you understand the requirements for SSO and how to [plan for deployment](plan-sso-deployment.md). For training related to configuring SAML-based SSO for an enterprise application in your Azure AD tenant, see [Enable single sign-on for an application by using Azure Active Directory](/training/modules/enable-single-sign-on).
+Consider implementing SSO in your application. You can manually configure most applications for SSO. The most popular options in Azure AD are [SAML-based SSO and OpenID Connect-based SSO](../develop/active-directory-v2-protocols.md). Before you start, make sure that you understand the requirements for SSO and how to [plan for deployment](plan-sso-deployment.md). For more information on how to configure SAML-based SSO for an enterprise application in your Azure AD tenant, see [Enable single sign-on for an application by using Azure Active Directory](add-application-portal-setup-sso.md).
### User, group, and owner assignment
aks Limit Egress Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/limit-egress-traffic.md
The following FQDN / application rules are required for using cluster extensions
| **`<region>.dp.kubernetesconfiguration.azure.us`** | **`HTTPS:443`** | This address is used to fetch configuration information from the Cluster Extensions service and report extension status to the service. | | **`mcr.microsoft.com, *.data.mcr.microsoft.com`** | **`HTTPS:443`** | This address is required to pull container images for installing cluster extension agents on AKS cluster.| ++
+> [!NOTE]
+> If an add-on isn't explicitly stated here, the core requirements cover it.
+ ## Restrict egress traffic using Azure firewall Azure Firewall provides an Azure Kubernetes Service (`AzureKubernetesService`) FQDN Tag to simplify this configuration.
aks Update Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/update-credentials.md
To check the expiration date of your service principal, use the [az ad sp creden
```azurecli SP_ID=$(az aks show --resource-group myResourceGroup --name myAKSCluster \ --query servicePrincipalProfile.clientId -o tsv)
-az ad sp credential list --id "$SP_ID" --query "[].endDate" -o tsv
+az ad sp credential list --id "$SP_ID" --query "[].endDateTime" -o tsv
``` ### Reset the existing service principal credential
api-management Api Management Advanced Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-advanced-policies.md
This policy can be used in the following policy [sections](./api-management-howt
## <a name="SetHttpProxy"></a> Set HTTP proxy
-The `proxy` policy allows you to route requests forwarded to backends via an HTTP proxy. Only HTTP (not HTTPS) is supported between the gateway and the proxy. Basic and NTLM authentication only.
+The `proxy` policy allows you to route requests forwarded to backends via an HTTP proxy. Only HTTP (not HTTPS) is supported between the gateway and the proxy. Basic and NTLM authentication only. To route the send-request via HTTP proxy, you must place the set HTTP proxy policy inside the send-request policy block.
[!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]
api-management Api Management Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-subscriptions.md
Previously updated : 01/05/2022 Last updated : 09/27/2022 # Subscriptions in Azure API Management
By publishing APIs through API Management, you can easily secure API access usin
* Rejected immediately by the API Management gateway. * Not forwarded to the back-end services.
-To access APIs, you'll need a subscription and a subscription key. A *subscription* is a named container for a pair of subscription keys.
-
-> [!NOTE]
-> Regularly regenerating keys is a common security precaution. Like most Azure services requiring a subscription key, API Management generates keys in pairs. Each application using the service can switch from *key A* to *key B* and regenerate key A with minimal disruption, and vice versa.
+To access APIs, developers need a subscription and a subscription key. A *subscription* is a named container for a pair of subscription keys.
In addition,
-* Developers can get subscriptions without approval from API publishers.
+* Developers can get subscriptions without needing approval from API publishers.
* API publishers can create subscriptions directly for API consumers. > [!TIP]
In addition,
> - [Client certificates](api-management-howto-mutual-certificates-for-clients.md) > - [Restrict caller IPs](./api-management-access-restriction-policies.md#RestrictCallerIPs)
+## Manage subscription keys
+
+Regularly regenerating keys is a common security precaution. Like most Azure services requiring a subscription key, API Management generates keys in pairs. Each application using the service can switch from *key A* to *key B* and regenerate key A with minimal disruption, and vice versa.
+> [!NOTE]
+> * API Management doesn't provide built-in features to manage the lifecycle of subscription keys, such as setting expiration dates or automatically rotating keys. You can develop workflows to automate these processes using tools such as Azure PowerShell or the Azure SDKs.
+> * To enforce time-limited access to APIs, API publishers may be able to use policies with subscription keys, or use a mechanism that provides built-in expiration such as token-based authentication.
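As the note mentions, key management can be scripted. The following is only a sketch that regenerates a subscription's primary key through the management REST API with Azure PowerShell; the resource names, IDs, and API version are placeholders and assumptions:

```powershell
# Regenerate the primary key of an API Management subscription (placeholders throughout).
$path = "/subscriptions/<azure-subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<apim-instance>/subscriptions/<subscription-id>/regeneratePrimaryKey?api-version=2021-08-01"
Invoke-AzRestMethod -Method POST -Path $path
```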
+ ## Scope of subscriptions Subscriptions can be associated with various scopes: [product](api-management-howto-add-products.md), all APIs, or an individual API.
app-service Configure Connect To Azure Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-connect-to-azure-storage.md
This guide shows how to mount Azure Storage Files as a network share in a Window
- Make static content like video and images readily available for your App Service app. - Write application log files or archive older application log to Azure File shares. - Share content across multiple apps or with other Azure services.-- Mount Azure Storage in a Windows container in a Standard tier or higher plan, including Isolated ([App Service environment v3](environment/overview.md)).
+- Mount Azure Storage in a Windows container, including Isolated ([App Service environment v3](environment/overview.md)).
The following features are supported for Windows containers:
app-service Tutorial Connect App Access Microsoft Graph As App Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-app-access-microsoft-graph-as-app-javascript.md
const appSettings = {
}, authRoutes: { redirect: "/.auth/login/aad/callback", // Enter the redirect URI here
- error: "/error", // enter the relative path to error handling route
unauthorized: "/unauthorized" // enter the relative path to unauthorized route }, }
getAuthenticatedClient = (accessToken) => {
[!INCLUDE [tutorial-clean-up-steps](./includes/tutorial-cleanup.md)]
automation Automation Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-availability-zones.md
+
+ Title: Availability zones support for Azure Automation
+description: This article provides an overview of Azure availability zones and regions for Azure Automation
+keywords: automation availability zones.
++ Last updated : 06/29/2022++++
+# Availability zones support for Azure Automation
+
+Azure Automation uses [Azure availability zones](../availability-zones/az-overview.md#availability-zones) to provide improved resiliency and high availability to a service instance in a specific Azure region.
+
+[Azure availability zones](../availability-zones/az-overview.md#availability-zones) is a
+high-availability offering that protects your applications and data from data center failures.
+Availability zones are unique physical locations within an Azure region, each made up of one or more data centers equipped with independent power, cooling, and networking. To ensure resiliency, all enabled regions have a minimum of three separate zones.
+
+A zone-redundant Automation account automatically distributes traffic for management operations and runbook jobs across the availability zones in the supported region. Replication to these physically separate zones is handled at the service level, which makes the service resilient to a zone failure with no impact on the availability of Automation accounts in the same region.
+
+If a zone goes down, you don't need to take any action to recover from the zone failure; the service remains accessible through the other available zones. The service detects that the zone is down and automatically distributes the traffic to the available zones as needed.
+
+## Availability zone considerations
+
+- In all regions that support availability zones, zone redundancy for Automation accounts is enabled by default and can't be disabled. No action is required from you; it's enabled and managed by the service.
+- All new Automation accounts with the Basic SKU are created with zone redundancy natively.
+- All existing Automation accounts become zone redundant automatically. No action is required from you.
+- In a zone-down scenario, you might see brief performance degradation until the service's self-healing rebalances the underlying capacity to adjust to healthy zones. This doesn't depend on zone restoration; the self-healing compensates for a lost zone by using capacity from the other zones.
+- In a zone-wide failure scenario, follow the guidance provided to set up disaster recovery for Automation accounts in a secondary region.
+- Availability zone support for Automation accounts covers only the [Process Automation](/azure/automation/overview#process-automation) feature, to provide improved resiliency for runbook automation.
+
+## Supported regions with availability zones
+
+See [Regions and Availability Zones in Azure](/global-infrastructure/geographies/#geographies) for the Azure regions that have availability zones.
+Automation accounts currently support the following regions in preview:
+
+- China North 3
+- Qatar Central
+- West US 2
+- East US 2
+- East US
+- North Europe
+- West Europe
+- France Central
+- Japan East
+- UK South
+- Southeast Asia
+- Australia East
+- Central US
+- Brazil South
+- Germany West Central
+- West US 3
+
+## Create a zone redundant Automation account
+You can create a zone redundant Automation account using:
+- [Azure portal](/azure/automation/automation-create-standalone-account?tabs=azureportal)
+- [Azure Resource Manager (ARM) template](/azure/automation/quickstart-create-automation-account-template)
+
+> [!Note]
+> There is no option to select or see the availability zone in the creation flow of an Automation account. It's a default setting that's enabled and managed at the service level.
+
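For example, a zone-redundant account can be created with Azure PowerShell like any other Automation account; in a supported region, zone redundancy is applied automatically. The resource names below are placeholders:

```powershell
# Create an Automation account in a region that supports availability zones.
New-AzAutomationAccount -ResourceGroupName "myResourceGroup" -Name "myAutomationAccount" -Location "East US 2"
```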
+## Pricing
+
+There's no additional cost associated with the zone redundancy feature in an Automation account.
+
+## Service Level Agreement
+
+Support for availability zones doesn't change the [Service Level Agreement](https://azure.microsoft.com/support/legal/sla/automation/v1_1/) for Automation accounts. The SLA is based on job start time and guarantees that at least 99.9% of runbook jobs start within 30 minutes of their planned start times.
+
+## Next steps
+
+- Learn more about [regions that support availability zones](/azure/availability-zones/az-region.md).
automation Disable Local Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/disable-local-authentication.md
# Disable local authentication in Automation
+> [!IMPORTANT]
+> - Update Management patching will not work when local authentication is disabled.
+> - When you disable local authentication, it affects starting a runbook by using a webhook, Automation Desired State Configuration, and agent-based Hybrid Runbook Workers. For more information, see the [available alternatives](#compatibility).
+ Azure Automation provides Microsoft Azure Active Directory (Azure AD) authentication support for all Automation service public endpoints. This critical security enhancement removes certificate dependencies and gives organizations control to disable local authentication methods. This feature provides you with seamless integration when centralized control and management of identities and resource credentials through Azure AD is required. Azure Automation provides an optional feature to "Disable local authentication" at the Automation account level using the Azure policy [Configure Azure Automation account to disable local authentication](../automation/policy-reference.md#azure-automation). By default, this flag is set to false at the account, so you can use both local authentication and Azure AD authentication. If you choose to disable local authentication, then the Automation service only accepts Azure AD based authentication.
In the Azure portal, you may receive a warning message on the landing page for t
Disabling local authentication doesn't take effect immediately. Allow a few minutes for the service to block future authentication requests. >[!NOTE]
-> Currently, PowerShell support for the new API version (2021-06-22) or the flag ΓÇô `DisableLocalAuth` is not available. However, you can use the Rest-API with this API version to update the flag.
+> - Currently, PowerShell support for the new API version (2021-06-22) or the `DisableLocalAuth` flag isn't available. However, you can use the REST API with this API version to update the flag.
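As a sketch of the REST-based approach (the resource IDs are placeholders, and the `disableLocalAuth` property name is assumed to follow standard ARM casing):

```powershell
# PATCH the Automation account with API version 2021-06-22 to disable local authentication.
$path = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Automation/automationAccounts/<account-name>?api-version=2021-06-22"
$payload = @{ properties = @{ disableLocalAuth = $true } } | ConvertTo-Json
Invoke-AzRestMethod -Method PATCH -Path $path -Payload $payload
```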
## Re-enable local authentication
The following table describes the behaviors or features that are prevented from
|Using Automation Desired State Configuration.| Use [Azure Policy Guest configuration](../governance/machine-configuration/overview.md).  | |Using agent-based Hybrid Runbook Workers.| Use [extension-based Hybrid Runbook Workers (Preview)](./extension-based-hybrid-runbook-worker-install.md).|
-## Limitations
-
-Update Management patching will not work when local authentication is disabled.
- ## Next steps - [Azure Automation account authentication overview](./automation-security-overview.md)
automation Automation Tutorial Runbook Textual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/learn/automation-tutorial-runbook-textual.md
You can use the `ForEach -Parallel` construct to process commands for each item
``` 1. If you want the runbook to execute with the system-assigned managed identity, leave the code as-is. If you prefer to use a user-assigned managed identity, then:
- 1. From line 9, remove `(Connect-AzAccount -Identity)`,
- 1. Replace it with `(Connect-AzAccount -Identity -AccountId <ClientId>)`, and
+ 1. From line 9, remove `Connect-AzAccount -Identity`,
+ 1. Replace it with `Connect-AzAccount -Identity -AccountId <ClientId>`, and
1. Enter the Client ID you obtained earlier. 1. Select **Save**, then **Publish**, and then **Yes** when prompted.
automation Update Agent Issues Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/update-agent-issues-linux.md
This article discusses how to run the troubleshooter for Azure machines from the
## Start the troubleshooter
-For Azure machines, select the **troubleshoot** link under the **Update Agent Readiness** column in the portal to open the Troubleshoot Update Agent page. For non-Azure machines, the link brings you to this article. To troubleshoot a non-Azure machine, see the instructions in the "Troubleshoot offline" section.
+For Azure machines, select the **troubleshoot** link under the **Update Agent Readiness** column in the portal to open the Troubleshoot Update Agent page. For non-Azure machines, the link brings you to this article. To troubleshoot a non-Azure machine, see the instructions in the **Troubleshoot offline** section.
-![VM list page](../media/update-agent-issues-linux/vm-list.png)
> [!NOTE] > The checks require the VM to be running. If the VM isn't running, **Start the VM** appears. On the Troubleshoot Update Agent page, select **Run Checks** to start the troubleshooter. The troubleshooter uses [Run command](../../virtual-machines/linux/run-command.md) to run a script on the machine to verify the dependencies. When the troubleshooter is finished, it returns the result of the checks.
-![Troubleshoot page](../media/update-agent-issues-linux/troubleshoot-page.png)
+ When the checks are finished, the results are returned in the window. The check sections provide information on what each check is looking for.
-![Update agent checks page](../media/update-agent-issues-linux/update-agent-checks.png)
+ ## Prerequisite checks
When the checks are finished, the results are returned in the window. The check
The operating system check verifies if the Hybrid Runbook Worker is running one of the [supported operating systems](../update-management/operating-system-requirements.md#supported-operating-systems).
+### Dmidecode check
+
+To verify whether a VM is an Azure VM, check the asset tag value by using the following command:
+
+```
+sudo dmidecode
+```
+
+If the asset tag is different from 7783-7084-3265-9085-8269-3286-77, reboot the VM to initiate re-registration.
++ ## Monitoring agent service health checks
-### Log Analytics agent
+### Monitoring Agent
+
+To fix this, install the Log Analytics agent for Linux and ensure that it can communicate with the required endpoints. For more information, see [Install Log Analytics agent on Linux computers](../../azure-monitor/agents/agent-linux.md).
-This check ensures that the Log Analytics agent for Linux is installed. For instructions on how to install it, see [Install the agent for Linux](../../azure-monitor/vm/monitor-virtual-machine.md#agents).
+This check verifies that the following configuration file is present:
-### Log Analytics agent status
+*/etc/opt/microsoft/omsagent/conf/omsadmin.conf*
-This check ensures that the Log Analytics agent for Linux is running. If the agent isn't running, you can run the following command to attempt to restart it. For more information on troubleshooting the agent, see [Linux - Troubleshoot Hybrid Runbook Worker issues](hybrid-runbook-worker.md#linux).
+### Monitoring Agent status
+
+To fix this issue, you must start the OMS Agent service by using the following command:
-```bash
-sudo /opt/microsoft/omsagent/bin/service_control restart
+```
+ sudo /opt/microsoft/omsagent/bin/service_control restart
```
-### Multihoming
+To validate, you can check for the process by using the following command:
+
+```
+ps aux | grep omsagent | grep -v grep
+```
+For more information, see [Troubleshoot issues with the Log Analytics agent for Linux](../../azure-monitor/agents/agent-linux-troubleshoot.md)
++
+### Multihoming
This check determines if the agent is reporting to multiple workspaces. Update Management doesn't support multihoming.
+To fix this issue, purge the OMS Agent completely and [reinstall it with the workspace linked to Update Management](../../azure-monitor/agents/agent-linux-troubleshoot.md#purge-and-reinstall-the-linux-agent).
++
+Validate that multihoming is no longer present by checking the directories under this path:
+
+ */var/opt/microsoft/omsagent*.
+
+Each directory corresponds to a workspace, so the number of directories equals the number of workspaces onboarded to the OMS Agent.
+ ### Hybrid Runbook Worker
+To fix the issue, run the following command:
-This check verifies if the Log Analytics agent for Linux has the Hybrid Runbook Worker package. This package is required for Update Management to work. To learn more, see [Log Analytics agent for Linux isn't running](hybrid-runbook-worker.md#oms-agent-not-running).
+```
+sudo su omsagent -c 'python /opt/microsoft/omsconfig/Scripts/PerformRequiredConfigurationChecks.py'
+```
-Update Management downloads Hybrid Runbook Worker packages from the operations endpoint. Therefore, if the Hybrid Runbook Worker is not running and the [operations endpoint](#operations-endpoint) check fails, the update can fail.
+This command forces the omsconfig agent to talk to Azure Monitor and retrieve the latest configuration.
+
+Validate that the following two paths exist:
+
+```
+/opt/microsoft/omsconfig/modules/nxOMSAutomationWorker/VERSION
+/opt/microsoft/omsconfig/modules/nxOMSAutomationWorker/DSCResources/MSFT_nxOMSAutomationWorkerResource/automationworker/worker/configuration.py
+```
### Hybrid Runbook Worker status This check makes sure the Hybrid Runbook Worker is running on the machine. The processes in the example below should be present if the Hybrid Runbook Worker is running correctly.
+```
+ps -ef | grep python
+```
-```bash
+```
nxautom+ 8567 1 0 14:45 ? 00:00:00 python /opt/microsoft/omsconfig/modules/nxOMSAutomationWorker/DSCResources/MSFT_nxOMSAutomationWorkerResource/automationworker/worker/main.py /var/opt/microsoft/omsagent/state/automationworker/oms.conf rworkspace:<workspaceId> <Linux hybrid worker version> nxautom+ 8593 1 0 14:45 ? 00:00:02 python /opt/microsoft/omsconfig/modules/nxOMSAutomationWorker/DSCResources/MSFT_nxOMSAutomationWorkerResource/automationworker/worker/hybridworker.py /var/opt/microsoft/omsagent/state/automationworker/worker.conf managed rworkspace:<workspaceId> rversion:<Linux hybrid worker version> nxautom+ 8595 1 0 14:45 ? 00:00:02 python /opt/microsoft/omsconfig/modules/nxOMSAutomationWorker/DSCResources/MSFT_nxOMSAutomationWorkerResource/automationworker/worker/hybridworker.py /var/opt/microsoft/omsagent/<workspaceId>/state/automationworker/diy/worker.conf managed rworkspace:<workspaceId> rversion:<Linux hybrid worker version> ```
+Update Management downloads Hybrid Runbook Worker packages from the operations endpoint. Therefore, if the Hybrid Runbook Worker is not running and the [operations endpoint](#operations-endpoint) check fails, the update can fail.
+
+To fix this issue, run the following command:
+
+```
+sudo su omsagent -c 'python /opt/microsoft/omsconfig/Scripts/PerformRequiredConfigurationChecks.py'
+```
+
+This command forces the omsconfig agent to talk to Azure Monitor and retrieve the latest configuration.
+
+If the issue still persists, run the [omsagent Log Collector tool](https://github.com/Microsoft/OMS-Agent-for-Linux/blob/master/tools/LogCollector/OMS_Linux_Agent_Log_Collector.md)
+++ ## Connectivity checks
+### Proxy enabled check
+
+To fix the issue, either remove the proxy or make sure that the proxy address is able to access the [prerequisite URL](../automation-network-configuration.md#update-management-and-change-tracking-and-inventory).
+
+You can validate the check by inspecting the proxy environment variable, for example:
+
+```
+echo $HTTP_PROXY
+```
+
+### IMDS connectivity check
+
+To fix this issue, allow access to the IP address **169.254.169.254**. For more information, see [Access Azure Instance Metadata Service](../../virtual-machines/windows/instance-metadata-service.md#azure-instance-metadata-service-windows).
+
+After the network changes, you can either rerun the troubleshooter or run the following command to validate:
+
+```
+curl -H "Metadata: true" "http://169.254.169.254/metadata/instance?api-version=2018-02-01"
+```
+ ### General internet connectivity
-This check makes sure that the machine has access to the internet.
+This check makes sure that the machine has access to the internet. You can ignore it if you've blocked general internet access and allowed only the specific required URLs.
+
+To validate, run `curl` against any HTTP URL.
### Registration endpoint This check determines if the Hybrid Runbook Worker can properly communicate with Azure Automation in the Log Analytics workspace.
-Proxy and firewall configurations must allow the Hybrid Runbook Worker agent to communicate with the registration endpoint. For a list of addresses and ports to open, see [Network planning](../automation-hybrid-runbook-worker.md#network-planning).
+Proxy and firewall configurations must allow the Hybrid Runbook Worker agent to communicate with the registration endpoint. For a list of addresses and ports to open, see [Network planning](../automation-hybrid-runbook-worker.md#network-planning)
+
+Fix this issue by allowing the prerequisite URLs. For more information, see [Update Management and Change Tracking and Inventory](../automation-network-configuration.md#update-management-and-change-tracking-and-inventory).
+
+After you make the network changes, you can either rerun the troubleshooter or run `curl` against the provided JRDS endpoint.
### Operations endpoint
Proxy and firewall configurations must allow the Hybrid Runbook Worker agent to
This check verifies that your machine has access to the endpoints needed by the Log Analytics agent.
+Fix this issue by allowing the [prerequisite URLs](../automation-network-configuration.md#update-management-and-change-tracking-and-inventory).
+
+After you make the network changes, you can either rerun the troubleshooter or run `curl` against the provided OMS endpoint.
+ ### Log Analytics endpoint 2 This check verifies that your machine has access to the endpoints needed by the Log Analytics agent.
-### Log Analytics endpoint 3
+Fix this issue by allowing the [prerequisite URLs](../automation-network-configuration.md#update-management-and-change-tracking-and-inventory).
-This check verifies that your machine has access to the endpoints needed by the Log Analytics agent.
+After you make the network changes, you can either rerun the troubleshooter or run `curl` against the provided OMS endpoint.
++
+### Software repositories
+
+Fix this issue by allowing the prerequisite repository URLs. For RHEL, see [Troubleshoot connection problems to Azure RHUI](https://learn.microsoft.com/azure/virtual-machines/workloads/redhat/redhat-rhui#troubleshoot-connection-problems-to-azure-rhui).
+
+After you make the network changes, you can either rerun the troubleshooter or run `curl` against the software repositories configured in the package manager.
+
+Refreshing the repositories helps confirm the communication:
+
+```
+sudo apt-get check
+sudo yum check-update
+```
+> [!NOTE]
+> The check is available only in offline mode.
## <a name="troubleshoot-offline"></a>Troubleshoot offline You can use the troubleshooter offline on a Hybrid Runbook Worker by running the script locally. The Python script, [UM_Linux_Troubleshooter_Offline.py](https://github.com/Azure/updatemanagement/blob/main/UM_Linux_Troubleshooter_Offline.py), can be found in GitHub.
-> [!NOTE]
-> The current version of the troubleshooter script does not support Ubuntu 20.04.
->
+ > [!NOTE]
+ > The current version of the troubleshooter script does not support Ubuntu 20.04.
+ An example of the output of this script is shown in the following example:
automation Update Agent Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/update-agent-issues.md
There can be many reasons why your machine isn't showing up as ready (healthy) d
This article discusses how to run the troubleshooter for Azure machines from the Azure portal, and non-Azure machines in the [offline scenario](#troubleshoot-offline). > [!NOTE]
-> The troubleshooter script now includes checks for Windows Server Update Services (WSUS) and for the autodownload and install keys.
+> The troubleshooter script now includes checks for Windows Server Update Services (WSUS) and for the auto download and install keys.
## Start the troubleshooter For Azure machines, you can launch the Troubleshoot Update Agent page by selecting the **Troubleshoot** link under the **Update Agent Readiness** column in the portal. For non-Azure machines, the link brings you to this article. See [Troubleshoot offline](#troubleshoot-offline) to troubleshoot a non-Azure machine.
-![Screenshot of the Update Management list of virtual machines](../media/update-agent-issues/vm-list.png)
> [!NOTE] > To check the health of the Hybrid Runbook Worker, the VM must be running. If the VM isn't running, a **Start the VM** button appears. On the Troubleshoot Update Agent page, select **Run checks** to start the troubleshooter. The troubleshooter uses [Run Command](../../virtual-machines/windows/run-command.md) to run a script on the machine, to verify dependencies. When the troubleshooter is finished, it returns the result of the checks.
-![Screenshot of the Troubleshoot Update Agent page](../media/update-agent-issues/troubleshoot-page.png)
Results are shown on the page when they're ready. The checks sections show what's included in each check.
-![Screenshot of the Troubleshoot Update Agent checks](../media/update-agent-issues/update-agent-checks.png)
## Prerequisite checks ### Operating system
-The operating system check verifies whether the Hybrid Runbook Worker is running [one of the supported operating systems.](../update-management/operating-system-requirements.md)
-one of the supported operating systems
+The operating system check verifies whether the Hybrid Runbook Worker is running [one of the supported operating systems](../update-management/operating-system-requirements.md#system-requirements)
### .NET 4.6.2 The .NET Framework check verifies that the system has [.NET Framework 4.6.2](https://dotnet.microsoft.com/download/dotnet-framework/net462) or later installed.
+To fix, install .NET Framework 4.6.2 or later. Download the [.NET Framework](https://www.docs.microsoft.com/dotnet/framework/install/guide-for-developers).
+ ### WMF 5.1
-The WMF check verifies that the system has the required version of the Windows Management Framework (WMF), which is [Windows Management Framework 5.1](https://www.microsoft.com/download/details.aspx?id=54616).
+The WMF check verifies that the system has the required version of the Windows Management Framework (WMF).
+
+To fix, download and install [Windows Management Framework 5.1](https://www.microsoft.com/download/details.aspx?id=54616); Azure Update Management requires Windows PowerShell 5.1.
### TLS 1.2 This check determines whether you're using TLS 1.2 to encrypt your communications. TLS 1.0 is no longer supported by the platform. Use TLS 1.2 to communicate with Update Management.
+To fix, follow the steps to [Enable TLS 1.2](../../azure-monitor/agents/agent-windows.md#configure-agent-to-use-tls-12).
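The linked article is the authoritative procedure. As a sketch, one common part of it is enabling strong cryptography for .NET Framework through the registry (these registry values are an assumption based on standard TLS 1.2 guidance; a restart is required afterward):

```powershell
# Enable strong cryptography (TLS 1.2) for 64-bit and 32-bit .NET Framework applications.
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value 1 -Type DWord
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Wow6432Node\Microsoft\.NETFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value 1 -Type DWord
```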
++
+## Monitoring agent service health checks
+
+### Monitoring Agent
+To fix the issue, start the **HealthService** service:
+
+```
+Start-Service -Name *HealthService* -ErrorAction SilentlyContinue
+```
+
+### Hybrid Runbook Worker
+To fix the issue, force a re-registration of the Hybrid Runbook Worker:
+
+```
+Remove-Item -Path "HKLM:\software\microsoft\hybridrunbookworker" -Recurse -Force
+Restart-service healthservice
+
+```
+
+>[!NOTE]
+> This removes the user Hybrid Runbook Worker from the machine. Be sure to re-register it afterward. No action is needed if the machine has only the system Hybrid Runbook Worker.
+
+To validate, check that event ID 15003 (Hybrid Runbook Worker start event) or 15004 (Hybrid Runbook Worker stopped event) exists in the **Microsoft-SMA/Operational** event log.
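For example, you can list those events with PowerShell (a sketch; adjust `-MaxEvents` as needed):

```powershell
# List recent Hybrid Runbook Worker start/stop events from the Microsoft-SMA/Operational log.
Get-WinEvent -FilterHashtable @{ LogName = 'Microsoft-SMA/Operational'; Id = 15003, 15004 } -MaxEvents 10 |
    Format-List TimeCreated, Id, Message
```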
+
+If the issue still isn't fixed, raise a support ticket.
+
+### Monitoring Agent Service
+
+Check for event ID 4502 (error event) in the **Operations Manager** event log and review its description.
+
+To troubleshoot, run the [MMA Agent Troubleshooter](../../azure-monitor/agents/agent-windows-troubleshoot.md).
+
+### VMs linked workspace
+See [Network requirements](../../azure-monitor/agents/agent-windows-troubleshoot.md#connectivity-issues).
+
+To validate, check the VM's connected workspace, or query the Heartbeat table of the corresponding Log Analytics workspace:
+
+```
+Heartbeat | where Computer =~ "<computer name>"
+```
+
+### Windows update service status
+
+ To fix this issue, start the **wuauserv** service:
+
+```
+Start-Service -Name wuauserv -ErrorAction SilentlyContinue
+```
+ ## Connectivity checks
+The troubleshooter currently doesn't route traffic through a proxy server if one is configured.
+ ### Registration endpoint This check determines whether the agent can properly communicate with the agent service. Proxy and firewall configurations must allow the Hybrid Runbook Worker agent to communicate with the registration endpoint. For a list of addresses and ports to open, see [Network planning](../automation-hybrid-runbook-worker.md#network-planning).
+Allow the [prerequisite URLs](../automation-network-configuration.md#update-management-and-change-tracking-and-inventory).
+After the network changes, you can either rerun the troubleshooter or run the following commands to validate:
+
+```
+$workspaceId = ""   # enter your Log Analytics workspace ID
+$endpoint = $workspaceId + ".agentsvc.azure-automation.net"
+(Test-NetConnection -ComputerName $endpoint -Port 443 -WarningAction SilentlyContinue).TcpTestSucceeded
+```
+ ### Operations endpoint This check determines whether the agent can properly communicate with the Job Runtime Data Service. Proxy and firewall configurations must allow the Hybrid Runbook Worker agent to communicate with the Job Runtime Data Service. For a list of addresses and ports to open, see [Network planning](../automation-hybrid-runbook-worker.md#network-planning).
+Allow the [prerequisite URLs](../automation-network-configuration.md#update-management-and-change-tracking-and-inventory). After the network changes, you can either rerun the troubleshooter or run the following commands to validate:
+
+```
+$jrdsEndpointLocationMoniker = ""
+
+# $jrdsEndpointLocationMoniker should be based on the Automation account location (for example, jpe/ase/scus).
+
+$endpoint = $jrdsEndpointLocationMoniker + "-jobruntimedata-prod-su1.azure-automation.net"
+
+(Test-NetConnection -ComputerName $endpoint -Port 443 -WarningAction SilentlyContinue).TcpTestSucceeded
+```
+
+### Https connection
+This check determines whether the VM can make HTTPS requests. Allow the [prerequisite URLs](../automation-network-configuration.md#update-management-and-change-tracking-and-inventory).
+
+After the network changes, you can either rerun the troubleshooter or run the following commands to validate:
+
+```
+$uri = "https://eus2-jobruntimedata-prod-su1.azure-automation.net"
+Invoke-WebRequest -URI $uri -UseBasicParsing
+```
++
+### Proxy settings
+
+If the proxy is enabled, ensure that you have access to the [prerequisite URLs](../automation-network-configuration.md#update-management-and-change-tracking-and-inventory)
++
+To check whether the proxy is set correctly, use the following command:
+
+```
+netsh winhttp show proxy
+```
+
+Or, check that the registry value **ProxyEnable** is set to 1 under:
+
+```
+HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings
+
+```
+
+### IMDS endpoint connectivity
+
+To fix the issue, allow access to the IP address **169.254.169.254**. For more information, see [Access Azure Instance Metadata Service](../../virtual-machines/windows/instance-metadata-service.md#access-azure-instance-metadata-service).
++
+After the network changes, you can either rerun the troubleshooter or run the following command to validate:
+
+```
+Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -Uri http://169.254.169.254/metadata/instance?api-version=2018-02-01
+```
+ ## VM service health checks ### Monitoring agent service status
To learn more about this event, see the [Event 4502 in the Operations Manager lo
## Access permissions checks
-> [!NOTE]
-> The troubleshooter currently doesn't route traffic through a proxy server if one is configured.
+### Machine key folder
+
+This check determines whether the local system account has access to: *C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys*
+
+To fix, grant the SYSTEM account the required permissions (Read, Write, and Modify, or Full Control) on the folder *C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys*.
+
+Use the following commands to check the permissions on the folder:
+
+```azurepowershell
+
+$User = "SYSTEM"
+$folder = "C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys"
+
+(Get-Acl $folder).Access |? {($_.IdentityReference -match $User) -or ($_.IdentityReference -match "Everyone")} | Select IdentityReference, FileSystemRights
+
+```
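If the SYSTEM account is missing permissions, one way to grant them is with `Set-Acl`. This is only a sketch, and granting Full Control is an assumption based on the requirement above; run it from an elevated session:

```powershell
# Grant SYSTEM full control on the MachineKeys folder, inherited by subfolders and files.
$folder = "C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys"
$acl = Get-Acl $folder
$rule = New-Object -TypeName System.Security.AccessControl.FileSystemAccessRule -ArgumentList "NT AUTHORITY\SYSTEM", "FullControl", "ContainerInherit,ObjectInherit", "None", "Allow"
$acl.SetAccessRule($rule)
Set-Acl -Path $folder -AclObject $acl
```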
+
+## Machine Update settings
+
+### Automatically reboot after install
-### Crypto folder access
+To fix, remove these registry values from the following key:
+*HKLM:\Software\Policies\Microsoft\Windows\WindowsUpdate\AU*
+
+Configure reboot according to Update Management schedule configuration.
+
+```
+AlwaysAutoRebootAtScheduledTime
+AlwaysAutoRebootAtScheduledTimeMinutes
+```
+
+For more information, see [Configure reboot settings](../update-management/configure-wuagent.md#configure-reboot-settings).
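For example, a sketch that removes those values with PowerShell, assuming they exist under the policy key shown above:

```powershell
# Remove the always-auto-reboot policy values so reboots follow the Update Management schedule.
$key = 'HKLM:\Software\Policies\Microsoft\Windows\WindowsUpdate\AU'
Remove-ItemProperty -Path $key -Name 'AlwaysAutoRebootAtScheduledTime', 'AlwaysAutoRebootAtScheduledTimeMinutes' -ErrorAction SilentlyContinue
```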
++
+### WSUS server configuration
+
+If the environment is configured to get updates from WSUS, ensure that the updates are approved in WSUS before the update deployment. For more information, see [WSUS configuration settings](../update-management/configure-wuagent.md#make-wsus-configuration-settings). If your environment doesn't use WSUS, ensure that you remove the WSUS server settings and [reset the Windows Update components](https://learn.microsoft.com/windows/deployment/update/windows-update-resources#how-do-i-reset-windows-update-components).
+
+### Automatically download and install
+
+To fix the issue, disable the **AutoUpdate** feature by setting the local group policy **Configure Automatic Updates** to **Disabled**. For more information, see [Configure automatic updates](https://learn.microsoft.com/windows-server/administration/windows-server-update-services/deploy/4-configure-group-policy-settings-for-automatic-updates#configure-automatic-updates).
-The Crypto folder access check determines whether the local system account has access to C:\ProgramData\Microsoft\Crypto\RSA.
## <a name="troubleshoot-offline"></a>Troubleshoot offline
-You can use the troubleshooter on a Hybrid Runbook Worker offline by running the script locally. Get the following script from GitHub: [UM_Windows_Troubleshooter_Offline.ps1](https://github.com/Azure/updatemanagement/blob/main/UM_Windows_Troubleshooter_Offline.ps1). To run the script, you must have WMF 4.0 or later installed. To download the latest version of PowerShell, see [Installing various versions of PowerShell](/powershell/scripting/install/installing-powershell).
+You can use the troubleshooter on a Hybrid Runbook Worker offline by running the script locally. Get the following script from GitHub: [UM_Windows_Troubleshooter_Offline.ps1](https://github.com/Azure/updatemanagement/blob/main/UM_Windows_Troubleshooter_Offline.ps1). To run the script, you must have WMF 5.0 or later installed. To download the latest version of PowerShell, see [Installing various versions of PowerShell](/powershell/scripting/install/installing-powershell).
The output of this script looks like the following example: ```output
-RuleId : OperatingSystemCheck
-RuleGroupId : prerequisites
-RuleName : Operating System
-RuleGroupName : Prerequisite Checks
-RuleDescription : The Windows Operating system must be version 6.2.9200 (Windows Server 2012) or higher
-CheckResult : Passed
-CheckResultMessage : Operating System version is supported
-CheckResultMessageId : OperatingSystemCheck.Passed
-CheckResultMessageArguments : {}
-
-RuleId : DotNetFrameworkInstalledCheck
-RuleGroupId : prerequisites
-RuleName : .NET Framework 4.5+
-RuleGroupName : Prerequisite Checks
-RuleDescription : .NET Framework version 4.5 or higher is required
-CheckResult : Passed
-CheckResultMessage : .NET Framework version 4.5+ is found
-CheckResultMessageId : DotNetFrameworkInstalledCheck.Passed
-CheckResultMessageArguments : {}
-
-RuleId : WindowsManagementFrameworkInstalledCheck
-RuleGroupId : prerequisites
-RuleName : WMF 5.1
-RuleGroupName : Prerequisite Checks
-RuleDescription : Windows Management Framework version 4.0 or higher is required (version 5.1 or higher is preferable)
-CheckResult : Passed
-CheckResultMessage : Detected Windows Management Framework version: 5.1.17763.1
-CheckResultMessageId : WindowsManagementFrameworkInstalledCheck.Passed
-CheckResultMessageArguments : {5.1.17763.1}
-
-RuleId : AutomationAgentServiceConnectivityCheck1
-RuleGroupId : connectivity
-RuleName : Registration endpoint
-RuleGroupName : connectivity
-RuleDescription :
-CheckResult : Failed
-CheckResultMessage : Unable to find Workspace registration information in registry
-CheckResultMessageId : AutomationAgentServiceConnectivityCheck1.Failed.NoRegistrationFound
-CheckResultMessageArguments : {}
-
-RuleId : AutomationJobRuntimeDataServiceConnectivityCheck
-RuleGroupId : connectivity
-RuleName : Operations endpoint
-RuleGroupName : connectivity
-RuleDescription : Proxy and firewall configuration must allow Automation Hybrid Worker agent to communicate with eus2-jobruntimedata-prod-su1.azure-automation.net
-CheckResult : Passed
-CheckResultMessage : TCP Test for eus2-jobruntimedata-prod-su1.azure-automation.net (port 443) succeeded
-CheckResultMessageId : AutomationJobRuntimeDataServiceConnectivityCheck.Passed
-CheckResultMessageArguments : {eus2-jobruntimedata-prod-su1.azure-automation.net}
-
-RuleId : MonitoringAgentServiceRunningCheck
-RuleGroupId : servicehealth
-RuleName : Monitoring Agent service status
-RuleGroupName : VM Service Health Checks
-RuleDescription : HealthService must be running on the machine
-CheckResult : Failed
-CheckResultMessage : Log Analytics for Windows service (HealthService) is not running
-CheckResultMessageId : MonitoringAgentServiceRunningCheck.Failed
-CheckResultMessageArguments : {Log Analytics agent for Windows, HealthService}
-
-RuleId : MonitoringAgentServiceEventsCheck
-RuleGroupId : servicehealth
-RuleName : Monitoring Agent service events
-RuleGroupName : VM Service Health Checks
-RuleDescription : Event Log must not have event 4502 logged in the past 24 hours
-CheckResult : Failed
-CheckResultMessage : Log Analytics agent for Windows service Event Log (Operations Manager) does not exist on the machine
-CheckResultMessageId : MonitoringAgentServiceEventsCheck.Failed.NoLog
-CheckResultMessageArguments : {Log Analytics agent for Windows, Operations Manager, 4502}
-
-RuleId : CryptoRsaMachineKeysFolderAccessCheck
-RuleGroupId : permissions
-RuleName : Crypto RSA MachineKeys Folder Access
-RuleGroupName : Access Permission Checks
-RuleDescription : SYSTEM account must have WRITE and MODIFY access to 'C:\\ProgramData\\Microsoft\\Crypto\\RSA\\MachineKeys'
-CheckResult : Passed
-CheckResultMessage : Have permissions to access C:\\ProgramData\\Microsoft\\Crypto\\RSA\\MachineKeys
-CheckResultMessageId : CryptoRsaMachineKeysFolderAccessCheck.Passed
-CheckResultMessageArguments : {C:\\ProgramData\\Microsoft\\Crypto\\RSA\\MachineKeys}
-
-RuleId : TlsVersionCheck
-RuleGroupId : prerequisites
-RuleName : TLS 1.2
-RuleGroupName : Prerequisite Checks
-RuleDescription : Client and Server connections must support TLS 1.2
-CheckResult : Passed
-CheckResultMessage : TLS 1.2 is enabled by default on the Operating System.
-CheckResultMessageId : TlsVersionCheck.Passed.EnabledByDefault
-CheckResultMessageArguments : {}
-```
+RuleId : OperatingSystemCheck
+RuleGroupId : prerequisites
+RuleName : Operating System
+RuleGroupName : Prerequisite Checks
+RuleDescription : The Windows Operating system must be version 6.1.7600 (Windows Server 2008 R2) or higher
+CheckResult : Passed
+CheckResultMessage : Operating System version is supported
+CheckResultMessageId : OperatingSystemCheck.Passed
+CheckResultMessageArguments : {}
+
+
+
+RuleId : DotNetFrameworkInstalledCheck
+RuleGroupId : prerequisites
+RuleName : .Net Framework 4.6.2+
+RuleGroupName : Prerequisite Checks
+RuleDescription : .NET Framework version 4.6.2 or higher is required
+CheckResult : Passed
+CheckResultMessage : .NET Framework version 4.6.2+ is found
+CheckResultMessageId : DotNetFrameworkInstalledCheck.Passed
+CheckResultMessageArguments : {}
+
+
+
+RuleId : WindowsManagementFrameworkInstalledCheck
+RuleGroupId : prerequisites
+RuleName : WMF 5.1
+RuleGroupName : Prerequisite Checks
+RuleDescription : Windows Management Framework version 4.0 or higher is required (version 5.1 or higher is preferable)
+CheckResult : Passed
+CheckResultMessage : Detected Windows Management Framework version: 5.1.22621.169
+CheckResultMessageId : WindowsManagementFrameworkInstalledCheck.Passed
+CheckResultMessageArguments : {5.1.22621.169}
+
+
+
+RuleId : AutomationAgentServiceConnectivityCheck1
+RuleGroupId : connectivity
+RuleName : Registration endpoint
+RuleGroupName : connectivity
+RuleDescription :
+CheckResult : Failed
+CheckResultMessage : Unable to find Workspace registration information
+CheckResultMessageId : AutomationAgentServiceConnectivityCheck1.Failed.NoRegistrationFound
+CheckResultMessageArguments :
+
+
+
+RuleId : AutomationJobRuntimeDataServiceConnectivityCheck
+RuleGroupId : connectivity
+RuleName : Operations endpoint
+RuleGroupName : connectivity
+RuleDescription : Proxy and firewall configuration must allow Automation Hybrid Worker agent to communicate with
+ eus2-jobruntimedata-prod-su1.azure-automation.net
+CheckResult : Passed
+CheckResultMessage : TCP Test for eus2-jobruntimedata-prod-su1.azure-automation.net (port 443) succeeded
+CheckResultMessageId : AutomationJobRuntimeDataServiceConnectivityCheck.Passed
+CheckResultMessageArguments : {eus2-jobruntimedata-prod-su1.azure-automation.net}
+
+
+
+RuleId : MonitoringAgentServiceRunningCheck
+RuleGroupId : servicehealth
+RuleName : Monitoring Agent service status
+RuleGroupName : VM Service Health Checks
+RuleDescription : HealthService must be running on the machine
+CheckResult : Passed
+CheckResultMessage : Microsoft Monitoring Agent service (HealthService) is running
+CheckResultMessageId : MonitoringAgentServiceRunningCheck.Passed
+CheckResultMessageArguments : {Microsoft Monitoring Agent, HealthService}
+
+
+
+RuleId : SystemHybridRunbookWorkerRunningCheck
+RuleGroupId : servicehealth
+RuleName : Hybrid runbook worker status
+RuleGroupName : VM Service Health Checks
+RuleDescription : Hybrid runbook worker must be in running state.
+CheckResult : Passed
+CheckResultMessage : Hybrid runbook worker is running.
+CheckResultMessageId : SystemHybridRunbookWorkerRunningCheck.Passed
+CheckResultMessageArguments : {}
+
+
+
+RuleId : MonitoringAgentServiceEventsCheck
+RuleGroupId : servicehealth
+RuleName : Monitoring Agent service events
+RuleGroupName : VM Service Health Checks
+RuleDescription : Event Log must not have event 4502 logged in the past 24 hours
+CheckResult : Passed
+CheckResultMessage : Microsoft Monitoring Agent service Event Log (Operations Manager) does not have event 4502 logged in the last 24 hours.
+CheckResultMessageId : MonitoringAgentServiceEventsCheck.Passed
+CheckResultMessageArguments : {Microsoft Monitoring Agent, Operations Manager, 4502}
+
+
+
+RuleId : LinkedWorkspaceCheck
+RuleGroupId : servicehealth
+RuleName : VM's Linked Workspace
+RuleGroupName : VM Service Health Checks
+RuleDescription : Get linked workspace info of the VM
+CheckResult : Failed
+CheckResultMessage : VM is not reporting to any workspace.
+CheckResultMessageId : LinkedWorkspaceCheck.Failed.NoWorkspace
+CheckResultMessageArguments : {}
+
+
+RuleId : CryptoRsaMachineKeysFolderAccessCheck
+RuleGroupId : permissions
+RuleName : Crypto RSA MachineKeys Folder Access
+RuleGroupName : Access Permission Checks
+RuleDescription : SYSTEM account must have WRITE and MODIFY access to 'C:\\ProgramData\\Microsoft\\Crypto\\RSA\\MachineKeys'
+CheckResult : Passed
+CheckResultMessage : Have permissions to access C:\\ProgramData\\Microsoft\\Crypto\\RSA\\MachineKeys
+CheckResultMessageId : CryptoRsaMachineKeysFolderAccessCheck.Passed
+CheckResultMessageArguments : {C:\\ProgramData\\Microsoft\\Crypto\\RSA\\MachineKeys}
++
+RuleId : TlsVersionCheck
+RuleGroupId : prerequisites
+RuleName : TLS 1.2
+RuleGroupName : Prerequisite Checks
+RuleDescription : Client and Server connections must support TLS 1.2
+CheckResult : Passed
+CheckResultMessage : TLS 1.2 is enabled by default on the Operating System.
+CheckResultMessageId : TlsVersionCheck.Passed.EnabledByDefault
+CheckResultMessageArguments : {}
+
+
+RuleId : AlwaysAutoRebootCheck
+RuleGroupId : machineSettings
+RuleName : AutoReboot
+RuleGroupName : Machine Override Checks
+RuleDescription : Automatic reboot should not be enable as it forces a reboot irrespective of update configuration
+CheckResult : Passed
+CheckResultMessage : Windows Update reboot registry keys are not set to automatically reboot
+CheckResultMessageId : AlwaysAutoRebootCheck.Passed
+CheckResultMessageArguments :
+
+
+
+RuleId : WSUSServerConfigured
+RuleGroupId : machineSettings
+RuleName : isWSUSServerConfigured
+RuleGroupName : Machine Override Checks
+RuleDescription : Increase awareness on WSUS configured on the server
+CheckResult : Passed
+CheckResultMessage : Windows Updates are downloading from the default Windows Update location. Ensure the server has access to the Windows Update service
+CheckResultMessageId : WSUSServerConfigured.Passed
+CheckResultMessageArguments :
+
+
+
+RuleId : AutomaticUpdateCheck
+RuleGroupId : machineSettings
+RuleName : AutoUpdate
+RuleGroupName : Machine Override Checks
+RuleDescription : AutoUpdate should not be enabled on the machine
+CheckResult : Passed
+CheckResultMessage : Windows Update is not set to automatically install updates as they become available
+CheckResultMessageId : AutomaticUpdateCheck.Passed
+CheckResultMessageArguments :
+
+
+
+RuleId : HttpsConnection
+RuleGroupId : connectivity
+RuleName : Https connection
+RuleGroupName : connectivity
+RuleDescription : Check if VM is able to make https requests.
+CheckResult : Passed
+CheckResultMessage : VM is able to make https requests.
+CheckResultMessageId : HttpsConnection.Passed
+CheckResultMessageArguments : {}
+
+
+
+RuleId : ProxySettings
+RuleGroupId : connectivity
+RuleName : Proxy settings
+RuleGroupName : connectivity
+RuleDescription : Check if Proxy is enabled on the VM.
+CheckResult : Passed
+CheckResultMessage : Proxy is not set.
+CheckResultMessageId : ProxySettings.Passed
+CheckResultMessageArguments : {}
+
+
+RuleId : IMDSConnectivity
+RuleGroupId : connectivity
+RuleName : IMDS endpoint connectivity
+RuleGroupName : connectivity
+RuleDescription : Check if VM is able to reach IMDS server to get VM information.
+CheckResult : PassedWithWarning
+CheckResultMessage : VM is not able to reach IMDS server. Consider this as a Failure if this is an Azure VM.
+CheckResultMessageId : IMDSConnectivity.PassedWithWarning
+CheckResultMessageArguments : {}
+
+
+
+RuleId : WUServiceRunningCheck
+RuleGroupId : servicehealth
+RuleName : WU service status
+RuleGroupName : WU Service Health Check
+RuleDescription : WU must not be in the disabled state.
+CheckResult : Passed
+CheckResultMessage : Windows Update service (wuauserv) is running.
+CheckResultMessageId : WUServiceRunningCheck.Passed
+CheckResultMessageArguments : {Windows Update, wuauserv}
+
+
+RuleId : LAOdsEndpointConnectivity
+RuleGroupId : connectivity
+RuleName : LA ODS endpoint
+RuleGroupName : connectivity
+RuleDescription : Proxy and firewall configuration must allow to communicate with LA ODS endpoint
+CheckResult : Failed
+CheckResultMessage : Unable to find Workspace registration information
+CheckResultMessageId : LAOdsEndpointConnectivity.Failed
+CheckResultMessageArguments :
+
+
+RuleId : LAOmsEndpointConnectivity
+RuleGroupId : connectivity
+RuleName : LA OMS endpoint
+RuleGroupName : connectivity
+RuleDescription : Proxy and firewall configuration must allow to communicate with LA OMS endpoint
+CheckResult : Failed
+CheckResultMessage : Unable to find Workspace registration information
+CheckResultMessageId : LAOmsEndpointConnectivity.Failed
+CheckResultMessageArguments :
+ ```
## Next steps
automation Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new.md
Azure Automation receives improvements on an ongoing basis. To stay up to date w
This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Azure Automation](whats-new-archive.md).
+## September 2022
+### Availability zones support for Azure Automation
+
+Azure Automation now supports [Azure availability zones](../availability-zones/az-overview.md#availability-zones) to provide improved resiliency and high availability to a service instance in a specific Azure region. [Learn more](https://learn.microsoft.com/azure/automation/automation-availability-zones).
## July 2022
azure-arc Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/network-requirements.md
Title: Connected Machine agent network requirements description: Learn about the networking requirements for using the Connected Machine agent for Azure Arc-enabled servers. Previously updated : 09/09/2022 Last updated : 09/29/2022
The table below lists the URLs that must be available in order to install and us
|`guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com`|Notification service for extension and connectivity scenarios|Always| Public | |`azgn*.servicebus.windows.net`|Notification service for extension and connectivity scenarios|Always| Public | |`*.servicebus.windows.net`|For Windows Admin Center and SSH scenarios|If using SSH or Windows Admin Center from Azure|Public|
+|`*.waconazure.com`|For Windows Admin Center connectivity|If using Windows Admin Center|Public|
|`*.blob.core.windows.net`|Download source for Azure Arc-enabled servers extensions|Always, except when using private endpoints| Not used when private link is configured | |`dc.services.visualstudio.com`|Agent telemetry|Optional| Public |
azure-cache-for-redis Cache How To Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-upgrade.md
Previously updated : 09/08/2022 Last updated : 09/29/2022
Azure Cache for Redis supports upgrading the version of your Azure Cache for Redis from Redis 4 to Redis 6. Upgrading is permanent, and it might cause a brief connection issue similar to regular monthly maintenance. As a precautionary step, we recommend exporting the data from your existing Redis 4 cache and testing your client application with a Redis 6 cache in a lower environment before upgrading.
-For more information, see [here](cache-how-to-import-export-data.md) for details on how to export.
+For more details on how to export, see [Import and Export data in Azure Cache for Redis](cache-how-to-import-export-data.md).
+
+> [!IMPORTANT]
+> As announced in [What's new](cache-whats-new.md#upgrade-your-azure-cache-for-redis-instances-to-use-redis-version-6-by-june-30-2023), we'll retire version 4 for Azure Cache for Redis instances on June 30, 2023. Before that date, you need to upgrade any of your cache instances to version 6.
+>
+> For more information on the retirement of Redis 4, see [Retirements](cache-retired-features.md).
+>
## Prerequisites
azure-cache-for-redis Cache Retired Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-retired-features.md
+
+ Title: What's been retired from Azure Cache for Redis?
+
+description: Information on retirements from Azure Cache for Redis
+++++ Last updated : 09/29/2022+++
+# Retirements
+
+## Redis version 4
+
+On June 30, 2023, we'll retire version 4 for Azure Cache for Redis instances. Before that date, you need to upgrade any of your cache instances to version 6.
+
+- All cache instances still running Redis version 4 after June 30, 2023, will be upgraded automatically.
+- All cache instances running Redis version 4 that have geo-replication enabled will be upgraded automatically after August 30, 2023.
+
+We recommend that you upgrade your caches yourself, on a schedule that suits you and your users, so that the upgrade is as convenient as possible.
+
+The open-source Redis version 4 was released several years ago and is now retired. Version 4 no longer receives critical bug or security fixes from the community. Azure Cache for Redis offers open-source Redis as a managed service on Azure. To stay in sync with the open-source offering, we'll also retire version 4.
+
+Microsoft continues to backport security fixes from recent versions to version 4 until retirement. We encourage you to upgrade your cache to version 6 sooner, so you can use the rich feature set that Redis version 6 has to offer. For more information, see the Redis 6 GA announcement.
+
+To upgrade your version 4 Azure Cache for Redis instance, see [Upgrade an existing Redis 4 cache to Redis 6](cache-how-to-version.md#upgrade-an-existing-redis-4-cache-to-redis-6). If your cache instances have geo-replication enabled, you're required to unlink the caches before upgrade.
+
+### Important upgrade timelines
+
+From now through June 30, 2023, you can continue to use existing Azure Cache for Redis version 4 instances. Retirement will occur in the following stages, so you have the maximum amount of time to upgrade.
+
+| Date | Description |
+|-- |-|
+| November 1, 2022 | Beginning November 1, 2022, all the versions of Azure Cache for Redis REST API, PowerShell, Azure CLI, and Azure SDK will create Redis instances using Redis version 6 by default. If you need a specific Redis version for your cache instance, see [Redis 6 becomes default for new cache instances](cache-whats-new.md#redis-6-becomes-default-for-new-cache-instances). |
+| March 1, 2023 | Beginning March 1, 2023, you won't be able to create new Azure Cache for Redis instances using Redis version 4. Also, you won't be able to create new geo-replication links between cache instances using Redis version 4.|
+| June 30, 2023 | After June 30, 2023, any remaining version 4 cache instances that don't have geo-replication links will be automatically upgraded to version 6.|
+| August 30, 2023 | After August 30, 2023, any remaining version 4 cache instances that have geo-replication links will be automatically upgraded to version 6. This upgrade operation requires unlinking and relinking the caches, and customers could experience geo-replication link downtime. |
+
+### Version 4 caches on cloud services
+
+If your cache instance is affected by the Cloud Services (classic) retirement, you can't upgrade to Redis 6 until you migrate to a cache built on a virtual machine scale set. In this case, send mail to azurecachemigration@microsoft.com, and we'll help you with the migration.
+
+For more information on what to do if your cache is on Cloud Services (classic), see [Azure Cache for Redis on Cloud Services (classic)](cache-faq.yml#what-should-i-do-with-any-instances-of-azure-cache-for-redis-that-depend-on-cloud-services--classic-).
+
+### How to check whether a cache is running on version 4
+
+You can check the Redis version of your cache instance by selecting **Properties** from the Resource menu of your cache in the Azure portal.
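If you'd rather confirm the version from code, the Redis server also reports it through its `INFO` handshake. The following is only a minimal sketch using the StackExchange.Redis client, not the documented procedure; the host name and access key placeholders are hypothetical and need to be replaced with your own values.

```csharp
using System;
using StackExchange.Redis;

class CheckRedisVersion
{
    static void Main()
    {
        // Hypothetical connection string: replace the host name and access key with your own.
        const string connectionString =
            "contoso.redis.cache.windows.net:6380,password=<access-key>,ssl=True,abortConnect=False";

        using var connection = ConnectionMultiplexer.Connect(connectionString);

        // Each endpoint reports the server version surfaced by the Redis INFO command.
        foreach (var endpoint in connection.GetEndPoints())
        {
            IServer server = connection.GetServer(endpoint);
            Console.WriteLine($"{endpoint}: Redis {server.Version}");
        }
    }
}
```

If the major version printed is 4, the cache still needs the upgrade described in this article.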
+
+## Next steps
+- [What's new](cache-whats-new.md)
+- [Azure Cache for Redis FAQ](cache-faq.yml)
azure-cache-for-redis Cache Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-whats-new.md old mode 100755new mode 100644
- Previously updated : 09/01/2022+ Last updated : 09/29/2022
Last updated 09/01/2022
## September 2022
+### Upgrade your Azure Cache for Redis instances to use Redis version 6 by June 30, 2023
+
+On June 30, 2023, we'll retire version 4 for Azure Cache for Redis instances. Before that date, you need to upgrade any of your cache instances to version 6.
+
+- All cache instances still running Redis version 4 after June 30, 2023, will be upgraded automatically.
+- All cache instances running Redis version 4 that have geo-replication enabled will be upgraded automatically after August 30, 2023.
+
+We recommend that you upgrade your caches yourself, on a schedule that suits you and your users, so that the upgrade is as convenient as possible.
+
+For more information, see [Retirements](cache-retired-features.md).
+ ### Support for managed identity in Azure Cache for Redis Authenticating storage account connections using managed identity has now reached General Availability (GA).
The default version of Redis that is used when creating a cache can change over
As of May 2022, Azure Cache for Redis rolls over to TLS certificates issued by DigiCert Global G2 CA Root. The current Baltimore CyberTrust Root expires in May 2025, requiring this change.
-We expect that most Azure Cache for Redis customers won't be affected. However, your application might be affected if you explicitly specify a list of acceptable certificate authorities (CAs), known as *certificate pinning*.
+We expect that most Azure Cache for Redis customers won't be affected. However, your application might be affected if you explicitly specify a list of acceptable certificate authorities (CAs), known as _certificate pinning_.
For more information, read this blog that contains instructions on [how to check whether your client application is affected](https://techcommunity.microsoft.com/t5/azure-developer-community-blog/azure-cache-for-redis-tls-upcoming-migration-to-digicert-global/ba-p/3171086). We recommend taking the actions recommended in the blog to avoid cache connectivity loss.
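If you're not sure whether your client code pins certificates, the pattern to audit for usually resembles the sketch below. This is an illustration only (the issuer check and callback registration are made-up examples for a .NET client, not code from this article); logic like this is what can break when the issuing CA changes.

```csharp
using System.Net;
using System.Net.Security;
using System.Security.Cryptography.X509Certificates;

static class PinnedValidation
{
    // Example of certificate pinning to look for in client code; pinning is not a recommended practice.
    public static void Register()
    {
        ServicePointManager.ServerCertificateValidationCallback =
            (object sender, X509Certificate certificate, X509Chain chain, SslPolicyErrors errors) =>
                errors == SslPolicyErrors.None &&
                certificate.Issuer.Contains("Baltimore CyberTrust Root"); // hard-coded CA breaks after the CA rollover
    }
}
```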
For more information, read this blog that contains instructions on [how to check
Active geo-replication for Azure Cache for Redis Enterprise is now generally available (GA).
-Active geo-replication is a powerful tool that enables Azure Cache for Redis clusters to be linked together for seamless active-active replication of data. Your applications can write to one Redis cluster and your data is automatically copied to the other linked clusters, and vice versa. For more information, see this [post](https://aka.ms/ActiveGeoGA) in the *Azure Developer Community Blog*.
+Active geo-replication is a powerful tool that enables Azure Cache for Redis clusters to be linked together for seamless active-active replication of data. Your applications can write to one Redis cluster and your data is automatically copied to the other linked clusters, and vice versa. For more information, see this [post](https://aka.ms/ActiveGeoGA) in the _Azure Developer Community Blog_.
## January 2022
azure-functions Language Support Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/language-support-policy.md
There are few exceptions to the retirement policy outlined above. Here is a list
To learn more about specific language version support policy timeline, visit the following external resources: * .NET - [dotnet.microsoft.com](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) * Node - [github.com](https://github.com/nodejs/Release#release-schedule)
-* Java - [azul.com](https://www.azul.com/products/azul-support-roadmap/)
+* Java - [Microsoft technical documentation](/azure/developer/java/fundamentals/java-support-on-azure)
* PowerShell - [Microsoft technical documentation](/powershell/scripting/powershell-support-lifecycle#powershell-end-of-support-dates) * Python - [devguide.python.org](https://devguide.python.org/#status-of-python-branches)
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
To use the SDK, you install a small instrumentation package in your app and then
### [.NET](#tab/net)
-Integrated Auto-instrumentation is available for [Azure App Service .NET](azure-web-apps-net.md), [Azure App Service .NET Core](azure-web-apps-net-core.md), [Azure Functions](../../azure-functions/functions-monitoring.md#monitor-executions-in-azure-functions), and [Azure Virtual Machines](azure-vm-vmss-apps.md).
+Integrated Auto-instrumentation is available for [Azure App Service .NET](azure-web-apps-net.md), [Azure App Service .NET Core](azure-web-apps-net-core.md), [Azure Functions](../../azure-functions/functions-monitoring.md), and [Azure Virtual Machines](azure-vm-vmss-apps.md).
[Azure Monitor Application Insights Agent](status-monitor-v2-overview.md) is available for workloads running in on-premises virtual machines.
A preview [Open Telemetry](opentelemetry-enable.md?tabs=net) offering is also av
### [Java](#tab/java)
-Auto-instrumentation is available for any environment using [Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](java-in-process-agent.md).
+Integrated Auto-Instrumentation is available for Java Apps hosted on [Azure App Service](azure-web-apps-java.md) and [Azure Functions](monitor-functions.md).
-Integrated Auto-Instrumentation is available for Java Apps hosted on [Azure App Service](azure-web-apps-java.md) and [Azure Functions](monitor-functions.md#distributed-tracing-for-java-applications-public-preview).
+Auto-instrumentation is available for any environment using [Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](java-in-process-agent.md).
### [Node.js](#tab/nodejs)
azure-monitor Data Retention Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-retention-privacy.md
Title: Data retention and storage in Azure Application Insights | Microsoft Docs
-description: Retention and privacy policy statement
+ Title: Data retention and storage in Application Insights | Microsoft Docs
+description: Retention and privacy policy statement for Application Insights.
Last updated 06/30/2020
# Data collection, retention, and storage in Application Insights
-When you install [Azure Application Insights][start] SDK in your app, it sends telemetry about your app to the Cloud. Naturally, responsible developers want to know exactly what data is sent, what happens to the data, and how they can keep control of it. In particular, could sensitive data be sent, where is it stored, and how secure is it?
+When you install the [Application Insights][start] SDK in your app, it sends telemetry about your app to the cloud. As a responsible developer, you want to know exactly what data is sent, what happens to the data, and how you can keep control of it. In particular, could sensitive data be sent, where is it stored, and how secure is it?
First, the short answer:
-* The standard telemetry modules that run "out of the box" are unlikely to send sensitive data to the service. The telemetry is concerned with load, performance and usage metrics, exception reports, and other diagnostic data. The main user data visible in the diagnostic reports are URLs; but your app shouldn't in any case put sensitive data in plain text in a URL.
+* The standard telemetry modules that run "out of the box" are unlikely to send sensitive data to the service. The telemetry is concerned with load, performance and usage metrics, exception reports, and other diagnostic data. The main user data visible in the diagnostic reports are URLs. But your app shouldn't, in any case, put sensitive data in plain text in a URL.
* You can write code that sends more custom telemetry to help you with diagnostics and monitoring usage. (This extensibility is a great feature of Application Insights.) It would be possible, by mistake, to write this code so that it includes personal and other sensitive data. If your application works with such data, you should apply a thorough review process to all the code you write.
-* While developing and testing your app, it's easy to inspect what's being sent by the SDK. The data appears in the debugging output windows of the IDE and browser.
-* You can select the location when you create a new Application Insights resource. Know more about Application Insights availability per region [here](https://azure.microsoft.com/global-infrastructure/services/?products=all).
-* Review the collected data, as this collection may include data that is allowed in some circumstances but not others. A good example of this circumstance is Device Name. The device name from a server does not affect privacy and is useful, but a device name from a phone or laptop may have privacy implications and be less useful. An SDK developed primarily to target servers, would collect device name by default, and this may need to be overwritten in both normal events and exceptions.
+* While you develop and test your app, it's easy to inspect what's being sent by the SDK. The data appears in the debugging output windows of the IDE and browser.
+* You can select the location when you create a new Application Insights resource. For more information about Application Insights availability per region, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all).
+* Review the collected data because it might include data that's allowed in some circumstances but not others. A good example of this circumstance is device name. The device name from a server doesn't affect privacy and is useful. A device name from a phone or laptop might have privacy implications and be less useful. An SDK developed primarily to target servers would collect device name by default. This capability might need to be overwritten in both normal events and exceptions.
-The rest of this article elaborates more fully on these answers. It's designed to be self-contained, so that you can show it to colleagues who aren't part of your immediate team.
+The rest of this article discusses these points more fully. The article is self-contained, so you can share it with colleagues who aren't part of your immediate team.
## What is Application Insights?
-[Azure Application Insights][start] is a service provided by Microsoft that helps you improve the performance and usability of your live application. It monitors your application all the time it's running, both during testing and after you've published or deployed it. Application Insights creates charts and tables that show you, for example, what times of day you get most users, how responsive the app is, and how well it's served by any external services that it depends on. If there are crashes, failures or performance issues, you can search through the telemetry data in detail to diagnose the cause. And the service will send you emails if there are any changes in the availability and performance of your app.
-In order to get this functionality, you install an Application Insights SDK in your application, which becomes part of its code. When your app is running, the SDK monitors its operation and sends telemetry to the Application Insights service. This is a cloud service hosted by [Microsoft Azure](https://azure.com). (But Application Insights works for any applications, not just applications that are hosted in Azure.)
+[Application Insights][start] is a service provided by Microsoft that helps you improve the performance and usability of your live application. It monitors your application all the time it's running, both during testing and after you've published or deployed it. Application Insights creates charts and tables that show you informative metrics. For example, you might see what times of day you get most users, how responsive the app is, and how well it's served by any external services that it depends on. If there are failures or performance issues, you can search through the telemetry data to diagnose the cause. The service sends you emails if there are any changes in the availability and performance of your app.
-The Application Insights service stores and analyzes the telemetry. To see the analysis or search through the stored telemetry, you sign in to your Azure account and open the Application Insights resource for your application. You can also share access to the data with other members of your team, or with specified Azure subscribers.
+To get this functionality, you install an Application Insights SDK in your application, which becomes part of its code. When your app is running, the SDK monitors its operation and sends telemetry to Application Insights, which is a cloud service hosted by [Microsoft Azure](https://azure.com). Application Insights also works for any applications, not just applications that are hosted in Azure.
-You can have data exported from the Application Insights service, for example to a database or to external tools. You provide each tool with a special key that you obtain from the service. The key can be revoked if necessary.
+Application Insights stores and analyzes the telemetry. To see the analysis or search through the stored telemetry, you sign in to your Azure account and open the Application Insights resource for your application. You can also share access to the data with other members of your team, or with specified Azure subscribers.
-Application Insights SDKs are available for a range of application types: web services hosted in your own Java EE or ASP.NET servers, or in Azure; web clients - that is, the code running in a web page; desktop apps and services; device apps such as Windows Phone, iOS, and Android. They all send telemetry to the same service.
+You can have data exported from Application Insights, for example, to a database or to external tools. You provide each tool with a special key that you obtain from the service. The key can be revoked if necessary.
+
+Application Insights SDKs are available for a range of application types:
+
+- Web services hosted in your own Java EE or ASP.NET servers, or in Azure
+- Web clients, that is, the code running in a webpage
+- Desktop apps and services
+- Device apps such as Windows Phone, iOS, and Android
+
+They all send telemetry to the same service.
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)] ## What data does it collect?+ There are three sources of data:
-* The SDK, which you integrate with your app either [in development](./asp-net.md) or [at run time](./status-monitor-v2-overview.md). There are different SDKs for different application types. There's also an [SDK for web pages](./javascript.md), which loads into the end user's browser along with the page.
+* The SDK, which you integrate with your app either [in development](./asp-net.md) or [at runtime](./status-monitor-v2-overview.md). There are different SDKs for different application types. There's also an [SDK for webpages](./javascript.md), which loads into the user's browser along with the page.
* Each SDK has many [modules](./configuration-with-applicationinsights-config.md), which use different techniques to collect different types of telemetry. * If you install the SDK in development, you can use its API to send your own telemetry, in addition to the standard modules. This custom telemetry can include any data you want to send. * In some web servers, there are also agents that run alongside the app and send telemetry about CPU, memory, and network occupancy. For example, Azure VMs, Docker hosts, and [Java application servers](./java-in-process-agent.md) can have such agents.
-* [Availability tests](./monitor-web-app-availability.md) are processes run by Microsoft that send requests to your web app at regular intervals. The results are sent to the Application Insights service.
+* [Availability tests](./monitor-web-app-availability.md) are processes run by Microsoft that send requests to your web app at regular intervals. The results are sent to Application Insights.
+
+### What kind of data is collected?
-### What kinds of data are collected?
The main categories are:
-* [Web server telemetry](./asp-net.md) - HTTP requests. Uri, time taken to process the request, response code, client IP address. `Session id`.
-* [Web pages](./javascript.md) - Page, user and session counts. Page load times. Exceptions. Ajax calls.
-* Performance counters - Memory, CPU, IO, Network occupancy.
-* Client and server context - OS, locale, device type, browser, screen resolution.
-* [Exceptions](./asp-net-exceptions.md) and crashes - **stack dumps**, `build id`, CPU type.
-* [Dependencies](./asp-net-dependencies.md) - calls to external services such as REST, SQL, AJAX. URI or connection string, duration, success, command.
-* [Availability tests](./monitor-web-app-availability.md) - duration of test and steps, responses.
-* [Trace logs](./asp-net-trace-logs.md) and [custom telemetry](./api-custom-events-metrics.md) - **anything you code into your logs or telemetry**.
+* [Web server telemetry](./asp-net.md): HTTP requests. URI, time taken to process the request, response code, and client IP address. `Session id`.
+* [Webpages](./javascript.md): Page, user, and session counts. Page load times. Exceptions. Ajax calls.
+* Performance counters: Memory, CPU, IO, and network occupancy.
+* Client and server context: OS, locale, device type, browser, and screen resolution.
+* [Exceptions](./asp-net-exceptions.md) and crashes: Stack dumps, `build id`, and CPU type.
+* [Dependencies](./asp-net-dependencies.md): Calls to external services such as REST, SQL, and AJAX. URI or connection string, duration, success, and command.
+* [Availability tests](./monitor-web-app-availability.md): Duration of test and steps, and responses.
+* [Trace logs](./asp-net-trace-logs.md) and [custom telemetry](./api-custom-events-metrics.md): Anything you code into your logs or telemetry (a short sketch follows below).
-[More detail](#data-sent-by-application-insights).
+For more information, see the section [Data sent by Application Insights](#data-sent-by-application-insights).
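The last category in the list above, custom telemetry, is entirely under your control: whatever you pass to the API is stored as telemetry, so it deserves the same privacy review as your logs. As a minimal sketch for the .NET SDK (the connection string placeholder, event name, and property are hypothetical), custom telemetry looks like this:

```csharp
using System.Collections.Generic;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

var configuration = TelemetryConfiguration.CreateDefault();
configuration.ConnectionString = "<your-connection-string>"; // placeholder
var telemetryClient = new TelemetryClient(configuration);

// Every name and property value supplied here is stored as telemetry,
// so avoid personal or otherwise sensitive data.
telemetryClient.TrackEvent(
    "OrderSubmitted",                                        // hypothetical event name
    new Dictionary<string, string> { ["tier"] = "premium" }  // hypothetical property
);

telemetryClient.Flush(); // push buffered items before the process exits
```

During development, an event like this shows up in the debugging output described in the next section, so you can inspect exactly what would be sent.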
## How can I verify what's being collected?
-If you're developing the app using Visual Studio, run the app in debug mode (F5). The telemetry appears in the Output window. From there, you can copy it and format it as JSON for easy inspection.
+
+If you're developing an app using Visual Studio, run the app in debug mode (F5). The telemetry appears in the **Output** window. From there, you can copy it and format it as JSON for easy inspection.
![Screenshot that shows running the app in debug mode in Visual Studio.](./media/data-retention-privacy/06-vs.png)
-There's also a more readable view in the Diagnostics window.
+There's also a more readable view in the **Diagnostics** window.
-For web pages, open your browser's debugging window.
+For webpages, open your browser's debugging window. Select F12 and open the **Network** tab.
-![Press F12 and open the Network tab.](./media/data-retention-privacy/08-browser.png)
+![Screenshot that shows the open Network tab.](./media/data-retention-privacy/08-browser.png)
-### Can I write code to filter the telemetry before it is sent?
-This would be possible by writing a [telemetry processor plugin](./api-filtering-sampling.md).
+### Can I write code to filter the telemetry before it's sent?
+
+You'll need to write a [telemetry processor plug-in](./api-filtering-sampling.md).
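For context, a telemetry processor sits in the SDK pipeline and can drop or modify items before they leave the process. The following is only a minimal sketch for the .NET SDK; the `/health` path filter is a made-up example, not something taken from this article.

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

// Drops request telemetry for a hypothetical health-check URL before it's sent.
public class HealthCheckFilter : ITelemetryProcessor
{
    private readonly ITelemetryProcessor _next;

    public HealthCheckFilter(ITelemetryProcessor next) => _next = next;

    public void Process(ITelemetry item)
    {
        if (item is RequestTelemetry request &&
            request.Url != null &&
            request.Url.AbsolutePath.StartsWith("/health"))
        {
            return; // swallow the item; nothing is sent
        }

        _next.Process(item); // pass everything else along the chain
    }
}
```

You then register the processor as described in the linked article.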
## How long is the data kept?
-Raw data points (that is, items that you can query in Analytics and inspect in Search) are kept for up to 730 days. You can [select a retention duration](../logs/data-retention-archive.md#set-retention-and-archive-policy-by-table) of 30, 60, 90, 120, 180, 270, 365, 550 or 730 days. If you need to keep data longer than 730 days, you can use [Continuous Export](./export-telemetry.md) to copy it to a storage account during data ingestion.
-Data kept longer than 90 days will incur addition charges. Learn more about Application Insights pricing on the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/).
+Raw data points (that is, items that you can query in Analytics and inspect in Search) are kept for up to 730 days. You can [select a retention duration](../logs/data-retention-archive.md#set-retention-and-archive-policy-by-table) of 30, 60, 90, 120, 180, 270, 365, 550, or 730 days. If you need to keep data longer than 730 days, you can use [Continuous Export](./export-telemetry.md) to copy it to a storage account during data ingestion.
+
+Data kept longer than 90 days incurs extra charges. For more information about Application Insights pricing, see the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/).
-Aggregated data (that is, counts, averages and other statistical data that you see in Metric Explorer) are retained at a grain of 1 minute for 90 days.
+Aggregated data (that is, counts, averages, and other statistical data that you see in metric explorer) are retained at a grain of 1 minute for 90 days.
[Debug snapshots](./snapshot-debugger.md) are stored for 15 days. This retention policy is set on a per-application basis. If you need to increase this value, you can request an increase by opening a support case in the Azure portal. ## Who can access the data?
-The data is visible to you and, if you have an organization account, your team members.
+
+The data is visible to you and, if you have an organization account, your team members.
It can be exported by you and your team members and could be copied to other locations and passed on to other people. #### What does Microsoft do with the information my app sends to Application Insights?
-Microsoft uses the data only in order to provide the service to you.
+
+Microsoft uses the data only to provide the service to you.
## Where is the data held?
-* You can select the location when you create a new Application Insights resource. Know more about Application Insights availability per region [here](https://azure.microsoft.com/global-infrastructure/services/?products=all).
+
+You can select the location when you create a new Application Insights resource. For more information about Application Insights availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all).
## How secure is my data?
-Application Insights is an Azure Service. Security policies are described in the [Azure Security, Privacy, and Compliance white paper](https://go.microsoft.com/fwlink/?linkid=392408).
+
+Application Insights is an Azure service. Security policies are described in the [Azure Security, Privacy, and Compliance white paper](https://go.microsoft.com/fwlink/?linkid=392408).
The data is stored in Microsoft Azure servers. For accounts in the Azure portal, account restrictions are described in the [Azure Security, Privacy, and Compliance document](https://go.microsoft.com/fwlink/?linkid=392408).
-Access to your data by Microsoft personnel is restricted. We access your data only with your permission and if it is necessary to support your use of Application Insights.
+Access to your data by Microsoft personnel is restricted. We access your data only with your permission and if it's necessary to support your use of Application Insights.
-Data in aggregate across all our customers' applications (such as data rates and average size of traces) is used to improve Application Insights.
+Data in aggregate across all our customers' applications, such as data rates and average size of traces, is used to improve Application Insights.
#### Could someone else's telemetry interfere with my Application Insights data?
-They could send additional telemetry to your account by using the instrumentation key, which can be found in the code of your web pages. With enough additional data, your metrics would not correctly represent your app's performance and usage.
+
+Someone could send more telemetry to your account by using the instrumentation key. This key can be found in the code of your webpages. With enough extra data, your metrics wouldn't correctly represent your app's performance and usage.
If you share code with other projects, remember to remove your instrumentation key. ## Is the data encrypted?
-All data is encrypted at rest and as it moves between data centers.
-#### Is the data encrypted in transit from my application to Application Insights servers?
-Yes, we use https to send data to the portal from nearly all SDKs, including web servers, devices, and HTTPS web pages.
+All data is encrypted at rest and as it moves between datacenters.
-## Does the SDK create temporary local storage?
+#### Is the data encrypted in transit from my application to Application Insights servers?
-Yes, certain Telemetry Channels will persist data locally if an endpoint cannot be reached. Please review below to see which frameworks and telemetry channels are affected.
+Yes. We use HTTPS to send data to the portal from nearly all SDKs, including web servers, devices, and HTTPS webpages.
-Telemetry channels that utilize local storage create temp files in the TEMP or APPDATA directories, which are restricted to the specific account running your application. This may happen when an endpoint was temporarily unavailable or you hit the throttling limit. Once this issue is resolved, the telemetry channel will resume sending all the new and persisted data.
+## Does the SDK create temporary local storage?
-This persisted data is not encrypted locally. If this is a concern, review the data and restrict the collection of private data. (For more information, see [How to export and delete private data](../logs/personal-data-mgmt.md#exporting-and-deleting-personal-data).)
+Yes. Certain telemetry channels will persist data locally if an endpoint can't be reached. The following paragraphs describe which frameworks and telemetry channels are affected:
-If a customer needs to configure this directory with specific security requirements, it can be configured per framework. Please make sure that the process running your application has write access to this directory, but also make sure this directory is protected to avoid telemetry being read by unintended users.
+- Telemetry channels that utilize local storage create temp files in the TEMP or APPDATA directories, which are restricted to the specific account running your application. This situation might happen when an endpoint was temporarily unavailable or if you hit the throttling limit. After this issue is resolved, the telemetry channel will resume sending all the new and persisted data.
+- This persisted data isn't encrypted locally. If this issue is a concern, review the data and restrict the collection of private data. For more information, see [Export and delete private data](../logs/personal-data-mgmt.md#exporting-and-deleting-personal-data).
+- If a customer needs to configure this directory with specific security requirements, it can be configured per framework. Make sure that the process running your application has write access to this directory. Also make sure this directory is protected to avoid telemetry being read by unintended users.
### Java
-`C:\Users\username\AppData\Local\Temp` is used for persisting data. This location isn't configurable from the config directory and the permissions to access this folder are restricted to the specific user with required credentials. (For more information, see [implementation](https://github.com/Microsoft/ApplicationInsights-Java/blob/40809cb6857231e572309a5901e1227305c27c1a/core/src/main/java/com/microsoft/applicationinsights/internal/util/LocalFileSystemUtils.java#L48-L72).)
-
-### .NET
+The folder `C:\Users\username\AppData\Local\Temp` is used for persisting data. This location isn't configurable from the config directory, and the permissions to access this folder are restricted to the specific user with required credentials. For more information, see [implementation](https://github.com/Microsoft/ApplicationInsights-Java/blob/40809cb6857231e572309a5901e1227305c27c1a/core/src/main/java/com/microsoft/applicationinsights/internal/util/LocalFileSystemUtils.java#L48-L72).
-By default `ServerTelemetryChannel` uses the current user's local app data folder `%localAppData%\Microsoft\ApplicationInsights` or temp folder `%TMP%`. (See [implementation](https://github.com/Microsoft/ApplicationInsights-dotnet/blob/91e9c91fcea979b1eec4e31ba8e0fc683bf86802/src/ServerTelemetryChannel/Implementation/ApplicationFolderProvider.cs#L54-L84) here.)
+### .NET
+By default, `ServerTelemetryChannel` uses the current user's local app data folder `%localAppData%\Microsoft\ApplicationInsights` or temp folder `%TMP%`. For more information, see [implementation](https://github.com/Microsoft/ApplicationInsights-dotnet/blob/91e9c91fcea979b1eec4e31ba8e0fc683bf86802/src/ServerTelemetryChannel/Implementation/ApplicationFolderProvider.cs#L54-L84).
Via configuration file: ```xml
Via configuration file:
Via code: -- Remove ServerTelemetryChannel from configuration file
+- Remove `ServerTelemetryChannel` from the configuration file.
- Add this snippet to your configuration:+ ```csharp ServerTelemetryChannel channel = new ServerTelemetryChannel(); channel.StorageFolder = @"D:\NewTestFolder";
Via code:
### NetCore
-By default `ServerTelemetryChannel` uses the current user's local app data folder `%localAppData%\Microsoft\ApplicationInsights` or temp folder `%TMP%`. (See [implementation](https://github.com/Microsoft/ApplicationInsights-dotnet/blob/91e9c91fcea979b1eec4e31ba8e0fc683bf86802/src/ServerTelemetryChannel/Implementation/ApplicationFolderProvider.cs#L54-L84) here.)
+By default, `ServerTelemetryChannel` uses the current user's local app data folder `%localAppData%\Microsoft\ApplicationInsights` or temp folder `%TMP%`. For more information, see [implementation](https://github.com/Microsoft/ApplicationInsights-dotnet/blob/91e9c91fcea979b1eec4e31ba8e0fc683bf86802/src/ServerTelemetryChannel/Implementation/ApplicationFolderProvider.cs#L54-L84).
In a Linux environment, local storage will be disabled unless a storage folder is specified. > [!NOTE]
-> With the release 2.15.0-beta3 and greater local storage is now automatically created for Linux, Mac, and Windows. For non Windows systems the SDK will automatically create a local storage folder based on the following logic:
-> - `${TMPDIR}` - if `${TMPDIR}` environment variable is set this location is used.
-> - `/var/tmp` - if the previous location does not exist we try `/var/tmp`.
-> - `/tmp` - if both the previous locations do not exist we try `tmp`.
-> - If none of those locations exist local storage is not created and manual configuration is still required. [For full implementation details](https://github.com/microsoft/ApplicationInsights-dotnet/pull/1860).
+> With the release 2.15.0-beta3 and greater, local storage is now automatically created for Linux, Mac, and Windows. For non-Windows systems, the SDK will automatically create a local storage folder based on the following logic:
+>
+> - `${TMPDIR}`: If `${TMPDIR}` environment variable is set, this location is used.
+> - `/var/tmp`: If the previous location doesn't exist, we try `/var/tmp`.
+> - `/tmp`: If both the previous locations don't exist, we try `tmp`.
+> - If none of those locations exist, local storage isn't created and manual configuration is still required.
+>
+> For full implementation details, see [ServerTelemetryChannel stores telemetry data in default folder during transient errors in non-Windows environments](https://github.com/microsoft/ApplicationInsights-dotnet/pull/1860).
The following code snippet shows how to set `ServerTelemetryChannel.StorageFolder` in the `ConfigureServices()` method of your `Startup.cs` class:
The following code snippet shows how to set `ServerTelemetryChannel.StorageFolde
services.AddSingleton(typeof(ITelemetryChannel), new ServerTelemetryChannel () {StorageFolder = "/tmp/myfolder"}); ```
-(For more information, see [AspNetCore Custom Configuration](https://github.com/Microsoft/ApplicationInsights-aspnetcore/wiki/Custom-Configuration).)
+For more information, see [AspNetCore custom configuration](https://github.com/Microsoft/ApplicationInsights-aspnetcore/wiki/Custom-Configuration).
### Node.js
-By default `%TEMP%/appInsights-node{INSTRUMENTATION KEY}` is used for persisting data. Permissions to access this folder are restricted to the current user and Administrators. (See [implementation](https://github.com/Microsoft/ApplicationInsights-node.js/blob/develop/Library/Sender.ts) here.)
+By default, `%TEMP%/appInsights-node{INSTRUMENTATION KEY}` is used for persisting data. Permissions to access this folder are restricted to the current user and administrators. For more information, see the [implementation](https://github.com/Microsoft/ApplicationInsights-node.js/blob/develop/Library/Sender.ts).
The folder prefix `appInsights-node` can be overridden by changing the runtime value of the static variable `Sender.TEMPDIR_PREFIX` found in [Sender.ts](https://github.com/Microsoft/ApplicationInsights-node.js/blob/7a1ecb91da5ea0febf5ceab13d6a4bf01a63933d/Library/Sender.ts#L384). ### JavaScript (browser)
-[HTML5 Session Storage](https://developer.mozilla.org/en-US/docs/Web/API/Window/sessionStorage) is used to persist data. Two separate buffers are used: `AI_buffer` and `AI_sent_buffer`. Telemetry that is batched and waiting to be sent is stored in `AI_buffer`. Telemetry that was just sent is placed in `AI_sent_buffer` until the ingestion server responds that it was successfully received. When telemetry is successfully received, it's removed from all buffers. On transient failures (for example, a user loses network connectivity), telemetry remains in `AI_buffer` until it is successfully received or the ingestion server responds that the telemetry is invalid (bad schema or too old, for example).
+[HTML5 Session Storage](https://developer.mozilla.org/en-US/docs/Web/API/Window/sessionStorage) is used to persist data. Two separate buffers are used: `AI_buffer` and `AI_sent_buffer`. Telemetry that's batched and waiting to be sent is stored in `AI_buffer`. Telemetry that was just sent is placed in `AI_sent_buffer` until the ingestion server responds that it was successfully received.
+
+When telemetry is successfully received, it's removed from all buffers. On transient failures (for example, a user loses network connectivity), telemetry remains in `AI_buffer` until it's successfully received or the ingestion server responds that the telemetry is invalid (bad schema or too old, for example).
Telemetry buffers can be disabled by setting [`enableSessionStorageBuffer`](https://github.com/microsoft/ApplicationInsights-JS/blob/17ef50442f73fd02a758fbd74134933d92607ecf/legacy/JavaScript/JavaScriptSDK.Interfaces/IConfig.ts#L31) to `false`. When session storage is turned off, a local array is instead used as persistent storage. Because the JavaScript SDK runs on a client device, the user has access to this storage location via their browser's developer tools. ### OpenCensus Python
-By default OpenCensus Python SDK uses the current user folder `%username%/.opencensus/.azure/`. Permissions to access this folder are restricted to the current user and Administrators. (See [implementation](https://github.com/census-instrumentation/opencensus-python/blob/master/contrib/opencensus-ext-azure/opencensus/ext/azure/common/storage.py) here.) The folder with your persisted data will be named after the Python file that generated the telemetry.
+By default, OpenCensus Python SDK uses the current user folder `%username%/.opencensus/.azure/`. Permissions to access this folder are restricted to the current user and administrators. For more information, see the [implementation](https://github.com/census-instrumentation/opencensus-python/blob/master/contrib/opencensus-ext-azure/opencensus/ext/azure/common/storage.py). The folder with your persisted data will be named after the Python file that generated the telemetry.
-You may change the location of your storage file by passing in the `storage_path` parameter in the constructor of the exporter you are using.
+You can change the location of your storage file by passing in the `storage_path` parameter in the constructor of the exporter you're using.
```python AzureLogHandler(
AzureLogHandler(
## How do I send data to Application Insights using TLS 1.2?
-To ensure the security of data in transit to the Application Insights endpoints, we strongly encourage customers to configure their application to use at least Transport Layer Security (TLS) 1.2. Older versions of TLS/Secure Sockets Layer (SSL) have been found to be vulnerable and while they still currently work to allow backwards compatibility, they are **not recommended**, and the industry is quickly moving to abandon support for these older protocols.
+To ensure the security of data in transit to the Application Insights endpoints, we strongly encourage customers to configure their application to use at least Transport Layer Security (TLS) 1.2. Older versions of TLS/Secure Sockets Layer (SSL) have been found to be vulnerable. Although they still currently work to allow backward compatibility, they *aren't recommended*. The industry is quickly moving to abandon support for these older protocols.
-The [PCI Security Standards Council](https://www.pcisecuritystandards.org/) has set a [deadline of June 30, 2018](https://www.pcisecuritystandards.org/pdfs/PCI_SSC_Migrating_from_SSL_and_Early_TLS_Resource_Guide.pdf) to disable older versions of TLS/SSL and upgrade to more secure protocols. Once Azure drops legacy support, if your application/clients cannot communicate over at least TLS 1.2 you would not be able to send data to Application Insights. The approach you take to test and validate your application's TLS support will vary depending on the operating system/platform as well as the language/framework your application uses.
+The [PCI Security Standards Council](https://www.pcisecuritystandards.org/) has set a [deadline of June 30, 2018](https://www.pcisecuritystandards.org/pdfs/PCI_SSC_Migrating_from_SSL_and_Early_TLS_Resource_Guide.pdf) to disable older versions of TLS/SSL and upgrade to more secure protocols. After Azure drops legacy support, if your application or clients can't communicate over at least TLS 1.2, you wouldn't be able to send data to Application Insights. The approach you take to test and validate your application's TLS support will vary depending on the operating system or platform and the language or framework your application uses.
-We do not recommend explicitly setting your application to only use TLS 1.2 unless necessary as this can break platform level security features that allow you to automatically detect and take advantage of newer more secure protocols as they become available such as TLS 1.3. We recommend performing a thorough audit of your application's code to check for hardcoding of specific TLS/SSL versions.
+We do not recommend explicitly setting your application to only use TLS 1.2, unless necessary. This setting can break platform-level security features that allow you to automatically detect and take advantage of newer more secure protocols as they become available, such as TLS 1.3. We recommend that you perform a thorough audit of your application's code to check for hardcoding of specific TLS/SSL versions.
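One way to see which protocol your .NET client actually negotiates, without changing any settings, is to open a TLS connection and inspect the result. This is only a rough auditing sketch; it mirrors the openssl test shown later and uses bing.com purely as a reachable HTTPS endpoint, the same host the Linux example uses.

```csharp
using System;
using System.Net.Security;
using System.Net.Sockets;
using System.Threading.Tasks;

class TlsProbe
{
    static async Task Main()
    {
        const string host = "bing.com"; // any reachable HTTPS endpoint works for this check

        using var tcp = new TcpClient();
        await tcp.ConnectAsync(host, 443);

        // Let the OS/framework pick the protocol, then report what was negotiated.
        using var tls = new SslStream(tcp.GetStream());
        await tls.AuthenticateAsClientAsync(host);

        Console.WriteLine($"Negotiated protocol: {tls.SslProtocol}");
    }
}
```

If this prints Tls11 or lower, review the operating system and framework guidance in the table below rather than hardcoding a protocol version in your application.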
-### Platform/Language specific guidance
+### Platform/Language-specific guidance
-|Platform/Language | Support | More Information |
+|Platform/Language | Support | More information |
| | | |
-| Azure App Services | Supported, configuration may be required. | Support was announced in April 2018. Read the announcement for [configuration details](https://azure.github.io/AppService/2018/04/17/App-Service-and-Functions-hosted-apps-can-now-update-TLS-versions!). |
-| Azure Function Apps | Supported, configuration may be required. | Support was announced in April 2018. Read the announcement for [configuration details](https://azure.github.io/AppService/2018/04/17/App-Service-and-Functions-hosted-apps-can-now-update-TLS-versions!). |
-|.NET | Supported, Long Term Support (LTS) | For detailed configuration information, refer to [these instructions](/dotnet/framework/network-programming/tls). |
-|Status Monitor | Supported, configuration required | Status Monitor relies on [OS Configuration](/windows-server/security/tls/tls-registry-settings) + [.NET Configuration](/dotnet/framework/network-programming/tls#support-for-tls-12) to support TLS 1.2.
-|Node.js | Supported, in v10.5.0, configuration may be required. | Use the [official Node.js TLS/SSL documentation](https://nodejs.org/api/tls.html) for any application-specific configuration. |
+| Azure App Services | Supported, configuration might be required. | Support was announced in April 2018. Read the announcement for [configuration details](https://azure.github.io/AppService/2018/04/17/App-Service-and-Functions-hosted-apps-can-now-update-TLS-versions!). |
+| Azure Function Apps | Supported, configuration might be required. | Support was announced in April 2018. Read the announcement for [configuration details](https://azure.github.io/AppService/2018/04/17/App-Service-and-Functions-hosted-apps-can-now-update-TLS-versions!). |
+|.NET | Supported, Long Term Support (LTS). | For detailed configuration information, refer to [these instructions](/dotnet/framework/network-programming/tls). |
+|Status Monitor | Supported, configuration required. | Status Monitor relies on [OS Configuration](/windows-server/security/tls/tls-registry-settings) + [.NET Configuration](/dotnet/framework/network-programming/tls#support-for-tls-12) to support TLS 1.2.
+|Node.js | Supported, in v10.5.0, configuration might be required. | Use the [official Node.js TLS/SSL documentation](https://nodejs.org/api/tls.html) for any application-specific configuration. |
|Java | Supported, JDK support for TLS 1.2 was added in [JDK 6 update 121](https://www.oracle.com/technetwork/java/javase/overview-156328.html#R160_121) and [JDK 7](https://www.oracle.com/technetwork/java/javase/7u131-relnotes-3338543.html). | JDK 8 uses [TLS 1.2 by default](https://blogs.oracle.com/java-platform-group/jdk-8-will-use-tls-12-as-default). | |Linux | Linux distributions tend to rely on [OpenSSL](https://www.openssl.org) for TLS 1.2 support. | Check the [OpenSSL Changelog](https://www.openssl.org/news/changelog.html) to confirm your version of OpenSSL is supported.|
-| Windows 8.0 - 10 | Supported, and enabled by default. | To confirm that you are still using the [default settings](/windows-server/security/tls/tls-registry-settings). |
-| Windows Server 2012 - 2016 | Supported, and enabled by default. | To confirm that you are still using the [default settings](/windows-server/security/tls/tls-registry-settings) |
+| Windows 8.0 - 10 | Supported, and enabled by default. | To confirm that you're still using the [default settings](/windows-server/security/tls/tls-registry-settings). |
+| Windows Server 2012 - 2016 | Supported, and enabled by default. | To confirm that you're still using the [default settings](/windows-server/security/tls/tls-registry-settings). |
| Windows 7 SP1 and Windows Server 2008 R2 SP1 | Supported, but not enabled by default. | See the [Transport Layer Security (TLS) registry settings](/windows-server/security/tls/tls-registry-settings) page for details on how to enable. | | Windows Server 2008 SP2 | Support for TLS 1.2 requires an update. | See [Update to add support for TLS 1.2](https://support.microsoft.com/help/4019276/update-to-add-support-for-tls-1-1-and-tls-1-2-in-windows-server-2008-s) in Windows Server 2008 SP2. |
-|Windows Vista | Not Supported. | N/A
+|Windows Vista | Not supported. | N/A
### Check what version of OpenSSL your Linux distribution is running
openssl version -a
### Run a test TLS 1.2 transaction on Linux
-To run a preliminary test to see if your Linux system can communicate over TLS 1.2., open the terminal and run:
+To run a preliminary test to see if your Linux system can communicate over TLS 1.2, open the terminal and run:
```terminal openssl s_client -connect bing.com:443 -tls1_2
openssl s_client -connect bing.com:443 -tls1_2
## Personal data stored in Application Insights
-Our [Application Insights personal data article](../logs/personal-data-mgmt.md) discusses this issue in-depth.
+For an in-depth discussion on this issue, see [Managing personal data in Log Analytics and Application Insights](../logs/personal-data-mgmt.md).
#### Can my users turn off Application Insights?+ Not directly. We don't provide a switch that your users can operate to turn off Application Insights.
-However, you can implement such a feature in your application. All the SDKs include an API setting that turns off telemetry collection.
+You can implement such a feature in your application. All the SDKs include an API setting that turns off telemetry collection.
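For example, in the .NET SDK the `TelemetryConfiguration` class exposes a `DisableTelemetry` flag that you could wire to your own opt-out setting. The sketch below assumes you have access to the configuration instance your app uses; the helper class and method names are illustrative.

```csharp
using Microsoft.ApplicationInsights.Extensibility;

public static class TelemetryOptOut
{
    // Call this when the user changes their opt-out preference, for example from a settings page.
    public static void Apply(TelemetryConfiguration configuration, bool userOptedOut)
    {
        // When true, the SDK stops sending telemetry items; collection resumes if set back to false.
        configuration.DisableTelemetry = userOptedOut;
    }
}
```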
## Data sent by Application Insights
-The SDKs vary between platforms, and there are several components that you can install. (Refer to [Application Insights - overview][start].) Each component sends different data.
+
+The SDKs vary between platforms, and there are several components that you can install. For more information, see [Application Insights overview][start]. Each component sends different data.
#### Classes of data sent in different scenarios
The SDKs vary between platforms, and there are several components that you can i
| [Add Application Insights SDK to a .NET web project][greenbrown] |ServerContext<br/>Inferred<br/>Perf counters<br/>Requests<br/>**Exceptions**<br/>Session<br/>users | | [Install Status Monitor on IIS][redfield] |Dependencies<br/>ServerContext<br/>Inferred<br/>Perf counters | | [Add Application Insights SDK to a Java web app][java] |ServerContext<br/>Inferred<br/>Request<br/>Session<br/>users |
-| [Add JavaScript SDK to web page][client] |ClientContext <br/>Inferred<br/>Page<br/>ClientPerf<br/>Ajax |
+| [Add JavaScript SDK to webpage][client] |ClientContext <br/>Inferred<br/>Page<br/>ClientPerf<br/>Ajax |
| [Define default properties][apiproperties] |**Properties** on all standard and custom events | | [Call TrackMetric][api] |Numeric values<br/>**Properties** | | [Call Track*][api] |Event name<br/>**Properties** | | [Call TrackException][api] |**Exceptions**<br/>Stack dump<br/>**Properties** |
-| SDK can't collect data. For example: <br/> - can't access perf counters<br/> - exception in telemetry initializer |SDK diagnostics |
+| SDK can't collect data. For example: <br/> - Can't access perf counters<br/> - Exception in telemetry initializer |SDK diagnostics |
For [SDKs for other platforms][platforms], see their documents.
For [SDKs for other platforms][platforms], see their documents.
| ClientContext |OS, locale, language, network, window resolution | | Session |`session id` | | ServerContext |Machine name, locale, OS, device, user session, user context, operation |
-| Inferred |geo location from IP address, timestamp, OS, browser |
+| Inferred |Geolocation from IP address, timestamp, OS, browser |
| Metrics |Metric name and value | | Events |Event name and value | | PageViews |URL and page name or screen name | | Client perf |URL/page name, browser load time |
-| Ajax |HTTP calls from web page to server |
+| Ajax |HTTP calls from webpage to server |
| Requests |URL, duration, response code |
-| Dependencies |Type(SQL, HTTP, ...), connection string, or URI, sync/async, duration, success, SQL statement (with Status Monitor) |
-| **Exceptions** |Type, **message**, call stacks, source file, line number, `thread id` |
-| Crashes |`Process id`, `parent process id`, `crash thread id`; application patch, `id`, build; exception type, address, reason; obfuscated symbols and registers, binary start and end addresses, binary name and path, cpu type |
-| Trace |**Message** and severity level |
+| Dependencies |Type (SQL, HTTP, ...), connection string, or URI, sync/async, duration, success, SQL statement (with Status Monitor) |
+| Exceptions |Type, message, call stacks, source file, line number, `thread id` |
+| Crashes |`Process id`, `parent process id`, `crash thread id`; application patch, `id`, build; exception type, address, reason; obfuscated symbols and registers, binary start and end addresses, binary name and path, CPU type |
+| Trace |Message and severity level |
| Perf counters |Processor time, available memory, request rate, exception rate, process private bytes, IO rate, request duration, request queue length | | Availability |Web test response code, duration of each test step, test name, timestamp, success, response time, test location |
-| SDK diagnostics |Trace message or Exception |
+| SDK diagnostics |Trace message or exception |
-You can [switch off some of the data by editing ApplicationInsights.config][config]
+You can [switch off some of the data by editing ApplicationInsights.config][config].
> [!NOTE]
-> Client IP is used to infer geographic location, but by default IP data is no longer stored and all zeroes are written to the associated field. To understand more about personal data handling we recommend this [article](../logs/personal-data-mgmt.md#application-data). If you need to store IP address data our [IP address collection article](./ip-collection.md) will walk you through your options.
+> Client IP is used to infer geographic location, but by default IP data is no longer stored and all zeroes are written to the associated field. To understand more about personal data handling, see [Managing personal data in Log Analytics and Application Insights](../logs/personal-data-mgmt.md#application-data). If you need to store IP address data, [geolocation and IP address handling](./ip-collection.md) will walk you through your options.
## Can I modify or update data after it has been collected?
-No, data is read-only, and can only be deleted via the purge functionality. To learn more visit [Guidance for personal data stored in Log Analytics and Application Insights](../logs/personal-data-mgmt.md#delete).
+No. Data is read-only and can only be deleted via the purge functionality. To learn more, see [Guidance for personal data stored in Log Analytics and Application Insights](../logs/personal-data-mgmt.md#delete).
## Credits
-This product includes GeoLite2 data created by MaxMind, available from [https://www.maxmind.com](https://www.maxmind.com).
-
+This product includes GeoLite2 data created by [MaxMind](https://www.maxmind.com).
<!--Link references-->
azure-monitor Migrate From Instrumentation Keys To Connection Strings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/migrate-from-instrumentation-keys-to-connection-strings.md
Title: Migrate from Application Insights instrumentation keys to connection strings
-description: Learn the steps required to upgrade from Azure Monitor Application Insights instrumentation keys to connection strings
+description: Learn the steps required to upgrade from Azure Monitor Application Insights instrumentation keys to connection strings.
Last updated 02/14/2022
# Migrate from Application Insights instrumentation keys to connection strings
-This guide walks through migrating from [instrumentation keys](separate-resources.md#about-resources-and-instrumentation-keys) to [connection strings](sdk-connection-string.md#overview).
+This article walks you through migrating from [instrumentation keys](separate-resources.md#about-resources-and-instrumentation-keys) to [connection strings](sdk-connection-string.md#overview).
## Prerequisites - A [supported SDK version](#supported-sdk-versions)-- An existing [application insights resource](create-workspace-resource.md)
+- An existing [Application Insights resource](create-workspace-resource.md)
## Migration
-1. Go to the Overview blade of your Application Insights resource.
+1. Go to the **Overview** pane of your Application Insights resource.
-1. Find your connection string displayed on the right.
+1. Find your **Connection String** displayed on the right.
-1. Hover over the connection string and select the “Copy to clipboard” icon.
+1. Hover over the connection string and select the **Copy to clipboard** icon.
1. Configure the Application Insights SDK by following [How to set connection strings](sdk-connection-string.md#set-a-connection-string).
This guide walks through migrating from [instrumentation keys](separate-resource
Use environment variables to pass a connection string to the Application Insights SDK or agent.
-To set a connection string via environment variable, place the value of the connection string into an environment variable named “APPLICATIONINSIGHTS_CONNECTION_STRING”.
+To set a connection string via an environment variable, place the value of the connection string into an environment variable named `APPLICATIONINSIGHTS_CONNECTION_STRING`.
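For example, a minimal sketch with the Node.js SDK, assuming the variable is already set in the process environment; other supported SDKs read the same variable:

```javascript
// Assumes APPLICATIONINSIGHTS_CONNECTION_STRING is already set in the process
// environment (for example, as an App Service application setting).
const appInsights = require("applicationinsights");

// Called with no arguments, setup() picks up the connection string from the
// environment variable, so no secrets need to live in source code.
appInsights.setup().start();
```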
-This process can be [automated in your Azure deployments](../../azure-resource-manager/templates/deploy-portal.md#deploy-resources-with-arm-templates-and-azure-portal). For example, the following ARM template shows how you can automatically include the correct connection string with an App Services deployment (be sure to include any other App Settings your app requires):
+This process can be [automated in your Azure deployments](../../azure-resource-manager/templates/deploy-portal.md#deploy-resources-with-arm-templates-and-azure-portal). For example, the following Azure Resource Manager template shows how you can automatically include the correct connection string with an Azure App Service deployment. Be sure to include any other app settings your app requires:
```JSON {
This process can be [automated in your Azure deployments](../../azure-resource-m
} ```
-## New capabilities
-
-Connection strings provide a single configuration setting and eliminate the need for multiple proxy settings.
-- **Reliability:** Connection strings make telemetry ingestion more reliable by removing dependencies on global ingestion endpoints.--- **Security:** Connection strings allow authenticated telemetry ingestion by using [Azure AD authentication for Application Insights](azure-ad-authentication.md).
+## New capabilities
-- **Customized endpoints (sovereign or hybrid cloud environments):** Endpoint settings allow sending data to a specific [Azure Government region](custom-endpoints.md#regions-that-require-endpoint-modification). ([see examples](sdk-connection-string.md#set-a-connection-string))
+Connection strings provide a single configuration setting and eliminate the need for multiple proxy settings.
-- **Privacy (regional endpoints)** – Connection strings ease privacy concerns by sending data to regional endpoints, ensuring data doesn't leave a geographic region.
+- **Reliability**: Connection strings make telemetry ingestion more reliable by removing dependencies on global ingestion endpoints.
+- **Security**: Connection strings allow authenticated telemetry ingestion by using [Azure Active Directory (Azure AD) authentication for Application Insights](azure-ad-authentication.md).
+- **Customized endpoints (sovereign or hybrid cloud environments)**: Endpoint settings allow sending data to a specific [Azure Government region](custom-endpoints.md#regions-that-require-endpoint-modification). ([See examples](sdk-connection-string.md#set-a-connection-string).)
+- **Privacy (regional endpoints)**: Connection strings ease privacy concerns by sending data to regional endpoints, ensuring data doesn't leave a geographic region.
-## Supported SDK Versions
+## Supported SDK versions
- .NET and .NET Core v2.12.0+ - Java v2.5.1 and Java 3.0+ - JavaScript v2.3.0+ - NodeJS v1.5.0+ - Python v1.0.0++ ## Troubleshooting+
+This section provides troubleshooting solutions.
+ ### Alert: "Transition to using connection strings for data ingestion" Follow the [migration steps](#migration) in this article to resolve this alert.+ ### Missing data - Confirm you're using a [supported SDK version](#supported-sdk-versions). If you use Application Insights integration in another Azure product offering, check its documentation on how to properly configure a connection string.- - Confirm you aren't setting both an instrumentation key and connection string at the same time. Instrumentation key settings should be removed from your configuration.- - Confirm your connection string is exactly as provided in the Azure portal. ### Environment variables aren't working
- If you hardcode an instrumentation key in your application code, that programming may take precedence before environment variables.
+ If you hardcode an instrumentation key in your application code, that programming might take precedence before environment variables.
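For example, a sketch assuming the Node.js SDK: log the value the SDK resolved at startup to confirm the environment variable, not a hardcoded key, is being used.

```javascript
const appInsights = require("applicationinsights");

// Prefer setup() with no argument so the value comes from
// APPLICATIONINSIGHTS_CONNECTION_STRING rather than code.
appInsights.setup().start();

// Log the resolved value (a client config property in the Node.js SDK)
// to verify which setting the SDK is actually using.
console.log("Resolved connection string:", appInsights.defaultClient.config.connectionString);
```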
## FAQ
+This section provides answers to common questions.
+ ### Where else can I find my connection string?
-The connection string is also included in the ARM resource properties for your Application Insights resource, under the field name “ConnectionString”.
-### How does this affect auto instrumentation?
-Auto instrumentation scenarios aren't impacted.
+The connection string is also included in the Resource Manager resource properties for your Application Insights resource, under the field name `ConnectionString`.
+
+### How does this affect auto-instrumentation?
+
+Auto-instrumentation scenarios aren't affected.
-### Can I use Azure AD authentication with auto instrumentation?
+### Can I use Azure AD authentication with auto-instrumentation?
-You can't enable [Azure AD authentication](azure-ad-authentication.md) for [auto instrumentation](codeless-overview.md) scenarios. We have plans to address this limitation in the future.
+You can't enable [Azure AD authentication](azure-ad-authentication.md) for [auto-instrumentation](codeless-overview.md) scenarios. We have plans to address this limitation in the future.
-### What is the difference between global and regional ingestion?
+### What's the difference between global and regional ingestion?
-Global ingestion sends all telemetry data to a single endpoint, no matter where this data will be stored. Regional ingestion allows you to define specific endpoints per region for data ingestion, ensuring data stays within a specific region during processing and storage.
+Global ingestion sends all telemetry data to a single endpoint, no matter where this data will be stored. Regional ingestion allows you to define specific endpoints per region for data ingestion. This capability ensures data stays within a specific region during processing and storage.
-### How do connection strings impact the billing?
+### How do connection strings affect the billing?
-Billing isn't impacted.
+Billing isn't affected.
### Microsoft Q&A
azure-monitor Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/nodejs.md
Title: Monitor Node.js services with Azure Application Insights | Microsoft Docs
+ Title: Monitor Node.js services with Application Insights | Microsoft Docs
description: Monitor performance and diagnose problems in Node.js services with Application Insights. Last updated 10/12/2021
[Application Insights](./app-insights-overview.md) monitors your components after deployment to discover performance and other issues. You can use Application Insights for Node.js services that are hosted in your datacenter, Azure VMs and web apps, and even in other public clouds.
-To receive, store, and explore your monitoring data, include the SDK in your code, and then set up a corresponding Application Insights resource in Azure. The SDK sends data to that resource for further analysis and exploration.
+To receive, store, and explore your monitoring data, include the SDK in your code. Then set up a corresponding Application Insights resource in Azure. The SDK sends data to that resource for further analysis and exploration.
The Node.js client library can automatically monitor incoming and outgoing HTTP requests, exceptions, and some system metrics. Beginning in version 0.20, the client library also can monitor some common [third-party packages](https://github.com/microsoft/node-diagnostic-channel/tree/master/src/diagnostic-channel-publishers#currently-supported-modules), like MongoDB, MySQL, and Redis. All events related to an incoming HTTP request are correlated for faster troubleshooting.
Before you begin, make sure that you have an Azure subscription, or [get a new o
### <a name="resource"></a> Set up an Application Insights resource 1. Sign in to the [Azure portal][portal].
-2. [Create an Application Insights resource](create-new-resource.md)
+1. Create an [Application Insights resource](create-new-resource.md).
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)] ### <a name="sdk"></a> Set up the Node.js client library
-Include the SDK in your app, so it can gather data.
+Include the SDK in your app so that it can gather data.
1. Copy your resource's connection string from your new resource. Application Insights uses the connection string to map data to your Azure resource. Before the SDK can use your connection string, you must specify the connection string in an environment variable or in your code.
- :::image type="content" source="media/migrate-from-instrumentation-keys-to-connection-strings/migrate-from-instrumentation-keys-to-connection-strings.png" alt-text="Screenshot displaying Application Insights overview and connection string." lightbox="media/migrate-from-instrumentation-keys-to-connection-strings/migrate-from-instrumentation-keys-to-connection-strings.png":::
+ :::image type="content" source="media/migrate-from-instrumentation-keys-to-connection-strings/migrate-from-instrumentation-keys-to-connection-strings.png" alt-text="Screenshot that shows the Application Insights overview and connection string." lightbox="media/migrate-from-instrumentation-keys-to-connection-strings/migrate-from-instrumentation-keys-to-connection-strings.png":::
-2. Add the Node.js client library to your app's dependencies via package.json. From the root folder of your app, run:
+1. Add the Node.js client library to your app's dependencies via `package.json`. From the root folder of your app, run:
```bash npm install applicationinsights --save ``` > [!NOTE]
- > If you are using TypeScript, do not install separate "typings" packages. This NPM package contains built-in typings.
+ > If you're using TypeScript, don't install separate "typings" packages. This NPM package contains built-in typings.
-3. Explicitly load the library in your code. Because the SDK injects instrumentation into many other libraries, load the library as early as possible, even before other `require` statements.
+1. Explicitly load the library in your code. Because the SDK injects instrumentation into many other libraries, load the library as early as possible, even before other `require` statements.
```javascript let appInsights = require('applicationinsights'); ```
-4. You also can provide a connection string via the environment variable, `APPLICATIONINSIGHTS_CONNECTION_STRING`, instead of passing it manually to `setup()` or `new appInsights.TelemetryClient()`. This practice lets you keep connection strings out of committed source code, and you can specify different connection strings for different environments. To manually configure, call `appInsights.setup('[your connection string]');`.
+1. You also can provide a connection string via the environment variable `APPLICATIONINSIGHTS_CONNECTION_STRING`, instead of passing it manually to `setup()` or `new appInsights.TelemetryClient()`. This practice lets you keep connection strings out of committed source code, and you can specify different connection strings for different environments. To manually configure, call `appInsights.setup('[your connection string]');`.
For more configuration options, see the following sections. You can try the SDK without sending telemetry by setting `appInsights.defaultClient.config.disableAppInsights = true`.
-5. Start automatically collecting and sending data by calling `appInsights.start();`.
+1. Start automatically collecting and sending data by calling `appInsights.start();`.
> [!NOTE]
-> As part of using Application Insights instrumentation, we collect and send diagnostic data to Microsoft. This data helps us run and improve Application Insights. You have the option to disable non-essential data collection. [Learn More](./statsbeat.md).
+> As part of using Application Insights instrumentation, we collect and send diagnostic data to Microsoft. This data helps us run and improve Application Insights. You have the option to disable non-essential data collection. [Learn more](./statsbeat.md).
### <a name="monitor"></a> Monitor your app
The SDK automatically gathers telemetry about the Node.js runtime and some commo
Then, in the [Azure portal][portal] go to the Application Insights resource that you created earlier. In the **Overview timeline**, look for your first few data points. To see more detailed data, select different components in the charts.
-To view the topology that is discovered for your app, you can use [Application map](app-map.md).
+To view the topology that's discovered for your app, you can use [Application Map](app-map.md).
#### No data
-Because the SDK batches data for submission, there might be a delay before items are displayed in the portal. If you don't see data in your resource, try some of the following fixes:
+Because the SDK batches data for submission, there might be a delay before items appear in the portal. If you don't see data in your resource, try some of the following fixes:
* Continue to use the application. Take more actions to generate more telemetry. * Select **Refresh** in the portal resource view. Charts periodically refresh on their own, but manually refreshing forces them to refresh immediately.
Because the SDK batches data for submission, there might be a delay before items
* Use [Search](./diagnostic-search.md) to look for specific events. * Check the [FAQ][FAQ].
-## Basic Usage
+## Basic usage
For out-of-the-box collection of HTTP requests, popular third-party library events, unhandled exceptions, and system metrics:
appInsights.setup("[your connection string]").start();
> [!NOTE] > If the connection string is set in the environment variable `APPLICATIONINSIGHTS_CONNECTION_STRING`, `.setup()` can be called with no arguments. This makes it easy to use different connection strings for different environments.
-Load the Application Insights library, `require("applicationinsights")`, as early as possible in your scripts before loading other packages. This step is needed so that the Application Insights library can prepare later packages for tracking. If you encounter conflicts with other libraries doing similar preparation, try loading the Application Insights library afterwards.
+Load the Application Insights library `require("applicationinsights")` as early as possible in your scripts before you load other packages. This step is needed so that the Application Insights library can prepare later packages for tracking. If you encounter conflicts with other libraries doing similar preparation, try loading the Application Insights library afterwards.
-Because of the way JavaScript handles callbacks, more work is necessary to track a request across external dependencies and later callbacks. By default this extra tracking is enabled; disable it by calling `setAutoDependencyCorrelation(false)` as described in the [configuration](#sdk-configuration) section below.
+Because of the way JavaScript handles callbacks, more work is necessary to track a request across external dependencies and later callbacks. By default, this extra tracking is enabled. Disable it by calling `setAutoDependencyCorrelation(false)` as described in the [SDK configuration](#sdk-configuration) section.
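For example, a minimal sketch that turns the extra correlation tracking off, using the chained configuration methods shown in the SDK configuration section:

```javascript
const appInsights = require("applicationinsights");
appInsights.setup("<connection_string>")
    .setAutoDependencyCorrelation(false) // skip context tracking across async callbacks
    .start();
```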
-## Migrating from versions prior to 0.22
+## Migrate from versions prior to 0.22
There are breaking changes between releases prior to version 0.22 and those after it. These changes are designed to bring consistency with other Application Insights SDKs and allow future extensibility.
-In general, you can migrate with the following:
+In general, you can migrate with the following actions:
- Replace references to `appInsights.client` with `appInsights.defaultClient`.-- Replace references to `appInsights.getClient()` with `new appInsights.TelemetryClient()`
+- Replace references to `appInsights.getClient()` with `new appInsights.TelemetryClient()`.
- Replace all arguments to `client.track*` methods with a single object containing named properties as arguments. See your IDE's built-in type hinting or [TelemetryTypes](https://github.com/Microsoft/ApplicationInsights-node.js/tree/develop/Declarations/Contracts/TelemetryTypes) for the expected object for each type of telemetry.
-If you access SDK configuration functions without chaining them to `appInsights.setup()`, you can now find these functions at `appInsights.Configurations` (for example, `appInsights.Configuration.setAutoCollectDependencies(true)`). Review the changes to the default configuration in the next section.
+If you access SDK configuration functions without chaining them to `appInsights.setup()`, you can now find these functions at `appInsights.Configurations`. An example is `appInsights.Configuration.setAutoCollectDependencies(true)`. Review the changes to the default configuration in the next section.
## SDK configuration
appInsights.setup("<connection_string>")
To fully correlate events in a service, be sure to set `.setAutoDependencyCorrelation(true)`. With this option set, the SDK can track context across asynchronous callbacks in Node.js.
-Review their descriptions in your IDE's built-in type hinting, or [applicationinsights.ts](https://github.com/microsoft/ApplicationInsights-node.js/blob/develop/applicationinsights.ts) for detailed information and optional secondary arguments.
+Review their descriptions in your IDE's built-in type hinting or [applicationinsights.ts](https://github.com/microsoft/ApplicationInsights-node.js/blob/develop/applicationinsights.ts) for detailed information and optional secondary arguments.
> [!NOTE]
-> By default `setAutoCollectConsole` is configured to *exclude* calls to `console.log` (and other console methods). Only calls to supported third-party loggers (for example, winston and bunyan) will be collected. You can change this behavior to include calls to `console` methods by using `setAutoCollectConsole(true, true)`.
+> By default, `setAutoCollectConsole` is configured to *exclude* calls to `console.log` and other console methods. Only calls to supported third-party loggers (for example, winston and bunyan) will be collected. You can change this behavior to include calls to `console` methods by using `setAutoCollectConsole(true, true)`.
### Sampling
-By default, the SDK will send all collected data to the Application Insights service. If you want to enable sampling to reduce the amount of data, set the `samplingPercentage` field on the `config` object of a client. Setting `samplingPercentage` to 100(the default) means all data will be sent and 0 means nothing will be sent.
+By default, the SDK will send all collected data to the Application Insights service. If you want to enable sampling to reduce the amount of data, set the `samplingPercentage` field on the `config` object of a client. Setting `samplingPercentage` to 100 (the default) means all data will be sent, and 0 means nothing will be sent.
If you're using automatic correlation, all data associated with a single request will be included or excluded as a unit.
appInsights.defaultClient.config.samplingPercentage = 33; // 33% of all telemetr
appInsights.start(); ```
-### Multiple roles for multi-components applications
+### Multiple roles for multi-component applications
-If your application consists of multiple components that you wish to instrument all with the same connection string and still see these components as separate units in the portal, as if they were using separate connection strings (for example, as separate nodes on the Application Map), you may need to manually configure the RoleName field to distinguish one component's telemetry from other components sending data to your Application Insights resource.
+In some scenarios, your application might consist of multiple components that you want to instrument all with the same connection string. You want to still see these components as separate units in the portal, as if they were using separate connection strings. An example is separate nodes on Application Map. You need to manually configure the `RoleName` field to distinguish one component's telemetry from other components that send data to your Application Insights resource.
-Use the following to set the RoleName field:
+Use the following code to set the `RoleName` field:
```javascript const appInsights = require("applicationinsights");
appInsights.defaultClient.context.tags[appInsights.defaultClient.context.keys.cl
appInsights.start(); ```
-### Automatic web snippet injection (Preview)
+### Automatic web snippet injection (preview)
-Automatic web snippet injection allows you to enable [Application Insights Usage Experiences](usage-overview.md) and Browser Diagnostic Experiences with a simple configuration. It provides an easier alternative to manually adding the JavaScript snippet or NPM package to your JavaScript web code. For node server with configuration, set `enableAutoWebSnippetInjection` to `true` or alternatively set environment variable `APPLICATIONINSIGHTS_WEB_SNIPPET_ENABLED = true`. Automatic web snippet injection is available in Application Insights Node.js SDK version 2.3.0 or greater. See [Application Insights Node.js GitHub Readme](https://github.com/microsoft/ApplicationInsights-node.js#automatic-web-snippet-injectionpreview) for more information.
+You can use automatic web snippet injection to enable [Application Insights usage experiences](usage-overview.md) and browser diagnostic experiences with a simple configuration. It's an easier alternative to manually adding the JavaScript snippet or npm package to your JavaScript web code.
+
+For a Node.js server, set `enableAutoWebSnippetInjection` to `true` in the SDK configuration. Alternatively, set the environment variable `APPLICATIONINSIGHTS_WEB_SNIPPET_ENABLED = true`. Automatic web snippet injection is available in Application Insights Node.js SDK version 2.3.0 or later. For more information, see the [Application Insights Node.js GitHub readme](https://github.com/microsoft/ApplicationInsights-node.js#automatic-web-snippet-injectionpreview).
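One way to set the flag is a sketch that follows the same `defaultClient.config` pattern the sampling example in this article uses; you can also rely on the environment variable instead:

```javascript
const appInsights = require("applicationinsights");
appInsights.setup("<connection_string>");

// Turn on automatic injection of the JavaScript snippet into served web pages.
appInsights.defaultClient.config.enableAutoWebSnippetInjection = true;

appInsights.start();
```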
### Automatic third-party instrumentation
-In order to track context across asynchronous calls, some changes are required in third party libraries such as MongoDB and Redis. By default, Application Insights will use [`diagnostic-channel-publishers`](https://github.com/Microsoft/node-diagnostic-channel/tree/master/src/diagnostic-channel-publishers) to monkey-patch some of these libraries. This feature can be disabled by setting the `APPLICATION_INSIGHTS_NO_DIAGNOSTIC_CHANNEL` environment variable.
+To track context across asynchronous calls, some changes are required in third-party libraries, such as MongoDB and Redis. By default, Application Insights will use [`diagnostic-channel-publishers`](https://github.com/Microsoft/node-diagnostic-channel/tree/master/src/diagnostic-channel-publishers) to monkey-patch some of these libraries. This feature can be disabled by setting the `APPLICATION_INSIGHTS_NO_DIAGNOSTIC_CHANNEL` environment variable.
> [!NOTE]
-> By setting that environment variable, events may no longer be correctly associated with the right operation.
+> By setting that environment variable, events might not be correctly associated with the right operation.
- Individual monkey-patches can be disabled by setting the `APPLICATION_INSIGHTS_NO_PATCH_MODULES` environment variable to a comma separated list of packages to disable (for example, `APPLICATION_INSIGHTS_NO_PATCH_MODULES=console,redis`) to avoid patching the `console` and `redis` packages.
+ Individual monkey patches can be disabled by setting the `APPLICATION_INSIGHTS_NO_PATCH_MODULES` environment variable to a comma-separated list of packages to disable. For example, use `APPLICATION_INSIGHTS_NO_PATCH_MODULES=console,redis` to avoid patching the `console` and `redis` packages.
-Currently there are nine packages that are instrumented: `bunyan`,`console`,`mongodb`,`mongodb-core`,`mysql`,`redis`,`winston`,`pg`, and `pg-pool`. Visit the [diagnostic-channel-publishers' README](https://github.com/Microsoft/node-diagnostic-channel/blob/master/src/diagnostic-channel-publishers/README.md) for information about exactly which version of these packages are patched.
+Currently, nine packages are instrumented: `bunyan`, `console`, `mongodb`, `mongodb-core`, `mysql`, `redis`, `winston`, `pg`, and `pg-pool`. For information about exactly which versions of these packages are patched, see the [diagnostic-channel-publishers README](https://github.com/Microsoft/node-diagnostic-channel/blob/master/src/diagnostic-channel-publishers/README.md).
-The `bunyan`, `winston`, and `console` patches will generate Application Insights trace events based on whether `setAutoCollectConsole` is enabled. The rest will generate Application Insights Dependency events based on whether `setAutoCollectDependencies` is enabled.
+The `bunyan`, `winston`, and `console` patches will generate Application Insights trace events based on whether `setAutoCollectConsole` is enabled. The rest will generate Application Insights dependency events based on whether `setAutoCollectDependencies` is enabled.
-### Live Metrics
+### Live metrics
-To enable sending Live Metrics from your app to Azure, use `setSendLiveMetrics(true)`. Filtering of live metrics in the portal is currently not supported.
+To enable sending live metrics from your app to Azure, use `setSendLiveMetrics(true)`. Currently, filtering of live metrics in the portal isn't supported.
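For example, a minimal sketch that enables live metrics as part of SDK setup, using the chained pattern shown elsewhere in this article:

```javascript
const appInsights = require("applicationinsights");
appInsights.setup("<connection_string>")
    .setSendLiveMetrics(true) // stream live metrics to the portal
    .start();
```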
### Extended metrics > [!NOTE]
-> The ability to send extended native metrics was added in version 1.4.0
+> The ability to send extended native metrics was added in version 1.4.0.
To enable sending extended native metrics from your app to Azure, install the separate native metrics package. The SDK automatically loads the package when it's installed and starts collecting Node.js native metrics.
Currently, the native metrics package performs autocollection of garbage collect
- **Garbage collection**: The amount of CPU time spent on each type of garbage collection, and how many occurrences of each type. - **Event loop**: How many ticks occurred and how much CPU time was spent in total.-- **Heap vs non-heap**: How much of your app's memory usage is in the heap or non-heap.
+- **Heap vs. non-heap**: How much of your app's memory usage is in the heap or non-heap.
-### Distributed Tracing modes
+### Distributed tracing modes
-By default, the SDK will send headers understood by other applications/services instrumented with an Application Insights SDK. You can enable sending/receiving of [W3C Trace Context](https://github.com/w3c/trace-context) headers in addition to the existing AI headers, so you won't break correlation with any of your existing legacy services. Enabling W3C headers will allow your app to correlate with other services not instrumented with Application Insights, but do adopt this W3C standard.
+By default, the SDK will send headers understood by other applications or services instrumented with an Application Insights SDK. You can enable sending and receiving of [W3C Trace Context](https://github.com/w3c/trace-context) headers in addition to the existing AI headers. In this way, you won't break correlation with any of your existing legacy services. Enabling W3C headers will allow your app to correlate with other services not instrumented with Application Insights but that do adopt this W3C standard.
```Javascript const appInsights = require("applicationinsights");
appInsights.defaultClient.commonProperties = {
Use the following code to manually track HTTP GET requests: > [!NOTE]
-> All requests are tracked by default. To disable automatic collection, call .setAutoCollectRequests(false) before calling start().
+> All requests are tracked by default. To disable automatic collection, call `.setAutoCollectRequests(false)` before calling `start()`.
```javascript appInsights.defaultClient.trackRequest({name:"GET /customers", url:"http://myserver/customers", duration:309, resultCode:200, success:true}); ```
-Alternatively you can track requests using `trackNodeHttpRequest` method:
+Alternatively, you can track requests by using the `trackNodeHttpRequest` method:
```javascript var server = http.createServer((req, res) => {
server.on("listening", () => {
### Flush
-By default, telemetry is buffered for 15 seconds before it's sent to the ingestion server. If your application has a short lifespan, such as a CLI tool, it might be necessary to manually flush your buffered telemetry when application terminates, `appInsights.defaultClient.flush()`.
+By default, telemetry is buffered for 15 seconds before it's sent to the ingestion server. If your application has a short lifespan, such as a CLI tool, it might be necessary to manually flush your buffered telemetry when the application terminates by using `appInsights.defaultClient.flush()`.
-If the SDK detects that your application is crashing, it will call flush for you, `appInsights.defaultClient.flush({ isAppCrashing: true })`. With the flush option `isAppCrashing`, your application is assumed to be in an abnormal state, not suitable for sending telemetry. Instead, the SDK will save all buffered telemetry to [persistent storage](./data-retention-privacy.md#nodejs) and let your application terminate. When your application starts again, it will try to send any telemetry that was saved to persistent storage.
+If the SDK detects that your application is crashing, it will call flush for you by using `appInsights.defaultClient.flush({ isAppCrashing: true })`. With the flush option `isAppCrashing`, your application is assumed to be in an abnormal state and isn't suitable to send telemetry. Instead, the SDK will save all buffered telemetry to [persistent storage](./data-retention-privacy.md#nodejs) and let your application terminate. When your application starts again, it will try to send any telemetry that was saved to persistent storage.
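For example, a short-lived CLI tool might flush explicitly before it exits; a minimal sketch:

```javascript
const appInsights = require("applicationinsights");
appInsights.setup("<connection_string>").start();

appInsights.defaultClient.trackEvent({ name: "cli-task-completed" });

// Flush buffered telemetry before the process exits; otherwise items queued
// within the 15-second batching window could be lost.
appInsights.defaultClient.flush();
```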
### Preprocess data with telemetry processors
-You can process and filter collected data before it's sent for retention using *Telemetry Processors*. Telemetry processors are called one by one in the order they were added before the telemetry item is sent to the cloud.
+You can process and filter collected data before it's sent for retention by using *telemetry processors*. Telemetry processors are called one by one in the order they were added before the telemetry item is sent to the cloud.
```javascript public addTelemetryProcessor(telemetryProcessor: (envelope: Contracts.Envelope, context: { http.RequestOptions, http.ClientRequest, http.ClientResponse, correlationContext }) => boolean) ```
-If a telemetry processor returns false, that telemetry item won't be sent.
+If a telemetry processor returns `false`, that telemetry item won't be sent.
-All telemetry processors receive the telemetry data and its envelope to inspect and modify. They also receive a context object. The contents of this object is defined by the `contextObjects` parameter when calling a track method for manually tracked telemetry. For automatically collected telemetry, this object is filled with available request information and the persistent request content as provided by `appInsights.getCorrelationContext()` (if automatic dependency correlation is enabled).
+All telemetry processors receive the telemetry data and its envelope to inspect and modify. They also receive a context object. The contents of this object are defined by the `contextObjects` parameter when calling a track method for manually tracked telemetry. For automatically collected telemetry, this object is filled with available request information and the persistent request content as provided by `appInsights.getCorrelationContext()` (if automatic dependency correlation is enabled).
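For example, a sketch of a telemetry processor that drops one class of telemetry before it's sent. The `RemoteDependencyData` base type check is an assumption about the envelope shape; confirm the field names against your SDK version.

```javascript
const appInsights = require("applicationinsights");
appInsights.setup("<connection_string>").start();

// Illustrative processor: filter out dependency telemetry.
appInsights.defaultClient.addTelemetryProcessor((envelope, context) => {
    if (envelope.data && envelope.data.baseType === "RemoteDependencyData") {
        return false; // returning false prevents this item from being sent
    }
    return true; // everything else is sent unchanged
});
```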
The TypeScript type for a telemetry processor is:
otherClient.trackEvent({name: "my custom event"});
## Advanced configuration options
-The client object contains a `config` property with many optional settings for advanced scenarios. These can be set as follows:
+The client object contains a `config` property with many optional settings for advanced scenarios. To set them, use:
```javascript client.config.PROPERTYNAME = VALUE;
These properties are client specific, so you can configure `appInsights.defaultC
| connectionString | An identifier for your Application Insights resource. | | endpointUrl | The ingestion endpoint to send telemetry payloads to. | | quickPulseHost | The Live Metrics Stream host to send live metrics telemetry to. |
-| proxyHttpUrl | A proxy server for SDK HTTP traffic (Optional, Default pulled from `http_proxy` environment variable). |
-| proxyHttpsUrl | A proxy server for SDK HTTPS traffic (Optional, Default pulled from `https_proxy` environment variable). |
-| httpAgent | An http.Agent to use for SDK HTTP traffic (Optional, Default undefined). |
-| httpsAgent | An https.Agent to use for SDK HTTPS traffic (Optional, Default undefined). |
-| maxBatchSize | The maximum number of telemetry items to include in a payload to the ingestion endpoint (Default `250`). |
-| maxBatchIntervalMs | The maximum amount of time to wait to for a payload to reach maxBatchSize (Default `15000`). |
-| disableAppInsights | A flag indicating if telemetry transmission is disabled (Default `false`). |
-| samplingPercentage | The percentage of telemetry items tracked that should be transmitted (Default `100`). |
-| correlationIdRetryIntervalMs | The time to wait before retrying to retrieve the ID for cross-component correlation (Default `30000`). |
-| correlationHeaderExcludedDomains| A list of domains to exclude from cross-component correlation header injection (Default See [Config.ts](https://github.com/Microsoft/ApplicationInsights-node.js/blob/develop/Library/Config.ts)).|
+| proxyHttpUrl | A proxy server for SDK HTTP traffic. (Optional. Default is pulled from `http_proxy` environment variable.) |
+| proxyHttpsUrl | A proxy server for SDK HTTPS traffic. (Optional. Default is pulled from `https_proxy` environment variable.) |
+| httpAgent | An http.Agent to use for SDK HTTP traffic. (Optional. Default is undefined.) |
+| httpsAgent | An https.Agent to use for SDK HTTPS traffic. (Optional. Default is undefined.) |
+| maxBatchSize | The maximum number of telemetry items to include in a payload to the ingestion endpoint. (Default is `250`.) |
+| maxBatchIntervalMs | The maximum amount of time to wait for a payload to reach maxBatchSize. (Default is `15000`.) |
+| disableAppInsights | A flag indicating if telemetry transmission is disabled. (Default is `false`.) |
+| samplingPercentage | The percentage of telemetry items tracked that should be transmitted. (Default is `100`.) |
+| correlationIdRetryIntervalMs | The time to wait before retrying to retrieve the ID for cross-component correlation. (Default is `30000`.) |
+| correlationHeaderExcludedDomains| A list of domains to exclude from cross-component correlation header injection. (For the default, see [Config.ts](https://github.com/Microsoft/ApplicationInsights-node.js/blob/develop/Library/Config.ts).)|
## Next steps
azure-monitor Transaction Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/transaction-diagnostics.md
Title: Azure Application Insights Transaction Diagnostics | Microsoft Docs
-description: Application Insights end-to-end transaction diagnostics
+ Title: Application Insights transaction diagnostics | Microsoft Docs
+description: This article explains Application Insights end-to-end transaction diagnostics.
Last updated 01/19/2018
The unified diagnostics experience automatically correlates server-side telemetry from across all your Application Insights monitored components into a single view. It doesn't matter if you have multiple resources. Application Insights detects the underlying relationship and allows you to easily diagnose the application component, dependency, or exception that caused a transaction slowdown or failure.
-## What is a Component?
+## What is a component?
-Components are independently deployable parts of your distributed/microservices application. Developers and operations teams have code-level visibility or access to telemetry generated by these application components.
+Components are independently deployable parts of your distributed or microservice application. Developers and operations teams have code-level visibility or access to telemetry generated by these application components.
-* Components are different from "observed" external dependencies such as SQL, Event Hubs etc. which your team/organization may not have access to (code or telemetry).
-* Components run on any number of server/role/container instances.
-* Components can be separate Application Insights instrumentation keys (even if subscriptions are different) or different roles reporting to a single Application Insights instrumentation key. The new experience shows details across all components, regardless of how they have been set up.
+* Components are different from "observed" external dependencies, such as SQL and event hubs, which your team or organization might not have access to (code or telemetry).
+* Components run on any number of server, role, or container instances.
+* Components can be separate Application Insights instrumentation keys, even if subscriptions are different. Components also can be different roles that report to a single Application Insights instrumentation key. The new experience shows details across all components, regardless of how they were set up.
> [!NOTE]
-> * **Missing the related item links?** All of the related telemetry are in the [top](#cross-component-transaction-chart) and [bottom](#all-telemetry-with-this-operation-id) sections of the left side.
+> Are you missing the related item links? All the related telemetry is on the left side in the [top](#cross-component-transaction-chart) and [bottom](#all-telemetry-with-this-operation-id) sections.
## Transaction diagnostics experience
-This view has four key parts: results list, a cross-component transaction chart, a time-sequence list of all telemetry related to this operation, and the details pane for any selected telemetry item on the left.
-![Key parts](media/transaction-diagnostics/4partsCrossComponent.png)
+This view has four key parts: a results list, a cross-component transaction chart, a time-sequence list of all telemetry related to this operation, and the details pane for any selected telemetry item on the left.
+
+![Screenshot that shows the four key parts of the view.](media/transaction-diagnostics/4partsCrossComponent.png)
## Cross-component transaction chart This chart provides a timeline with horizontal bars for the duration of requests and dependencies across components. Any exceptions that are collected are also marked on the timeline.
-* The top row on this chart represents the entry point, the incoming request to the first component called in this transaction. The duration is the total time taken for the transaction to complete.
-* Any calls to external dependencies are simple non-collapsible rows, with icons representing the dependency type.
+* The top row on this chart represents the entry point. It's the incoming request to the first component called in this transaction. The duration is the total time taken for the transaction to complete.
+* Any calls to external dependencies are simple noncollapsible rows, with icons that represent the dependency type.
* Calls to other components are collapsible rows. Each row corresponds to a specific operation invoked at the component.
-* By default, the request, dependency, or exception that you selected is displayed on the right side.
-* Select any row to see its [details on the right](#details-of-the-selected-telemetry).
+* By default, the request, dependency, or exception that you selected appears on the right side.
+* Select any row to see its [details on the right](#details-of-the-selected-telemetry).
> [!NOTE]
-> Calls to other components have two rows: one row represents the outbound call (dependency) from the caller component, and the other row corresponds to the inbound request at the called component. The leading icon and distinct styling of the duration bars help differentiate between them.
+> Calls to other components have two rows. One row represents the outbound call (dependency) from the caller component. The other row corresponds to the inbound request at the called component. The leading icon and distinct styling of the duration bars help differentiate between them.
## All telemetry with this Operation ID
-This section shows flat list view in a time sequence of all the telemetry related to this transaction. It also shows the custom events, and traces that aren't displayed in the transaction chart. You can filter this list to telemetry generated by a specific component/call. You can select any telemetry item in this list to see corresponding [details on the right](#details-of-the-selected-telemetry).
+This section shows a flat list view in a time sequence of all the telemetry related to this transaction. It also shows the custom events and traces that aren't displayed in the transaction chart. You can filter this list to telemetry generated by a specific component or call. You can select any telemetry item in this list to see corresponding [details on the right](#details-of-the-selected-telemetry).
-![Time sequence of all telemetry](media/transaction-diagnostics/allTelemetryDrawerOpened.png)
+![Screenshot that shows the time sequence of all telemetry.](media/transaction-diagnostics/allTelemetryDrawerOpened.png)
## Details of the selected telemetry
-This collapsible pane shows the detail of any selected item from the transaction chart, or the list. "Show all" lists all of the standard attributes that are collected. Any custom attributes are separately listed below the standard set. Select the "..." below the stack trace window to get an option to copy the trace. "Open profiler traces" or "Open debug snapshot" shows code level diagnostics in corresponding detail panes.
+This collapsible pane shows the detail of any selected item from the transaction chart or the list. **Show all** lists all the standard attributes that are collected. Any custom attributes are listed separately under the standard set. Select the ellipsis button (...) under the **Call Stack** trace window to get an option to copy the trace. **Open profiler traces** and **Open debug snapshot** show code-level diagnostics in corresponding detail panes.
-![Exception detail](media/transaction-diagnostics/exceptiondetail.png)
+![Screenshot that shows exception details.](media/transaction-diagnostics/exceptiondetail.png)
## Search results
-This collapsible pane shows the other results that meet the filter criteria. Select any result to update the respective details the three sections listed above. We try to find samples that are most likely to have the details available from all components even if sampling is in effect in any of them. These are shown as "suggested" samples.
+This collapsible pane shows the other results that meet the filter criteria. Select any result to update the respective details of the preceding three sections. We try to find samples that are most likely to have the details available from all components, even if sampling is in effect in any of them. These samples are shown as suggestions.
-![Search results](media/transaction-diagnostics/searchResults.png)
+![Screenshot that shows search results.](media/transaction-diagnostics/searchResults.png)
-## Profiler and snapshot debugger
+## Profiler and Snapshot Debugger
-[Application Insights profiler](./profiler.md) or [snapshot debugger](snapshot-debugger.md) help with code-level diagnostics of performance and failure issues. With this experience, you can see profiler traces or snapshots from any component with a single selection.
+[Application Insights Profiler](./profiler.md) or [Snapshot Debugger](snapshot-debugger.md) help with code-level diagnostics of performance and failure issues. With this experience, you can see Profiler traces or snapshots from any component with a single selection.
-If you couldn't get Profiler working, contact **serviceprofilerhelp\@microsoft.com**
+If you can't get Profiler working, contact serviceprofilerhelp\@microsoft.com.
-If you couldn't get Snapshot Debugger working, contact **snapshothelp\@microsoft.com**
+If you can't get Snapshot Debugger working, contact snapshothelp\@microsoft.com.
-![Profiler Integration](media/transaction-diagnostics/profilerTraces.png)
+![Screenshot that shows Profiler integration.](media/transaction-diagnostics/profilerTraces.png)
## FAQ
-*I see a single component on the chart, and the others are only showing as external dependencies without any detail of what happened within those components.*
+This section provides answers to common questions.
+
+### Why do I see a single component on the chart and the other components only show as external dependencies without any details?
Potential reasons: * Are the other components instrumented with Application Insights? * Are they using the latest stable Application Insights SDK?
-* If these components are separate Application Insights resources, do you have required [access](resources-roles-access-control.md)
-If you do have access and the components are instrumented with the latest Application Insights SDKs, let us know via the top right feedback channel.
+* If these components are separate Application Insights resources, do you have required [access](resources-roles-access-control.md)?
+If you do have access and the components are instrumented with the latest Application Insights SDKs, let us know via the feedback channel in the upper-right corner.
+
+### I see duplicate rows for the dependencies. Is this behavior expected?
-*I see duplicate rows for the dependencies. Is this expected?*
+Currently, we're showing the outbound dependency call separate from the inbound request. Typically, the two calls look identical with only the duration value being different because of the network round trip. The leading icon and distinct styling of the duration bars help differentiate between them. Is this presentation of the data confusing? Give us your feedback!
-At this time, we're showing the outbound dependency call separate from the inbound request. Typically, the two calls look identical with only the duration value being different due to the network round trip. The leading icon and distinct styling of the duration bars help differentiate between them. Is this presentation of the data confusing? Give us your feedback!
+### What about clock skews across different component instances?
-*What about clock skews across different component instances?*
+Timelines are adjusted for clock skews in the transaction chart. You can see the exact timestamps in the details pane or by using Log Analytics.
-Timelines are adjusted for clock skews in the transaction chart. You can see the exact timestamps in the details pane or by using Analytics.
+### Why is the new experience missing most of the related items queries?
-*Why is the new experience missing most of the related items queries?*
+This behavior is by design. All the related items, across all components, are already available on the left side in the top and bottom sections. The new experience has two related items that the left side doesn't cover: all telemetry from five minutes before and after this event and the user timeline.
-This is by design. All of the related items, across all components, are already available on the left side (top and bottom sections). The new experience has two related items that the left side doesn't cover: all telemetry from five minutes before and after this event and the user timeline.
+### Is there a way to see fewer events per transaction when I use the Application Insights JavaScript SDK?
-*I see more events than expected in the transaction diagnostics experience when using the Application Insights JavaScript SDK. Is there a way to see fewer events per transaction?*
+The transaction diagnostics experience shows all telemetry in a [single operation](correlation.md#data-model-for-telemetry-correlation) that shares an [Operation ID](data-model-context.md#operation-id). By default, the Application Insights SDK for JavaScript creates a new operation for each unique page view. In a single-page application (SPA), only one page view event will be generated and a single Operation ID will be used for all telemetry generated. As a result, many events might be correlated to the same operation.
-The transaction diagnostics experience shows all telemetry in a [single operation](correlation.md#data-model-for-telemetry-correlation) that share an [Operation ID](data-model-context.md#operation-id). By default, the Application Insights SDK for JavaScript creates a new operation for each unique page view. In a Single Page Application (SPA), only one page view event will be generated and a single Operation ID will be used for all telemetry generated, this can result in many events being correlated to the same operation. In these scenarios, you can use Automatic Route Tracking to automatically create new operations for navigation in your single page app. You must turn on [enableAutoRouteTracking](javascript.md#single-page-applications) so a page view is generated every time the URL route is updated (logical page view occurs). If you want to manually refresh the Operation ID, you can do so by calling `appInsights.properties.context.telemetryTrace.traceID = Microsoft.ApplicationInsights.Telemetry.Util.generateW3CId()`. Manually triggering a PageView event will also reset the Operation ID.
+In these scenarios, you can use Automatic Route Tracking to automatically create new operations for navigation in your SPA. You must turn on [enableAutoRouteTracking](javascript.md#single-page-applications) so that a page view is generated every time the URL route is updated (logical page view occurs). If you want to manually refresh the Operation ID, call `appInsights.properties.context.telemetryTrace.traceID = Microsoft.ApplicationInsights.Telemetry.Util.generateW3CId()`. Manually triggering a PageView event also resets the Operation ID.
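For example, a sketch that enables route tracking when you initialize the JavaScript (web) SDK via npm; if you use the snippet-based setup, add `enableAutoRouteTracking: true` to the snippet configuration instead. Older SDK versions might require `instrumentationKey` rather than `connectionString`.

```javascript
import { ApplicationInsights } from '@microsoft/applicationinsights-web';

const appInsights = new ApplicationInsights({
    config: {
        connectionString: '<connection_string>',
        // Emit a new page view (and a new Operation ID) on each route change in the SPA.
        enableAutoRouteTracking: true
    }
});
appInsights.loadAppInsights();
```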
-*Why do transaction detail durations not add up to the top-request duration?*
+### Why do transaction detail durations not add up to the top-request duration?
-Time not explained in the gantt chart, is time that isn't covered by a tracked dependency.
-This can be due to either external calls that weren't instrumented (automatically or manually), or that the time taken was in process rather than because of an external call.
+Time not explained in the Gantt chart is time that isn't covered by a tracked dependency. This issue can occur because external calls weren't instrumented, either automatically or manually. It can also occur because the time taken was in process rather than because of an external call.
If all calls were instrumented, in-process time is the likely root cause for the time spent. A useful tool for diagnosing the process is the [Application Insights profiler](./profiler.md).
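If an external call wasn't captured automatically, one option is to report it yourself so its time appears as a tracked dependency. The following sketch (not from the article; it assumes an initialized `appInsights` instance from the JavaScript SDK and uses a placeholder endpoint and names) illustrates manual dependency tracking with `trackDependencyData`:

```typescript
// Sketch only: manually report an external call that wasn't auto-instrumented.
// The endpoint, names, and values are illustrative placeholders.
import { ApplicationInsights } from "@microsoft/applicationinsights-web";
declare const appInsights: ApplicationInsights; // assumed to be initialized elsewhere

async function fetchItemsWithTracking(): Promise<Response> {
  const started = Date.now();
  const response = await fetch("https://example.com/api/items"); // placeholder external call

  appInsights.trackDependencyData({
    id: crypto.randomUUID(),               // unique id for this dependency call
    name: "GET /api/items",                // how the call appears in the transaction view
    target: "example.com",
    type: "HTTP",
    data: "https://example.com/api/items",
    duration: Date.now() - started,        // elapsed time in milliseconds
    success: response.ok,
    responseCode: response.status
  });

  return response;
}
```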
azure-monitor Tutorial App Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-app-dashboards.md
Title: Create custom dashboards in Azure Application Insights | Microsoft Docs
-description: Tutorial to create custom KPI dashboards using Azure Application Insights.
+ Title: Create custom dashboards in Application Insights | Microsoft Docs
+description: This tutorial shows you how to create custom KPI dashboards using Application Insights.
Last updated 09/30/2020
-# Create custom KPI dashboards using Azure Application Insights
+# Create custom KPI dashboards using Application Insights
-You can create multiple dashboards in the Azure portal that each include tiles visualizing data from multiple Azure resources across different resource groups and subscriptions. You can pin different charts and views from Azure Application Insights to create custom dashboards that provide you with complete picture of the health and performance of your application. This tutorial walks you through the creation of a custom dashboard that includes multiple types of data and visualizations from Azure Application Insights.
+You can create multiple dashboards in the Azure portal that include tiles visualizing data from multiple Azure resources across different resource groups and subscriptions. You can pin different charts and views from Application Insights to create custom dashboards that provide you with a complete picture of the health and performance of your application. This tutorial walks you through the creation of a custom dashboard that includes multiple types of data and visualizations from Application Insights.
You learn how to: > [!div class="checklist"]
-> * Create a custom dashboard in Azure
-> * Add a tile from the Tile Gallery
-> * Add standard metrics in Application Insights to the dashboard
-> * Add a custom metric chart Application Insights to the dashboard
-> * Add the results of a Logs (Analytics) query to the dashboard
+> * Create a custom dashboard in Azure.
+> * Add a tile from the **Tile Gallery**.
+> * Add standard metrics in Application Insights to the dashboard.
+> * Add a custom metric chart based on Application Insights to the dashboard.
+> * Add the results of a Log Analytics query to the dashboard.
## Prerequisites To complete this tutorial: -- Deploy a .NET application to Azure and [enable the Application Insights SDK](../app/asp-net.md).
+- Deploy a .NET application to Azure.
+- Enable the [Application Insights SDK](../app/asp-net.md).
> [!NOTE] > Required permissions for working with dashboards are discussed in the article on [understanding access control for dashboards](../../azure-portal/azure-portal-dashboard-share-access.md#understanding-access-control-for-dashboards). ## Sign in to Azure
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+Sign in to the [Azure portal](https://portal.azure.com).
## Create a new dashboard > [!WARNING]
-> If you move your Application Insights resource over to a different resource group or subscription, you will need to manually update the dashboard by removing the old tiles and pinning new tiles from the same Application Insights resource at new location.
+> If you move your Application Insights resource over to a different resource group or subscription, you'll need to manually update the dashboard by removing the old tiles and pinning new tiles from the same Application Insights resource at the new location.
-A single dashboard can contain resources from multiple applications, resource groups, and subscriptions. Start the tutorial by creating a new dashboard for your application.
+A single dashboard can contain resources from multiple applications, resource groups, and subscriptions. Start the tutorial by creating a new dashboard for your application.
-1. In the menu dropdown on the left in Azure portal, select **Dashboard**.
+1. In the menu dropdown on the left in the Azure portal, select **Dashboard**.
- ![Azure Portal menu dropdown](media/tutorial-app-dashboards/dashboard-from-menu.png)
+ ![Screenshot that shows the Azure portal menu dropdown.](media/tutorial-app-dashboards/dashboard-from-menu.png)
-2. On the dashboard pane, select **New dashboard** then **Blank dashboard**.
+1. On the **Dashboard** pane, select **New dashboard** > **Blank dashboard**.
- ![New dashboard](media/tutorial-app-dashboards/new-dashboard.png)
+ ![Screenshot that shows the Dashboard pane.](media/tutorial-app-dashboards/new-dashboard.png)
-3. Type a name for the dashboard.
-4. Have a look at the **Tile Gallery** for a variety of tiles that you can add to your dashboard. In addition to adding tiles from the gallery, you can pin charts and other views directly from Application Insights to the dashboard.
-5. Locate the **Markdown** tile and drag it on to your dashboard. This tile allows you to add text formatted in markdown, which is ideal for adding descriptive text to your dashboard. To learn more, see [Use a markdown tile on Azure dashboards to show custom content](../../azure-portal/azure-portal-markdown-tile.md).
-6. Add text to the tile's properties and resize it on the dashboard canvas.
+1. Enter a name for the dashboard.
+1. Look at the **Tile Gallery** for various tiles that you can add to your dashboard. You can also pin charts and other views directly from Application Insights to the dashboard.
+1. Locate the **Markdown** tile and drag it on to your dashboard. With this tile, you can add text formatted in Markdown, which is ideal for adding descriptive text to your dashboard. To learn more, see [Use a Markdown tile on Azure dashboards to show custom content](../../azure-portal/azure-portal-markdown-tile.md).
+1. Add text to the tile's properties and resize it on the dashboard canvas.
- [![Edit markdown tile](media/tutorial-app-dashboards/markdown.png)](media/tutorial-app-dashboards/markdown.png#lightbox)
+ [![Screenshot that shows the Edit Markdown tile.](media/tutorial-app-dashboards/markdown.png)](media/tutorial-app-dashboards/markdown.png#lightbox)
-7. Select **Done customizing** at the top of the screen to exit tile customization mode.
+1. Select **Done customizing** at the top of the screen to exit tile customization mode.
## Add health overview
-A dashboard with static text isn't very interesting, so now add a tile from Application Insights to show information about your application. You can add Application Insights tiles from the Tile Gallery, or you can pin them directly from Application Insights screens. This allows you to configure charts and views that you're already familiar with before pinning them to your dashboard. Start by adding the standard health overview for your application. This requires no configuration and allows minimal customization in the dashboard.
+A dashboard with static text isn't very interesting, so add a tile from Application Insights to show information about your application. You can add Application Insights tiles from the **Tile Gallery**. You can also pin them directly from Application Insights screens. In this way, you can configure charts and views that you're already familiar with before you pin them to your dashboard.
+Start by adding the standard health overview for your application. This tile requires no configuration and allows minimal customization in the dashboard.
1. Select your **Application Insights** resource on the home screen.
-2. In the **Overview** pane, select the pin icon ![pin icon](media/tutorial-app-dashboards/pushpin.png) to add the tile to a dashboard.
-3. In the "Pin to dashboard" tab, select which dashboard to add the tile to or create a new one.
-
-3. In the top right, a notification will appear that your tile was pinned to your dashboard. Select **Pinned to dashboard** in the notification to return to your dashboard or use the dashboard pane.
-4. That tile is now added to your dashboard. Select **Edit** to change the positioning of the tile. Select and drag it into position and then select **Done customizing**. Your dashboard now has a tile with some useful information.
+1. On the **Overview** pane, select the pin icon ![pin icon](media/tutorial-app-dashboards/pushpin.png) to add the tile to a dashboard.
+1. On the **Pin to dashboard** tab, select which dashboard to add the tile to or create a new one.
+1. At the top right, a notification appears that your tile was pinned to your dashboard. Select **Pinned to dashboard** in the notification to return to your dashboard or use the **Dashboard** pane.
+1. Select **Edit** to change the positioning of the tile you added to your dashboard. Select and drag it into position and then select **Done customizing**. Your dashboard now has a tile with some useful information.
- [![Dashboard in edit mode](media/tutorial-app-dashboards/dashboard-edit-mode.png)](media/tutorial-app-dashboards/dashboard-edit-mode.png#lightbox)
+ [![Screenshot that shows the dashboard in edit mode.](media/tutorial-app-dashboards/dashboard-edit-mode.png)](media/tutorial-app-dashboards/dashboard-edit-mode.png#lightbox)
## Add custom metric chart
-The **Metrics** panel allows you to graph a metric collected by Application Insights over time with optional filters and grouping. Like everything else in Application Insights, you can add this chart to the dashboard. This does require you to do a little customization first.
+You can use the **Metrics** panel to graph a metric collected by Application Insights over time with optional filters and grouping. Like everything else in Application Insights, you can add this chart to the dashboard. This step does require you to do a little customization first.
-1. Select your **Application Insights** resource in the home screen.
-1. Select **Metrics**.
-2. An empty chart has already been created, and you're prompted to add a metric. Add a metric to the chart and optionally add a filter and a grouping. The example below shows the number of server requests grouped by success. This gives a running view of successful and unsuccessful requests.
+1. Select your **Application Insights** resource on the home screen.
+1. Select **Metrics**.
+1. An empty chart appears, and you're prompted to add a metric. Add a metric to the chart and optionally add a filter and a grouping. The following example shows the number of server requests grouped by success. This chart gives a running view of successful and unsuccessful requests.
- [![Add metric](media/tutorial-app-dashboards/metrics.png)](media/tutorial-app-dashboards/metrics.png#lightbox)
+ [![Screenshot that shows adding a metric.](media/tutorial-app-dashboards/metrics.png)](media/tutorial-app-dashboards/metrics.png#lightbox)
-4. Select **Pin to dashboard** on the right.
+1. Select **Pin to dashboard** on the right.
-3. In the top right, a notification will appear that your tile was pinned to your dashboard. Select **Pinned to dashboard** in the notification to return to your dashboard or use the dashboard tab.
+1. In the top right, a notification appears that your tile was pinned to your dashboard. Select **Pinned to dashboard** in the notification to return to your dashboard or use the dashboard tab.
-4. That tile is now added to your dashboard. Select **Edit** to change the positioning of the tile. Select and drag the tile into position and then select **Done customizing**.
+1. That tile is now added to your dashboard. Select **Edit** to change the positioning of the tile. Select and drag the tile into position and then select **Done customizing**.
-## Add Logs query
+## Add a logs query
-Azure Application Insights Logs provides a rich query language that allows you to analyze all of the data collected Application Insights. Just like charts and other views, you can add the output of a logs query to your dashboard.
+Application Insights Logs provides a rich query language that you can use to analyze all the data collected by Application Insights. Like with charts and other views, you can add the output of a logs query to your dashboard.
1. Select your **Application Insights** resource in the home screen.
-2. Select **Logs** on the left under "monitoring" to open the Logs tab.
-3. Type the following query, which returns the top 10 most requested pages and their request count:
+1. On the left under **Monitoring**, select **Logs** to open the **Logs** tab.
+1. Enter the following query, which returns the top 10 most requested pages and their request count:
``` Kusto requests
Azure Application Insights Logs provides a rich query language that allows you t
| take 10 ```
-4. Select **Run** to validate the results of the query.
-5. Select the pin icon ![Pin icon](media/tutorial-app-dashboards/pushpin.png) and select the name of your dashboard.
+1. Select **Run** to validate the results of the query.
+1. Select the pin icon ![Pin icon](media/tutorial-app-dashboards/pushpin.png) and then select the name of your dashboard.
-5. Before you go back to the dashboard, add another query, but render it as a chart so you see the different ways to visualize a logs query in a dashboard. Start with the following query that summarizes the top 10 operations with the most exceptions.
+1. Before you go back to the dashboard, add another query, but render it as a chart. Now you'll see the different ways to visualize a logs query in a dashboard. Start with the following query that summarizes the top 10 operations with the most exceptions:
``` Kusto exceptions
Azure Application Insights Logs provides a rich query language that allows you t
| take 10 ```
-6. Select **Chart** and then change to a **Doughnut** to visualize the output.
+1. Select **Chart** and then select **Doughnut** to visualize the output.
- [![Doughnut chart with above query](media/tutorial-app-dashboards/logs-doughnut.png)](media/tutorial-app-dashboards/logs-doughnut.png#lightbox)
+ [![Screenshot that shows the doughnut chart with the preceding query.](media/tutorial-app-dashboards/logs-doughnut.png)](media/tutorial-app-dashboards/logs-doughnut.png#lightbox)
-6. Select the pin icon ![Pin icon](media/tutorial-app-dashboards/pushpin.png) on the top right to pin the chart to your dashboard and then return to your dashboard.
-7. The results of the queries are now added to your dashboard in the format that you selected. Select and drag each into position and then select **Done customizing**.
-8. Select the pencil icon ![Pencil icon](media/tutorial-app-dashboards/pencil.png) on each title to give them a descriptive title.
+1. Select the pin icon ![Pin icon](media/tutorial-app-dashboards/pushpin.png) at the top right to pin the chart to your dashboard. Then return to your dashboard.
+1. The results of the queries are added to your dashboard in the format that you selected. Select and drag each result into position. Then select **Done customizing**.
+1. Select the pencil icon ![Pencil icon](media/tutorial-app-dashboards/pencil.png) on each title and use it to make the titles descriptive.
## Share dashboard 1. At the top of the dashboard, select **Share** to publish your changes.
-2. You can optionally define specific users who should have access to the dashboard. For more information, see [Share Azure dashboards by using Azure role-based access control](../../azure-portal/azure-portal-dashboard-share-access.md).
-3. Select **Publish**.
+1. You can optionally define specific users who should have access to the dashboard. For more information, see [Share Azure dashboards by using Azure role-based access control](../../azure-portal/azure-portal-dashboard-share-access.md).
+1. Select **Publish**.
## Next steps
-Now that you've learned how to create custom dashboards, have a look at the rest of the Application Insights documentation including a case study.
+In this tutorial, you learned how to create custom dashboards. Now look at the rest of the Application Insights documentation, which also includes a case study.
> [!div class="nextstepaction"] > [Deep diagnostics](../app/devops.md)
azure-monitor Autoscale Multiprofile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-multiprofile.md
+
+ Title: Autoscale with multiple profiles
+description: "Using multiple and recurring profiles in autoscale"
+++++ Last updated : 09/30/2022+++
+# Customer intent: As a user or dev ops administrator, I want to understand how to set up autoscale with more than one profile so I can scale my resources with more flexibility.
++
+# Autoscale with multiple profiles
+
+Scaling your resources for a particular day of the week, or for a specific date and time, can reduce your costs while still providing the capacity you need when you need it.
+
+You can use multiple profiles in autoscale to scale in different ways at different times. For example, if your business isn't active on the weekend, create a recurring profile to scale in your resources on Saturdays and Sundays. If Black Friday is a busy day, create a profile to automatically scale out your resources on Black Friday.
+
+This article explains the different profiles in autoscale and how to use them.
+
+You can have one or more profiles in your autoscale setting.
+
+There are three types of profile:
+
+* The default profile. Autoscale creates the default profile automatically, and it can't be deleted. The default profile isn't dependent on a schedule and is used when no other profile matches the current date and time.
+* Recurring profiles. A recurring profile is valid for a specific time range and repeats for selected days of the week.
+* Fixed date and time profiles. A profile that is valid for a time range on a specific date.
+
+Each time the autoscale service runs, the profiles are evaluated in the following order:
+
+1. Fixed date profiles
+1. Recurring profiles
+1. Default profile
+
+If a profile's date and time settings match the current time, autoscale will apply that profile's rules and capacity limits. Only the first applicable profile is used.
+
+The example below shows an autoscale setting with a default profile and a recurring profile.
++
+In the above example, on Monday after 6 AM, the recurring profile is used. If the instance count is less than three, autoscale scales to the new minimum of three. Autoscale continues to use this profile and scales based on CPU% until Monday at 6 PM. At all other times, scaling is done according to the default profile, based on the number of requests. After 6 PM on Monday, autoscale switches to the default profile. If, for example, the number of instances at that time is 12, autoscale scales in to 10, which is the maximum allowed for the default profile.
+
+## Multiple profiles using templates, CLI, and PowerShell
+
+When creating multiple profiles using templates, the CLI, and PowerShell, follow the guidelines below.
+
+## [ARM templates](#tab/templates)
+
+See the autoscale section of the [ARM template resource definition](https://learn.microsoft.com/azure/templates/microsoft.insights/autoscalesettings) for a full template reference.
+
+Follow these rules when you use ARM templates to create autoscale settings with multiple profiles:
+
+* Create a default profile for each recurring profile. If you have two recurring profiles, create two matching default profiles.
+* The default profile must contain a `recurrence` section that is the same as the recurring profile, with the `hours` and `minutes` elements set for the end time of the recurring profile. If you don't specify a recurrence with a start time for the default profile, the last recurrence rule will remain in effect.
+* The `name` element for the default profile is an object with the following format: `"name": "{\"name\":\"Auto created default scale condition\",\"for\":\"Recurring profile name\"}"` where the recurring profile name is the value of the `name` element for the recurring profile. If the name isn't specified correctly, the default profile will appear as another recurring profile.
+* The rules above don't apply to non-recurring scheduled profiles.
+
+## Add a recurring profile using ARM templates
+
+The example below shows how to create two recurring profiles: one for weekends, between 06:00 and 19:00 on Saturday and Sunday, and a second for Mondays between 04:00 and 15:00. Note the two default profiles, one for each recurring profile.
+
+Use the following command to deploy the template:
+`az deployment group create --name VMSS1-Autoscale-607 --resource-group rg-vmss1 --template-file VMSS1-autoscale.json`
+where *VMSS1-autoscale.json* is the file containing the JSON object below.
+
+``` JSON
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "resources": [
+ {
+ "type": "Microsoft.Insights/autoscaleSettings",
+ "apiVersion": "2015-04-01",
+ "name": "VMSS1-Autoscale-607",
+ "location": "eastus",
+ "properties": {
+
+ "name": "VMSS1-Autoscale-607",
+ "enabled": true,
+ "targetResourceUri": "/subscriptions/abc123456-987-f6e5-d43c-9a8d8e7f6541/resourceGroups/rg-vmss1/providers/Microsoft.Compute/virtualMachineScaleSets/VMSS1",
+ "profiles": [
+ {
+ "name": "Monday profile",
+ "capacity": {
+ "minimum": "3",
+ "maximum": "20",
+ "default": "3"
+ },
+ "rules": [
+ {
+ "scaleAction": {
+ "direction": "Increase",
+ "type": "ChangeCount",
+ "value": "1",
+ "cooldown": "PT5M"
+ },
+ "metricTrigger": {
+ "metricName": "Inbound Flows",
+ "metricNamespace": "microsoft.compute/virtualmachinescalesets",
+ "metricResourceUri": "/subscriptions/abc123456-987-f6e5-d43c-9a8d8e7f6541/resourceGroups/rg-vmss1/providers/Microsoft.Compute/virtualMachineScaleSets/VMSS1",
+ "operator": "GreaterThan",
+ "statistic": "Average",
+ "threshold": 100,
+ "timeAggregation": "Average",
+ "timeGrain": "PT1M",
+ "timeWindow": "PT10M",
+ "Dimensions": [],
+ "dividePerInstance": true
+ }
+ },
+ {
+ "scaleAction": {
+ "direction": "Decrease",
+ "type": "ChangeCount",
+ "value": "1",
+ "cooldown": "PT5M"
+ },
+ "metricTrigger": {
+ "metricName": "Inbound Flows",
+ "metricNamespace": "microsoft.compute/virtualmachinescalesets",
+ "metricResourceUri": "/subscriptions/abc123456-987-f6e5-d43c-9a8d8e7f6541/resourceGroups/rg-vmss1/providers/Microsoft.Compute/virtualMachineScaleSets/VMSS1",
+ "operator": "LessThan",
+ "statistic": "Average",
+ "threshold": 60,
+ "timeAggregation": "Average",
+ "timeGrain": "PT1M",
+ "timeWindow": "PT10M",
+ "Dimensions": [],
+ "dividePerInstance": true
+ }
+ }
+ ],
+ "recurrence": {
+ "frequency": "Week",
+ "schedule": {
+ "timeZone": "E. Europe Standard Time",
+ "days": [
+ "Monday"
+ ],
+ "hours": [
+ 4
+ ],
+ "minutes": [
+ 0
+ ]
+ }
+ }
+ },
+ {
+ "name": "Weekend profile",
+ "capacity": {
+ "minimum": "1",
+ "maximum": "3",
+ "default": "1"
+ },
+ "rules": [],
+ "recurrence": {
+ "frequency": "Week",
+ "schedule": {
+ "timeZone": "E. Europe Standard Time",
+ "days": [
+ "Saturday",
+ "Sunday"
+ ],
+ "hours": [
+ 6
+ ],
+ "minutes": [
+ 0
+ ]
+ }
+ }
+ },
+ {
+ "name": "{\"name\":\"Auto created default scale condition\",\"for\":\"Weekend profile\"}",
+ "capacity": {
+ "minimum": "2",
+ "maximum": "10",
+ "default": "2"
+ },
+ "recurrence": {
+ "frequency": "Week",
+ "schedule": {
+ "timeZone": "E. Europe Standard Time",
+ "days": [
+ "Saturday",
+ "Sunday"
+ ],
+ "hours": [
+ 19
+ ],
+ "minutes": [
+ 0
+ ]
+ }
+ },
+ "rules": [
+ {
+ "scaleAction": {
+ "direction": "Increase",
+ "type": "ChangeCount",
+ "value": "1",
+ "cooldown": "PT3M"
+ },
+ "metricTrigger": {
+ "metricName": "Percentage CPU",
+ "metricNamespace": "microsoft.compute/virtualmachinescalesets",
+ "metricResourceUri": "/subscriptions/abc123456-987-f6e5-d43c-9a8d8e7f6541/resourceGroups/rg-vmss1/providers/Microsoft.Compute/virtualMachineScaleSets/VMSS1",
+ "operator": "GreaterThan",
+ "statistic": "Average",
+ "threshold": 50,
+ "timeAggregation": "Average",
+ "timeGrain": "PT1M",
+ "timeWindow": "PT1M",
+ "Dimensions": [],
+ "dividePerInstance": false
+ }
+ },
+ {
+ "scaleAction": {
+ "direction": "Decrease",
+ "type": "ChangeCount",
+ "value": "1",
+ "cooldown": "PT3M"
+ },
+ "metricTrigger": {
+ "metricName": "Percentage CPU",
+ "metricNamespace": "microsoft.compute/virtualmachinescalesets",
+ "metricResourceUri": "/subscriptions/abc123456-987-f6e5-d43c-9a8d8e7f6541/resourceGroups/rg-vmss1/providers/Microsoft.Compute/virtualMachineScaleSets/VMSS1",
+ "operator": "LessThan",
+ "statistic": "Average",
+ "threshold": 39,
+ "timeAggregation": "Average",
+ "timeGrain": "PT1M",
+ "timeWindow": "PT3M",
+ "Dimensions": [],
+ "dividePerInstance": false
+ }
+ }
+ ]
+ },
+ {
+ "name": "{\"name\":\"Auto created default scale condition\",\"for\":\"Monday profile\"}",
+ "capacity": {
+ "minimum": "2",
+ "maximum": "10",
+ "default": "2"
+ },
+ "recurrence": {
+ "frequency": "Week",
+ "schedule": {
+ "timeZone": "E. Europe Standard Time",
+ "days": [
+ "Monday"
+ ],
+ "hours": [
+ 15
+ ],
+ "minutes": [
+ 0
+ ]
+ }
+ },
+ "rules": [
+ {
+ "scaleAction": {
+ "direction": "Increase",
+ "type": "ChangeCount",
+ "value": "1",
+ "cooldown": "PT3M"
+ },
+ "metricTrigger": {
+ "metricName": "Percentage CPU",
+ "metricNamespace": "microsoft.compute/virtualmachinescalesets",
+ "metricResourceUri": "/subscriptions/abc123456-987-f6e5-d43c-9a8d8e7f6541/resourceGroups/rg-vmss1/providers/Microsoft.Compute/virtualMachineScaleSets/VMSS1",
+ "operator": "GreaterThan",
+ "statistic": "Average",
+ "threshold": 50,
+ "timeAggregation": "Average",
+ "timeGrain": "PT1M",
+ "timeWindow": "PT1M",
+ "Dimensions": [],
+ "dividePerInstance": false
+ }
+ },
+ {
+ "scaleAction": {
+ "direction": "Decrease",
+ "type": "ChangeCount",
+ "value": "1",
+ "cooldown": "PT3M"
+ },
+ "metricTrigger": {
+ "metricName": "Percentage CPU",
+ "metricNamespace": "microsoft.compute/virtualmachinescalesets",
+ "metricResourceUri": "/subscriptions/abc123456-987-f6e5-d43c-9a8d8e7f6541/resourceGroups/rg-vmss1/providers/Microsoft.Compute/virtualMachineScaleSets/VMSS1",
+ "operator": "LessThan",
+ "statistic": "Average",
+ "threshold": 39,
+ "timeAggregation": "Average",
+ "timeGrain": "PT1M",
+ "timeWindow": "PT3M",
+ "Dimensions": [],
+ "dividePerInstance": false
+ }
+ }
+ ]
+ }
+ ],
+ "notifications": [],
+ "targetResourceLocation": "eastus"
+ }
+
+ }
+ ]
+}
+
+```
+
+## [CLI](#tab/cli)
+
+The CLI can be used to create multiple profiles in your autoscale settings.
+
+See the [Autoscale CLI reference](https://learn.microsoft.com/cli/azure/monitor/autoscale?view=azure-cli-latest) for the full set of autoscale CLI commands.
+
+The following steps show how to create a recurring autoscale profile using the CLI.
+
+1. Create the recurring profile by using `az monitor autoscale profile create`. Specify the `--start` and `--end` times and the `--recurrence` parameters.
+1. Create a scale-out rule by using `az monitor autoscale rule create` with `--scale out`.
+1. Create a scale-in rule by using `az monitor autoscale rule create` with `--scale in`.
+
+## Add a recurring profile using the CLI
+
+The example below shows how to add an autoscale profile that recurs on Thursdays between 06:00 and 22:50.
+
+``` azurecli
+
+az monitor autoscale profile create --autoscale-name VMSS1-Autoscale-607 --count 2 --max-count 10 --min-count 1 --name Thursdays --recurrence week thu --resource-group rg-vmss1 --start 06:00 --end 22:50 --timezone "Pacific Standard Time"
+
+az monitor autoscale rule create -g rg-vmss1 --autoscale-name VMSS1-Autoscale-607 --scale in 1 --condition "Percentage CPU < 25 avg 5m" --profile-name Thursdays
+
+az monitor autoscale rule create -g rg-vmss1 --autoscale-name VMSS1-Autoscale-607 --scale out 2 --condition "Percentage CPU > 50 avg 5m" --profile-name Thursdays
+```
+
+> [!NOTE]
+> Adding a recurring profile modifies the JSON for your autoscale default profile.
+> The `name` element of the default profile is changed to an object in the format: `"name": "{\"name\":\"Auto created default scale condition\",\"for\":\"recurring profile\"}"` where *recurring profile* is the profile name of your recurring profile.
+> The default profile also has a recurrence clause added to it that starts at the end time specified for the new recurring profile.
+> A new default profile is created for each recurring profile.
+
+## Updating the default profile when you have recurring profiles
+
+After you add recurring profiles, your default profile is renamed. If you have multiple recurring profiles and want to update your default profile, the update must be made to each default profile corresponding to a recurring profile.
+
+For example, if you have two recurring profiles called *Wednesdays* and *Thursdays*, you need two commands to add a rule to the default profile.
+
+```azurecli
+az monitor autoscale rule create -g rg-vmss1 --autoscale-name VMSS1-Autoscale-607 --scale out 8 --condition "Percentage CPU > 52 avg 5m" --profile-name "{\"name\": \"Auto created default scale condition\", \"for\": \"Wednesdays\"}"
+
+az monitor autoscale rule create -g rg-vmss1 --autoscale-name VMSS1-Autoscale-607 --scale out 8 --condition "Percentage CPU > 52 avg 5m" --profile-name "{\"name\": \"Auto created default scale condition\", \"for\": \"Thursdays\"}"
+```
+
+## [PowerShell](#tab/powershell)
+
+PowerShell can be used to create multiple profiles in your autoscale settings.
+
+See the [PowerShell Az.Monitor Reference](https://learn.microsoft.com/powershell/module/az.monitor/#monitor) for the full set of autoscale PowerShell commands.
+
+The following steps show how to create an autoscale profile using PowerShell.
+
+1. Create rules using `New-AzAutoscaleRule`.
+1. Create profiles by using `New-AzAutoscaleProfile` with the rules from the previous step.
+1. Use `Add-AzAutoscaleSetting` to apply the profiles to your autoscale setting.
+
+## Add a recurring profile using PowerShell
+
+The example below shows how to create a default profile and a recurring autoscale profile that recurs on Wednesdays and Fridays between 07:00 and 19:00.
+The default profile uses the `CpuIn` and `CpuOut` rules. The recurring profile uses the `HTTPRuleIn` and `HTTPRuleOut` rules.
+
+```azurepowershell
+$ResourceGroup="rg-001"
+$TargetResourceId="/subscriptions/abc123456-987-f6e5-d43c-9a8d8e7f6541/resourcegroups/rg-001/providers/Microsoft.Web/serverFarms/ScaleableAppServicePlan"
+
+$ScaleSettingName="MultipleProfiles-001"
+
+$CpuOut = New-AzAutoscaleRule -MetricName "CpuPercentage" -MetricResourceId $TargetResourceId -Operator GreaterThan -MetricStatistic Average -Threshold 50 -TimeGrain 00:01:00 -ScaleActionCooldown 00:05:00 -ScaleActionDirection Increase -ScaleActionScaleType ChangeCount -ScaleActionValue "1"
+
+$CpuIn = New-AzAutoscaleRule -MetricName "CpuPercentage" -MetricResourceId $TargetResourceId -Operator LessThan -MetricStatistic Average -Threshold 30 -TimeGrain 00:01:00 -ScaleActionCooldown 00:05:00 -ScaleActionDirection Decrease -ScaleActionScaleType ChangeCount -ScaleActionValue "1"
+
+$DefaultProfile = New-AzAutoscaleProfile -DefaultCapacity "1" -MaximumCapacity "10" -MinimumCapacity "1" -Rule $CpuOut,$CpuIn -Name '{"name":"Default scale condition","for":"WednesdaysFridays"}' -RecurrenceFrequency week -ScheduleDay "Wednesday","Friday" -ScheduleHour 19 -ScheduleMinute 00 -ScheduleTimeZone "Pacific Standard Time"
+
+$HTTPRuleIn = New-AzAutoscaleRule -MetricName "HttpQueueLength" -MetricResourceId $TargetResourceId -Operator LessThan -MetricStatistic Average -Threshold 3 -TimeGrain 00:01:00 -ScaleActionCooldown 00:05:00 -ScaleActionDirection Decrease -ScaleActionScaleType ChangeCount -ScaleActionValue "1"
+
+$HTTPRuleOut = New-AzAutoscaleRule -MetricName "HttpQueueLength" -MetricResourceId $TargetResourceId -Operator GreaterThan -MetricStatistic Average -Threshold 10 -TimeGrain 00:01:00 -ScaleActionCooldown 00:05:00 -ScaleActionDirection Increase -ScaleActionScaleType ChangeCount -ScaleActionValue "1"
+
+$RecurringProfile=New-AzAutoscaleProfile -Name WednesdaysFridays -DefaultCapacity 2 -MaximumCapacity 12 -MinimumCapacity 2 -RecurrenceFrequency week -ScheduleDay "Wednesday","Friday" -ScheduleHour 7 -ScheduleMinute 00 -ScheduleTimeZone "Pacific Standard Time" -Rule $HTTPRuleOut, $HTTPRuleIn
+
+Add-AzAutoscaleSetting -Location "West Central US" -name $ScaleSettingName -ResourceGroup $ResourceGroup -TargetResourceId $TargetResourceId -AutoscaleProfile $DefaultProfile, $RecurringProfile
+```
+
+> [!NOTE]
+> Each recurring profile must have a corresponding default profile.
+> The `-Name` parameter of the default profile is an object in the format: `'{"name":"Default scale condition","for":"recurring profile"}'` where *recurring profile* is the profile name of the recurring profile.
+> The default profile also has recurrence parameters that match the recurring profile, but it starts at the time you want the recurring profile to end.
+> Create a distinct default profile for each recurring profile.
+
+## Updating the default profile when you have recurring profiles
+
+If you have multiple recurring profiles and want to change your default profile, the change must be made to each default profile corresponding to a recurring profile.
+
+For example, if you have two recurring profiles called *SundayProfile* and *ThursdayProfile*, you need two `New-AzAutoscaleProfile` commands to change to the default profile.
+
+```azurepowershell
++
+$DefaultProfileSundayProfile = New-AzAutoscaleProfile -DefaultCapacity "1" -MaximumCapacity "10" -MinimumCapacity "1" -Rule $CpuOut,$CpuIn -Name '{"name":"Default scale condition","for":"SundayProfile"}' -RecurrenceFrequency week -ScheduleDay "Sunday" -ScheduleHour 19 -ScheduleMinute 00 -ScheduleTimeZone "Pacific Standard Time"
++
+$DefaultProfileThursdayProfile = New-AzAutoscaleProfile -DefaultCapacity "1" -MaximumCapacity "10" -MinimumCapacity "1" -Rule $CpuOut,$CpuIn -Name '{"name":"Default scale condition","for":"ThursdayProfile"}' -RecurrenceFrequency week -ScheduleDay "Thursday" -ScheduleHour 19 -ScheduleMinute 00 -ScheduleTimeZone "Pacific Standard Time"
+```
+++
+## Next steps
+
+* [Autoscale CLI reference](https://learn.microsoft.com/cli/azure/monitor/autoscale?view=azure-cli-latest)
+* [ARM template resource definition](https://learn.microsoft.com/azure/templates/microsoft.insights/autoscalesettings)
+* [PowerShell Az.Monitor Reference](https://learn.microsoft.com/powershell/module/az.monitor/#monitor)
+* [Autoscale settings REST API reference](https://learn.microsoft.com/rest/api/monitor/autoscale-settings)
+* [Tutorial: Automatically scale a virtual machine scale set with an Azure template](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-template)
+* [Tutorial: Automatically scale a virtual machine scale set with the Azure CLI](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-cli)
+* [Tutorial: Automatically scale a virtual machine scale set with Azure PowerShell](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-powershell)
azure-monitor Autoscale Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-overview.md
The full list of configurable fields and descriptions is available in the [Autos
For code examples, see
-* [Advanced Autoscale configuration using Resource Manager templates for virtual machine scale sets](autoscale-virtual-machine-scale-sets.md)
-* [Autoscale REST API](/rest/api/monitor/autoscalesettings)
-
+* [Tutorial: Automatically scale a virtual machine scale set with an Azure template](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-template)
+* [Tutorial: Automatically scale a virtual machine scale set with the Azure CLI](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-cli)
+* [Tutorial: Automatically scale a virtual machine scale set with Azure PowerShell](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-powershell)
## Horizontal vs vertical scaling Autoscale scales horizontally, which is an increase or decrease of the number of resource instances. For example, in a virtual machine scale set, scaling out means adding more virtual machines. Scaling in means removing virtual machines. Horizontal scaling is flexible in a cloud situation as it allows you to run a large number of VMs to handle load.
The following services are supported by autoscale:
To learn more about autoscale, see the following resources: * [Azure Monitor autoscale common metrics](autoscale-common-metrics.md)
-* [Scale virtual machine scale sets](../../virtual-machine-scale-sets/tutorial-autoscale-powershell.md?toc=%2fazure%2fazure-monitor%2ftoc.json)
-* [Autoscale using Resource Manager templates for virtual machine scale sets](../../virtual-machine-scale-sets/tutorial-autoscale-powershell.md?toc=%2fazure%2fazure-monitor%2ftoc.json)
-* [Best practices for Azure Monitor autoscale](autoscale-best-practices.md)
* [Use autoscale actions to send email and webhook alert notifications](autoscale-webhook-email.md)
-* [Autoscale REST API](/rest/api/monitor/autoscalesettings)
-* [Troubleshooting virtual machine scale sets and autoscale](../../virtual-machine-scale-sets/virtual-machine-scale-sets-troubleshoot.md)
-* [Troubleshooting Azure Monitor autoscale](./autoscale-troubleshoot.md)
+* [Tutorial: Automatically scale a virtual machine scale set with an Azure template](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-template)
+* [Tutorial: Automatically scale a virtual machine scale set with the Azure CLI](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-cli)
+* [Tutorial: Automatically scale a virtual machine scale set with Azure PowerShell](https://learn.microsoft.com/azure/virtual-machine-scale-sets/tutorial-autoscale-powershell)
+* [Autoscale CLI reference](https://learn.microsoft.com/cli/azure/monitor/autoscale?view=azure-cli-latest)
+* [ARM template resource definition](https://learn.microsoft.com/azure/templates/microsoft.insights/autoscalesettings)
+* [PowerShell Az.Monitor Reference](https://learn.microsoft.com/powershell/module/az.monitor/#monitor)
+* [Autoscale settings REST API reference](https://learn.microsoft.com/rest/api/monitor/autoscale-settings)
azure-monitor Autoscale Virtual Machine Scale Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-virtual-machine-scale-sets.md
- Title: Advanced Autoscale using Azure Virtual Machines
-description: Uses Resource Manager and VM scale sets with multiple rules and profiles, which send email and call webhook URLs with scale actions.
----- Previously updated : 06/25/2020-----
-# Advanced autoscale configuration using Resource Manager templates for VM Scale Sets
-You can scale-in and scale-out in Virtual Machine Scale Sets based on performance metric thresholds, by a recurring schedule, or by a particular date. You can also configure email and webhook notifications for scale actions. This walkthrough shows an example of configuring all these objects using a Resource Manager template on a VM Scale Set.
-
-> [!NOTE]
-> While this walkthrough explains the steps for VM Scale Sets, the same information applies to autoscaling [Cloud Services](https://azure.microsoft.com/services/cloud-services/), [App Service - Web Apps](https://azure.microsoft.com/services/app-service/web/), and [API Management services](../../api-management/api-management-key-concepts.md)
-> For a simple scale in/out setting on a VM Scale Set based on a simple performance metric such as CPU, refer to the [Linux](../../virtual-machine-scale-sets/tutorial-autoscale-cli.md) and [Windows](../../virtual-machine-scale-sets/tutorial-autoscale-powershell.md) documents
->
->
-
-## Walkthrough
-In this walkthrough, we use [Azure Resource Explorer](https://resources.azure.com/) to configure and update the autoscale setting for a scale set. Azure Resource Explorer is an easy way to manage Azure resources via Resource Manager templates. If you are new to Azure Resource Explorer tool, read [this introduction](https://azure.microsoft.com/blog/azure-resource-explorer-a-new-tool-to-discover-the-azure-api/).
-
-1. Deploy a new scale set with a basic autoscale setting. This article uses the one from the Azure QuickStart Gallery, which has a Windows scale set with a basic autoscale template. Linux scale sets work the same way.
-2. After the scale set is created, navigate to the scale set resource from Azure Resource Explorer. You see the following under Microsoft.Insights node.
-
- ![Azure Explorer](media/autoscale-virtual-machine-scale-sets/azure_explorer_navigate.png)
-
- The template execution has created a default autoscale setting with the name **'autoscalewad'**. On the right-hand side, you can view the full definition of this autoscale setting. In this case, the default autoscale setting comes with a CPU% based scale-out and scale-in rule.
-
-3. You can now add more profiles and rules based on the schedule or specific requirements. We create an autoscale setting with three profiles. To understand profiles and rules in autoscale, review [Autoscale Best Practices](autoscale-best-practices.md).
-
- | Profiles & Rules | Description |
- | | |
- | **Profile** |**Performance/metric based** |
- | Rule |Service Bus Queue Message Count > x |
- | Rule |Service Bus Queue Message Count < y |
- | Rule |CPU% > n |
- | Rule |CPU% < p |
- | **Profile** |**Weekday morning hours (no rules)** |
- | **Profile** |**Product Launch day (no rules)** |
-
-4. Here is a hypothetical scaling scenario that we use for this walk-through.
-
- * **Load based** - I'd like to scale out or in based on the load on my application hosted on my scale set.*
- * **Message Queue size** - I use a Service Bus Queue for the incoming messages to my application. I use the queue's message count and CPU% and configure a default profile to trigger a scale action if either of message count or CPU hits the threshold.\*
- * **Time of week and day** - I want a weekly recurring 'time of the day' based profile called 'Weekday Morning Hours'. Based on historical data, I know it is better to have certain number of VM instances to handle my application's load during this time.\*
- * **Special Dates** - I added a 'Product Launch Day' profile. I plan ahead for specific dates so my application is ready to handle the load due marketing announcements and when we put a new product in the application.\*
- * *The last two profiles can also have other performance metric based rules within them. In this case, I decided not to have one and instead to rely on the default performance metric based rules. Rules are optional for the recurring and date-based profiles.*
-
- Autoscale engine's prioritization of the profiles and rules is also captured in the [autoscaling best practices](autoscale-best-practices.md) article.
- For a list of common metrics for autoscale, refer [Common metrics for Autoscale](autoscale-common-metrics.md)
-
-5. Make sure you are on the **Read/Write** mode in Resource Explorer
-
- ![Autoscalewad, default autoscale setting](media/autoscale-virtual-machine-scale-sets/autoscalewad.png)
-
-6. Click Edit. **Replace** the 'profiles' element in autoscale setting with the following configuration:
-
- ![Screenshot shows the profiles element.](media/autoscale-virtual-machine-scale-sets/profiles.png)
-
- ```
- {
- "name": "Perf_Based_Scale",
- "capacity": {
- "minimum": "2",
- "maximum": "12",
- "default": "2"
- },
- "rules": [
- {
- "metricTrigger": {
- "metricName": "MessageCount",
- "metricNamespace": "",
- "metricResourceUri": "/subscriptions/s1/resourceGroups/rg1/providers/Microsoft.ServiceBus/namespaces/mySB/queues/myqueue",
- "timeGrain": "PT5M",
- "statistic": "Average",
- "timeWindow": "PT5M",
- "timeAggregation": "Average",
- "operator": "GreaterThan",
- "threshold": 10
- },
- "scaleAction": {
- "direction": "Increase",
- "type": "ChangeCount",
- "value": "1",
- "cooldown": "PT5M"
- }
- },
- {
- "metricTrigger": {
- "metricName": "MessageCount",
- "metricNamespace": "",
- "metricResourceUri": "/subscriptions/s1/resourceGroups/rg1/providers/Microsoft.ServiceBus/namespaces/mySB/queues/myqueue",
- "timeGrain": "PT5M",
- "statistic": "Average",
- "timeWindow": "PT5M",
- "timeAggregation": "Average",
- "operator": "LessThan",
- "threshold": 3
- },
- "scaleAction": {
- "direction": "Decrease",
- "type": "ChangeCount",
- "value": "1",
- "cooldown": "PT5M"
- }
- },
- {
- "metricTrigger": {
- "metricName": "Percentage CPU",
- "metricNamespace": "",
- "metricResourceUri": "/subscriptions/s1/resourceGroups/rg1/providers/Microsoft.Compute/virtualMachineScaleSets/<this_vmss_name>",
- "timeGrain": "PT5M",
- "statistic": "Average",
- "timeWindow": "PT30M",
- "timeAggregation": "Average",
- "operator": "GreaterThan",
- "threshold": 85
- },
- "scaleAction": {
- "direction": "Increase",
- "type": "ChangeCount",
- "value": "1",
- "cooldown": "PT5M"
- }
- },
- {
- "metricTrigger": {
- "metricName": "Percentage CPU",
- "metricNamespace": "",
- "metricResourceUri": "/subscriptions/s1/resourceGroups/rg1/providers/Microsoft.Compute/virtualMachineScaleSets/<this_vmss_name>",
- "timeGrain": "PT5M",
- "statistic": "Average",
- "timeWindow": "PT30M",
- "timeAggregation": "Average",
- "operator": "LessThan",
- "threshold": 60
- },
- "scaleAction": {
- "direction": "Decrease",
- "type": "ChangeCount",
- "value": "1",
- "cooldown": "PT5M"
- }
- }
- ]
- },
- {
- "name": "Weekday_Morning_Hours_Scale",
- "capacity": {
- "minimum": "4",
- "maximum": "12",
- "default": "4"
- },
- "rules": [],
- "recurrence": {
- "frequency": "Week",
- "schedule": {
- "timeZone": "Pacific Standard Time",
- "days": [
- "Monday",
- "Tuesday",
- "Wednesday",
- "Thursday",
- "Friday"
- ],
- "hours": [
- 6
- ],
- "minutes": [
- 0
- ]
- }
- }
- },
- {
- "name": "Product_Launch_Day",
- "capacity": {
- "minimum": "6",
- "maximum": "20",
- "default": "6"
- },
- "rules": [],
- "fixedDate": {
- "timeZone": "Pacific Standard Time",
- "start": "2016-06-20T00:06:00Z",
- "end": "2016-06-21T23:59:00Z"
- }
- }
- ```
- For supported fields and their values, see [Autoscale REST API documentation](/rest/api/monitor/autoscalesettings). Now your autoscale setting contains the three profiles explained previously.
-
-7. Finally, look at the Autoscale **notification** section. Autoscale notifications allow you to do three things when a scale-out or in action is successfully triggered.
- - Notify the admin and co-admins of your subscription
- - Email a set of users
- - Trigger a webhook call. When fired, this webhook sends metadata about the autoscaling condition and the scale set resource. To learn more about the payload of autoscale webhook, see [Configure Webhook & Email Notifications for Autoscale](autoscale-webhook-email.md).
-
- Add the following to the Autoscale setting replacing your **notification** element whose value is null
-
- ```
- "notifications": [
- {
- "operation": "Scale",
- "email": {
- "sendToSubscriptionAdministrator": true,
- "sendToSubscriptionCoAdministrators": false,
- "customEmails": [
- "user1@mycompany.com",
- "user2@mycompany.com"
- ]
- },
- "webhooks": [
- {
- "serviceUri": "https://foo.webhook.example.com?token=abcd1234",
- "properties": {
- "optional_key1": "optional_value1",
- "optional_key2": "optional_value2"
- }
- }
- ]
- }
- ]
-
- ```
-
- Hit **Put** button in Resource Explorer to update the autoscale setting.
-
-You have updated an autoscale setting on a VM Scale set to include multiple scale profiles and scale notifications.
-
-## Next Steps
-Use these links to learn more about autoscaling.
-
-[TroubleShoot Autoscale with Virtual Machine Scale Sets](../../virtual-machine-scale-sets/virtual-machine-scale-sets-troubleshoot.md)
-
-[Common Metrics for Autoscale](autoscale-common-metrics.md)
-
-[Best Practices for Azure Autoscale](autoscale-best-practices.md)
-
-[Manage Autoscale using PowerShell](../powershell-samples.md#create-and-manage-autoscale-settings)
-
-[Manage Autoscale using CLI](../cli-samples.md#autoscale)
-
-[Configure Webhook & Email Notifications for Autoscale](autoscale-webhook-email.md)
-
-[Microsoft.Insights/autoscalesettings](/azure/templates/microsoft.insights/autoscalesettings) template reference
azure-monitor Container Insights Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-cost.md
The following list is the eight metrics per container collected:
The following list is the cluster inventory data collected by default: -- KubePodInventory – 1 per minute per container
+- KubePodInventory – 1 per pod per minute
- KubeNodeInventory – 1 per node per minute - KubeServices – 1 per service per minute - ContainerInventory – 1 per container per minute
azure-monitor Data Collection Transformations Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-transformations-structure.md
The following [Bitwise operators](/azure/data-explorer/kusto/query/binoperators)
- [isempty](/azure/data-explorer/kusto/query/isemptyfunction) - [isnotempty](/azure/data-explorer/kusto/query/isnotemptyfunction) - [parse_json](/azure/data-explorer/kusto/query/parsejsonfunction)
+- [replace](https://github.com/microsoft/Kusto-Query-Language/blob/master/doc/replacefunction.md)
- [split](/azure/data-explorer/kusto/query/splitfunction) - [strcat](/azure/data-explorer/kusto/query/strcatfunction) - [strcat_delim](/azure/data-explorer/kusto/query/strcat-delimfunction)
The following [Bitwise operators](/azure/data-explorer/kusto/query/binoperators)
- [isnotnull](/azure/data-explorer/kusto/query/isnotnullfunction) - [isnull](/azure/data-explorer/kusto/query/isnullfunction)
-### Identifier quoting
-Use [Identifier quoting](/azure/data-explorer/kusto/query/schema-entities/entity-names?q=identifier#identifier-quoting) as required.
+#### Special functions
+
+##### parse_cef_dictionary
+
+Given a string containing a CEF message, `parse_cef_dictionary` parses the Extension property of the message into a dynamic key/value object. Semicolon is a reserved character that should be replaced prior to passing the raw message into the method, as shown in the example below.
+
+```kusto
+| extend cefMessage=iff(cefMessage contains_cs ";", replace(";", " ", cefMessage), cefMessage)
+| extend parsedCefDictionaryMessage =parse_cef_dictionary(cefMessage)
+| extend parsecefDictionaryExtension = parsedCefDictionaryMessage["Extension"]
+| project TimeGenerated, cefMessage, parsecefDictionaryExtension
+```
+
+### Identifier quoting
+Use [Identifier quoting](/azure/data-explorer/kusto/query/schema-entities/entity-names?q=identifier#identifier-quoting) as required.
## Next steps
azure-monitor Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostic-settings.md
The following table provides unique requirements for each destination including
| Destination | Requirements | |:|:| | Log Analytics workspace | The workspace doesn't need to be in the same region as the resource being monitored.|
-| Storage account | Don't use an existing storage account that has other, non-monitoring data stored in it so that you can better control access to the data. If you're archiving the activity log and resource logs together, you might choose to use the same storage account to keep all monitoring data in a central location.<br><br>To send the data to immutable storage, set the immutable policy for the storage account as described in [Set and manage immutability policies for Azure Blob Storage](../../storage/blobs/immutable-policy-configure-version-scope.md). You must follow all steps in this linked article including enabling protected append blobs writes.<br><br>The storage account needs to be in the same region as the resource being monitored if the resource is regional.|
+| Storage account | To better control access to the data, we recommend that you don't use an existing storage account that has other, non-monitoring data stored in it. If you're archiving the activity log and resource logs together, you might choose to use the same storage account to keep all monitoring data in a central location.<br><br>To send the data to immutable storage, set the immutable policy for the storage account as described in [Set and manage immutability policies for Azure Blob Storage](../../storage/blobs/immutable-policy-configure-version-scope.md). You must follow all steps in this linked article including enabling protected append blob writes.<br><br>The storage account needs to be in the same region as the resource being monitored if the resource is regional.|
| Event Hubs | The shared access policy for the namespace defines the permissions that the streaming mechanism has. Streaming to Event Hubs requires Manage, Send, and Listen permissions. To update the diagnostic setting to include streaming, you must have the ListKey permission on that Event Hubs authorization rule.<br><br>The event hub namespace needs to be in the same region as the resource being monitored if the resource is regional. <br><br> Diagnostic settings can't access Event Hubs resources when virtual networks are enabled. You must enable **Allow trusted Microsoft services** to bypass this firewall setting in Event Hubs so that the Azure Monitor diagnostic settings service is granted access to your Event Hubs resources.| | Partner integrations | The solutions vary by partner. Check the [Azure Monitor partner integrations documentation](../../partner-solutions/overview.md) for details.
azure-monitor Resource Manager Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-manager-diagnostic-settings.md
resource diagnosticSetting 'Microsoft.Insights/diagnosticSettings@2021-05-01-pre
} resource blob 'Microsoft.Storage/storageAccounts/blobServices@2021-09-01' existing = {
- name:storageAccountName
+ name:'default'
+ parent:storageAccount
} resource blobSetting 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = if (hasblob) {
resource blobSetting 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview'
} resource table 'Microsoft.Storage/storageAccounts/tableServices@2021-09-01' existing = {
- name:storageAccountName
+ name:'default'
+ parent:storageAccount
} resource tableSetting 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = if (hastable) {
resource tableSetting 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview'
} resource file 'Microsoft.Storage/storageAccounts/fileServices@2021-09-01' existing = {
- name:storageAccountName
+ name:'default'
+ parent:storageAccount
} resource fileSetting 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = if (hasfile) {
resource fileSetting 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview'
} resource queue 'Microsoft.Storage/storageAccounts/queueServices@2021-09-01' existing = {
- name:storageAccountName
+ name:'default'
+ parent:storageAccount
}
azure-netapp-files Configure Application Volume Group Sap Hana Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-application-volume-group-sap-hana-api.md
+
+ Title: Configure application volume groups for SAP HANA REST API | Microsoft Docs
+description: Setting up your application volume groups for the SAP HANA API requires special configurations.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ Last updated : 08/31/2022++
+# Configure application volume groups for the SAP HANA REST API
+
+Application volume group (AVG) enables you to deploy all volumes for a single HANA host in one atomic step. The Azure portal and the Azure Resource Manager template implement pre-checks and recommendations for deployment in areas including throughput and volume naming conventions. Those checks and recommendations aren't available when you use the REST API.
+
+Without these checks, it's important to understand the requirements for running HANA on Azure NetApp Files and the basic architecture and workflows on which application volume groups are built.
+
+SAP HANA can be installed in a single-host (scale-up) or in a multiple-host (scale-out) configuration. The volumes required for each of the HANA nodes differ for the first HANA node (single-host) and for subsequent HANA hosts (multiple-host). Since an application volume group creates the volumes for a single HANA host, the number and type of volumes created differ for the first HANA host and all subsequent HANA hosts in a multiple-host setup.
+
+Application volume groups allow you to define volume size and throughput according to your specific requirements. To allow this customization, you must use manual QoS capacity pools only. According to the SAP HANA certification, only a subset of volume features can be used for the different volumes. Because enterprise applications such as SAP HANA require application-consistent data protection, it's _not_ recommended to configure automated snapshot policies for any of the volumes. Instead, consider using specific data protection applications such as [AzAcSnap](azacsnap-introduction.md) or Commvault.
+
+## Rules and restrictions
+
+Using application volume groups requires understanding the rules and restrictions:
+* A single volume group is used to create the volumes for a single HANA host only.
+* In a HANA multiple-host setup (scale-out), you should start with the volume group for the first HANA host and continue host by host.
+* HANA requires different volume types for the first HANA host and for the additional hosts you add in a multiple-host setup.
+* Available volume types are: data, log, shared, log-backup, and data-backup.
+* The first node can have all five different volumes (one for each type).
+ * data, log and shared volumes must be provided
+ * log-backup and data-backup are optional, as you may choose to use a central share to store the backups or even use `backint` for the log-backup
+* All additional hosts in a multiple-host setup may only add one data and one log volume each.
+* For data, log, and shared volumes, the SAP HANA certification requires the NFSv4.1 protocol.
+* Log-backup and data-backup volumes, if created optionally with the volume group of the first HANA host, may use the NFSv4.1 or NFSv3 protocol.
+* Each volume must have at least one export policy defined. To install SAP, root access must be enabled.
+* Neither Kerberos nor LDAP enablement is supported.
+* You should follow the naming convention outlined in the following table.
+
+The following list describes all the possible volume types for application volume groups for SAP HANA.
+
+| Volume type | Creation limits | Supported Protocol | Recommended naming | Data protection recommendation |
+| --- | --- | --- | --- | --- |
+| **SAP HANA data volume** | One data volume must be created for every HANA host. | NFSv4.1 (neither LDAP nor Kerberos is supported) | `<SID>-data-mnt<00001>`: <ol><li> `<SID>` is the SAP system ID </li><li> `<00001>` refers to the host number. For example, in a single-host configuration or for the first host in a multi-host configuration, the host number is 00001. The next host is 00002. </li></ol> | No initial data protection recommendation |
+| **SAP HANA log volume** | One log volume must be created for every HANA host. | NFSv4.1 (neither LDAP nor Kerberos is supported) | `<SID>-log-mnt<00001>`: <ol><li> `<SID>` is the SAP system ID </li><li> `<00001>` refers to the host number. For example, in a single-host configuration or for the first host in a multi-host configuration, the host number is 00001. The next host is 00002. </li></ol> | No initial data protection recommendation |
+| **SAP HANA shared volume** | One shared volume must be created for the first HANA host of a multiple-host setup, or for a single-host HANA installation. | NFSv4.1 (neither LDAP nor Kerberos is supported) | `<SID>-shared` where `<SID>` is the SAP system ID | No initial data protection recommendation |
+| **SAP HANA data backup volume** | An optional volume created only for the first HANA node. | NFSv4.1 (neither LDAP nor Kerberos is supported) | `<SID>-data-backup` where `<SID>` is the SAP system ID | No initial data protection recommendation |
+| **SAP HANA log backup volume** | An optional volume created only for the first HANA node. | NFSv4.1 (neither LDAP nor Kerberos is supported) | `<SID>-log-backup` where `<SID>` is the SAP system ID | No initial data protection recommendation |
+
+## Prepare your environment
+
+1. **Networking:** You need to decide on the networking architecture. To use Azure NetApp Files, a virtual network (VNet) needs to be created, and within that VNet a delegated subnet where the Azure NetApp Files storage endpoints (IPs) will be placed. To ensure that the size of this subnet is large enough, see [Considerations about delegating a subnet to Azure NetApp Files](azure-netapp-files-delegate-subnet.md#considerations). A CLI sketch of these preparation steps follows this list.
+ 1. Create a VNet.
+ 2. Create a virtual machine (VM) subnet and delegated subnet for ANF.
+1. **Storage Account and Capacity Pool:** A storage account is the entry point to consume Azure NetApp Files. At least one storage account needs to be created. Within a storage account, a capacity pool is the logical unit to create volumes. Application volume groups require a capacity pool with a manual QoS. It should be created with a size and service level that meets your HANA requirements.
+ >[!NOTE]
+ > A capacity pool can be resized at any time. For more information about changing a capacity pool, refer to [Manage a manual QoS capacity pool](manage-manual-qos-capacity-pool.md).
+ 1. Create a NetApp storage account.
+ 2. Create a manual QoS capacity pool.
+1. **Create AvSet and proximity placement group (PPG):** For production landscapes, you should create an AvSet that is manually pinned to a data center where Azure NetApp Files resources are available in proximity. The AvSet pinning ensures that VMs will not be moved on restart. The proximity placement group (PPG) needs to be assigned to the AvSet. With the help of application volume groups, the PPG can find the closest Azure NetApp Files hardware. For more information, see [Best practices about proximity placement groups](application-volume-group-considerations.md#best-practices-about-proximity-placement-groups).
+ 1. Create AvSet.
+ 2. Create PPG.
+ 3. Assign PPG to AvSet.
+1. **Manual step - Request AvSet pinning**: AvSet pinning is required for long-term SAP HANA systems. The Microsoft capacity planning team ensures that the required VMs for SAP HANA and the Azure NetApp Files resources are available in proximity to each other, and that the VMs will not move on restart.
+ * Request pinning using [this form](https://aka.ms/HANAPINNING).
+1. **Create and start HANA DB VM:** Before you can create volumes using application volume groups, the PPG must be anchored. At least one VM must be created using the pinned AvSet. Once this VM is started, the PPG can be used to detect where the VM is running.
+ 1. Create and start the VM using the AvSet.
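+
+The following is a minimal Azure CLI sketch of these preparation steps. All names (resource group, VNet, NetApp account, capacity pool, PPG, and AvSet) are example values, and the address ranges and pool size are assumptions you should adapt:
+
+```bash
+# 1. VNet with a VM subnet and a subnet delegated to Microsoft.NetApp/volumes
+az network vnet create -g myRG -n myVnet -l westus \
+  --address-prefixes 10.0.0.0/16 --subnet-name vmSubnet --subnet-prefixes 10.0.1.0/24
+az network vnet subnet create -g myRG --vnet-name myVnet -n anfDelegatedSubnet \
+  --address-prefixes 10.0.2.0/26 --delegations "Microsoft.NetApp/volumes"
+
+# 2. NetApp account and a manual QoS capacity pool (size in TiB)
+az netappfiles account create -g myRG --account-name myANFAccount -l westus
+az netappfiles pool create -g myRG --account-name myANFAccount --pool-name SH9_Pool \
+  -l westus --size 10 --service-level Premium --qos-type Manual
+
+# 3. Proximity placement group and availability set assigned to it
+az ppg create -g myRG -n SH9_PPG -l westus
+az vm availability-set create -g myRG -n SH9_AvSet -l westus --ppg SH9_PPG
+```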
+
+## Understand application volume group REST API parameters
+
+The following tables describe the generic application volume group creation using the REST API, detailing selected parameters and properties required for SAP HANA application volume group creation. Constraints and typical values for SAP HANA AVG creation are also specified where applicable.
+
+### Application volume group create
+
+In a create request, use the following URI format:
+```rest
+/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.NetApp/netAppAccounts/<accountName>/volumeGroups/<volumeGroupName>?api-version=<apiVersion>
+```
+
+| URI parameter | Description | Restrictions for SAP HANA |
+| - | -- | -- |
+| `subscriptionId` | Subscription ID | None |
+| `resourceGroupName` | Resource group name | None |
+| `accountName` | NetApp account name | None |
+| `volumeGroupName` | Volume group name | None. The recommended format is `<SID>-<Name>-<ID>`: <ol><li> `SID`: HANA system ID </li><li>`Name`: A string of your choosing</li><li>`ID`: Five-digit HANA host ID</li></ol> Example: `SH9-Testing-00003` |
+| `apiVersion` | API version | Must be `2022-03-01` or later |
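+
+For instance, substituting the example placeholder values used later in this article, a fully expanded create-request URI looks like the following sketch:
+
+```bash
+# Hypothetical example URI, built from the example placeholder values in this article
+URI="https://management.azure.com/subscriptions/11111111-2222-3333-4444-555555555555/resourceGroups/TestResourceGroup/providers/Microsoft.NetApp/netAppAccounts/TestAccount/volumeGroups/SH9-Test-00001?api-version=2022-03-01"
+echo "$URI"
+```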
+
+### Request body
+
+The request body consists of the _outer_ parameters, the group properties, and an array of volumes to be created, each with their individual outer parameters and volume properties.
+
+The following table describes the request body parameters and group level properties required to create a SAP HANA application volume group.
+
+| URI parameter | Description | Restrictions for SAP HANA |
+| - | -- | -- |
+| `Location` | Region in which to create the application volume group | None |
+| **GROUP PROPERTIES** | | |
+| `groupDescription` | Description for the group | Free-form string |
+| `applicationType` | Application type | Must be "SAP-HANA" |
+| `applicationIdentifier` | Application-specific identifier string, following application naming rules | The SAP system ID, which should follow the naming rules mentioned earlier, for example `SH9` |
+| `deploymentSpecId` | Deployment specification identifier defining the rules to deploy the specific application volume group type | Must be: `20542149-bfca-5618-1879-9863dc6767f1` |
+| `volumes` | Array of volumes to be created (see the next table for volume-granular details) | Volume count depends upon host configuration: <ul><li>Single-host (3-5 volumes). **Required**: _data_, _log_, and _shared_. **Optional**: _data-backup_, _log-backup_.</li><li>Multiple-host (two volumes). **Required**: _data_ and _log_.</li></ul> |
+
+This table describes the request body parameters and volume properties for creating a volume in a SAP HANA application volume group.
+
+| Volume-level request parameter | Description | Restrictions for SAP HANA |
+| - | -- | -- |
+| `name` | Volume name | None. Examples of recommended volume names: <ul><li> `SH9-data-mnt00001`: data volume for a single host.</li><li> `SH9-log-backup`: log-backup volume for a single host.</li><li> `HSR-SH9-shared`: shared volume for an HSR secondary.</li><li> `DR-SH9-data-backup`: data-backup volume for a CRR destination.</li><li> `DR2-SH9-data-backup`: data-backup volume for a CRR destination of an HSR secondary.</li></ul> |
+| `tags` | Volume tags | None, however, it may be helpful to add a tag to the HSR partner volume to identify the corresponding HSR partner volume. The Azure portal suggests the following tag for the HSR Secondary volumes: <ul><li> **Name**: `HSRPartnerStorageResourceId` </li><li> **Value:** `<Partner volume Id>` </li></ul> |
+| **Volume properties** | **Description** | **SAP HANA Value Restrictions** |
+| `creationToken` | Export path name, typically same as name above. | None. Example: `SH9-data-mnt00001` |
+| `throughputMibps` | QoS throughput | Must be between 1 MiB/s and 4,500 MiB/s. You should set the throughput based on the volume type. |
+| `usageThreshold` | Size of the volume in bytes. This must be in the 100 GiB to 100 TiB range. For instance, 100 GiB = 107374182400 bytes. | None. You should set the volume size depending on the volume type. |
+| `exportPolicyRule` | Volume export policy rule | At least one export policy rule must be specified for SAP HANA. Only the following rules values can be modified for SAP HANA, the rest _must_ have their default values: <ul><li>`unixReadOnly`: should be false</li><li>`unixReadWrite`: should be true</li><li>`allowedClients`: specify allowed clients. Use `0.0.0.0/0` for no restrictions.</li><li>`hasRootAccess`: must be true to install SAP.</li><li>`chownMode`: Specify `chown` mode.</li><li>`nfsv41`: true for data, log, and shared volumes, optionally true for data backup and log backup volumes</li><li>`nfsv3`: optionally true for data backup and log backup volumes</li><ul> All other rule values _must_ be left defaulted. |
+| `volumeSpecName` | Specifies the type of volume for the application volume group being created | SAP HANA volumes must have a value that is one of the following: <ul><li>"data"</li><li>"log"</li><li>"shared"</li><li>"data-backup"</li><li>"log-backup"</li></ul> |
+| `proximityPlacementGroup` | Resource ID of the proximity placement group (PPG) for proper placement of the volume. | <ul><li>The "data", "log", and "shared" volumes must each have a PPG specified, preferably a common PPG.</li><li>A PPG must be specified for the "data-backup" and "log-backup" volumes, but it will be ignored during placement.</li></ul> |
+| `subnetId` | Delegated subnet ID for Azure NetApp Files. | In a normal case where there are sufficient resources available, the number of IP addresses required in the subnet depends on the order in which application volume groups are created in the subscription: <ol><li>First application volume group created: usually requires 3-4 IP addresses, but can require up to 5.</li><li>Second application volume group created: normally requires two IP addresses.</li><li>Third and subsequent application volume groups created: normally, no more IP addresses are required.</li></ol> |
+| `capacityPoolResourceId` | ID of the capacity pool | The capacity pool must be of type manual QoS. Generally, all SAP volumes are placed in a common capacity pool, however this is not a requirement. |
+| `protocolTypes` | Protocol to use | This should be either NFSv3 or NFSv4.1 and should match the protocol specified in the Export Policy Rule described earlier in this table. |
+
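+As a quick check of the `usageThreshold` byte values used in the examples below, the sizes can be computed directly in bash:
+
+```bash
+# usageThreshold is specified in bytes
+echo $((100 * 1024**3))    # 100 GiB = 107374182400
+echo $((512 * 1024**3))    # 512 GiB = 549755813888
+echo $((1024 * 1024**3))   # 1 TiB   = 1099511627776
+```
+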
+## Example API request content: application volume group creation
+
+The examples in this section illustrate the values passed in the volume group creation request for various SAP HANA configurations. The examples demonstrate best practices for naming, sizing, and values as described in the tables.
+
+In the examples below, selected placeholders are specified and should be replaced by the desired values. These include:
+1. `<SubscriptionId>`: Subscription ID. Example: `11111111-2222-3333-4444-555555555555`
+2. `<ResourceGroup>`: Resource group. Example: `TestResourceGroup`
+3. `<NtapAccount>`: NetApp account, for example: `TestAccount`
+4. `<VolumeGroupName>`: Volume group name, for example: `SH9-Test-00001`
+5. `<SubnetId>`: Subnet resource ID, for example: `/subscriptions/11111111-2222-3333-4444-555555555555/resourceGroups/myRP/providers/Microsoft.Network/virtualNetworks/testvnet3/subnets/SH9_Subnet`
+6. `<CapacityPoolResourceId>`: Capacity pool resource ID, for example: `/subscriptions/11111111-2222-3333-4444-555555555555/resourceGroups/myRG/providers/Microsoft.NetApp/netAppAccounts/account1/capacityPools/SH9_Pool`
+7. `<ProximityPlacementGroupResourceId>`: Proximity placement group, for example: `/subscriptions/11111111-2222-3333-4444-555555555555/resourceGroups/test/providers/Microsoft.Compute/proximityPlacementGroups/SH9_PPG`
+8. `<PartnerVolumeId>`: Partner volume ID (for HSR volumes).
+9. `<ExampleJson>`: JSON Request from one of the examples in the API request tables below.
+
+>[!NOTE]
+> The following samples use jq, a tool that helps format the JSON output in a user-friendly way. If you don't have or use jq, you should omit the `| jq xxx` snippets.
+
+## Create SAP HANA volume groups using curl
+
+You can create the SAP HANA volume groups for the following examples by using a sample shell script that calls the API with curl:
+
+1. Extract the subscription ID for use in the following steps:
+ ```bash
+ subId=$(az account list | jq ".[] | select (.name == \"Pay-As-You-Go\") | .id" -r)
+ echo "Subscription ID: $subId"
+ ```
+1. Create the access token:
+ ```bash
+ response=$(az account get-access-token)
+ token=$(echo $response | jq ".accessToken" -r)
+ echo "Token: $token"
+ ```
+1. Call the REST API using curl
+ ```bash
+ echo ""
+ curl -X PUT -H "Authorization: Bearer $token" -H "Content-Type:application/json" -H "Accept:application/json" -d @<ExampleJson> https://management.azure.com/subscriptions/$subId/resourceGroups/<ResourceGroup>/providers/Microsoft.NetApp/netAppAccounts/<NtapAccount>/volumeGroups/<VolumeGroupName>?api-version=2022-03-01 | jq .
+ ```
+
+### Example 1: Deploy volumes for the first HANA host for a single-host or multi-host configuration
+To create the five volumes (data, log, shared, data-backup, log-backup) for a single-node SAP HANA system with SID `SH9` as in the example, use the following API request as shown in the JSON example.
+
+>[!NOTE]
+>You need to replace the placeholders and adapt the parameters to meet your requirements.
+
+#### Example single-host SAP HANA application volume group creation Request
+
+This example creates data, log, shared, data-backup, and log-backup volumes, demonstrating best practices for naming, sizing, and throughput. These volumes serve as the primary volumes if you're configuring an HSR pair.
+
+1. Save the JSON template as `sh9.json`:
+ ```json
+ {
+ "location": "westus",
+ "properties": {
+ "groupMetaData": {
+ "groupDescription": "Test group for SH9",
+ "applicationType": "SAP-HANA",
+ "applicationIdentifier": "SH9",
+ "deploymentSpecId": "20542149-bfca-5618-1879-9863dc6767f1"
+ },
+ "volumes": [
+ {
+ "name": "SH9-data-mnt00001",
+ "properties": {
+ "creationToken": "SH9-data-mnt00001",
+ "serviceLevel": "premium",
+ "throughputMibps": 400,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": <SubnetId>,
+ "usageThreshold": 107374182400,
+ "volumeSpecName": "data",
+ "capacityPoolResourceId": <CapacityPoolResourceId>,
+ "proximityPlacementGroup": <ProximityPlacementGroupResourceId>
+ }
+ },
+ {
+ "name": "SH9-log-mnt00001",
+ "properties": {
+ "creationToken": "SH9-log-mnt00001",
+ "serviceLevel": "premium",
+ "throughputMibps": 250,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": <SubnetId>,
+ "usageThreshold": 107374182400,
+ "volumeSpecName": "log",
+ "capacityPoolResourceId": <CapacityPoolResourceId>,
+ "proximityPlacementGroup": <ProximityPlacementGroupResourceId>
+ }
+ },
+ {
+ "name": "SH9-shared",
+ "properties": {
+ "creationToken": "SH9-shared",
+ "serviceLevel": "premium",
+ "throughputMibps": 64,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": <SubnetId>,
+ "usageThreshold": 1099511627776,
+ "volumeSpecName": "shared",
+ "capacityPoolResourceId": <CapacityPoolResourceId>,
+ "proximityPlacementGroup": <ProximityPlacementGroupResourceId>
+ }
+ },
+ {
+ "name": "SH9-data-backup",
+ "properties": {
+ "creationToken": "SH9-data-backup",
+ "serviceLevel": "premium",
+ "throughputMibps": 128,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": <SubnetId>,
+ "usageThreshold": 214748364800,
+ "volumeSpecName": "data-backup",
+ "capacityPoolResourceId": <CapacityPoolResourceId>,
+ "proximityPlacementGroup": <ProximityPlacementGroupResourceId>
+ }
+ },
+ {
+ "name": "SH9-log-backup",
+ "properties": {
+ "creationToken": "SH9-log-backup",
+ "serviceLevel": "premium",
+ "throughputMibps": 250,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": <SubnetId>,
+ "usageThreshold": 549755813888,
+ "volumeSpecName": "log-backup",
+ "capacityPoolResourceId": <CapacityPoolResourceId>,
+ "proximityPlacementGroup": <ProximityPlacementGroupResourceId>
+ }
+ }
+ ]
+ }
+ }
+ ```
+1. Extract the subscription ID:
+ ```bash
+ subId=$(az account list | jq ".[] | select (.name == \"Pay-As-You-Go\") | .id" -r)
+ echo "Subscription ID: $subId"
+ ```
+1. Create the access token:
+ ```bash
+ response=$(az account get-access-token)
+ token=$(echo $response | jq ".accessToken" -r)
+ echo "Token: $token"
+ ```
+1. Call the REST API using curl:
+ ```bash
+ echo ""
+ curl -X PUT -H "Authorization: Bearer $token" -H "Content-Type:application/json" -H "Accept:application/json" -d @sh9.json https://management.azure.com/subscriptions/$subId/resourceGroups/rg-westus/providers/Microsoft.NetApp/netAppAccounts/ANF-WestUS-test/volumeGroups/SAP-HANA-SH9-00001?api-version=2022-03-01 | jq .
+ ```
+1. Sample result:
+```json
+{
+ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-westus/providers/Microsoft.NetApp/netAppAccounts/ANF-WestUS-test/volumeGroups/SAP-HANA-SH9-00001",
+ "name": "ANF-WestUS-test/SAP-HANA-SH9-00001",
+ "type": "Microsoft.NetApp/netAppAccounts/volumeGroups",
+ "location": "westus",
+ "properties": {
+ "provisioningState": "Creating",
+ "groupMetaData": {
+ "groupDescription": "Test group for SH9",
+ "applicationType": "SAP-HANA",
+ "applicationIdentifier": "SH9",
+ "deploymentSpecId": "20542149-bfca-5618-1879-9863dc6767f1",
+ "volumesCount": 0
+ },
+ "volumes": [
+ {
+ "name": "SH9-data-mnt00001",
+ "properties": {
+ "serviceLevel": "premium",
+ "creationToken": "SH9-data-mnt00001",
+ "usageThreshold": 107374182400,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-westus/providers/Microsoft.Network/virtualNetworks/rg-westus-vnet/subnets/default",
+ "throughputMibps": 1,
+ "capacityPoolResourceId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-westus/providers/Microsoft.NetApp/netAppAccounts/ANF-WestUS-test/capacityPools/avg",
+ "proximityPlacementGroup": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-westus/providers/Microsoft.Compute/proximityPlacementGroups/ppg-westus-test",
+ "volumeSpecName": "data",
+ "maximumNumberOfFiles": 100000000
+ }
+ },
+ {
+ "name": "SH9-log-mnt00001",
+ "properties": {
+ "serviceLevel": "premium",
+ "creationToken": "SH9-log-mnt00001",
+ "usageThreshold": 107374182400,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-westus/providers/Microsoft.Network/virtualNetworks/rg-westus-vnet/subnets/default",
+ "throughputMibps": 1,
+ "capacityPoolResourceId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-westus/providers/Microsoft.NetApp/netAppAccounts/ANF-WestUS-test/capacityPools/avg",
+ "proximityPlacementGroup": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-westus/providers/Microsoft.Compute/proximityPlacementGroups/ppg-westus-test",
+ "volumeSpecName": "log",
+ "maximumNumberOfFiles": 100000000
+ }
+ },
+ {
+ "name": "SH9-shared",
+ "properties": {
+ "serviceLevel": "premium",
+ "creationToken": "SH9-shared",
+ "usageThreshold": 107374182400,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-westus/providers/Microsoft.Network/virtualNetworks/rg-westus-vnet/subnets/default",
+ "throughputMibps": 1,
+ "capacityPoolResourceId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-westus/providers/Microsoft.NetApp/netAppAccounts/ANF-WestUS-test/capacityPools/avg",
+ "proximityPlacementGroup": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-westus/providers/Microsoft.Compute/proximityPlacementGroups/ppg-westus-test",
+ "volumeSpecName": "shared",
+ "maximumNumberOfFiles": 100000000
+ }
+ },
+ {
+ "name": "SH9-data-backup",
+ "properties": {
+ "serviceLevel": "premium",
+ "creationToken": "SH9-data-backup",
+ "usageThreshold": 107374182400,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-westus/providers/Microsoft.Network/virtualNetworks/rg-westus-vnet/subnets/default",
+ "throughputMibps": 1,
+ "capacityPoolResourceId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-westus/providers/Microsoft.NetApp/netAppAccounts/ANF-WestUS-test/capacityPools/avg",
+ "proximityPlacementGroup": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-westus/providers/Microsoft.Compute/proximityPlacementGroups/ppg-westus-test",
+ "volumeSpecName": "data-backup",
+ "maximumNumberOfFiles": 100000000
+ }
+ },
+ {
+ "name": "SH9-log-backup",
+ "properties": {
+ "serviceLevel": "premium",
+ "creationToken": "SH9-log-backup",
+ "usageThreshold": 107374182400,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-westus/providers/Microsoft.Network/virtualNetworks/rg-westus-vnet/subnets/default",
+ "throughputMibps": 1,
+ "capacityPoolResourceId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-westus/providers/Microsoft.NetApp/netAppAccounts/ANF-WestUS-test/capacityPools/avg",
+ "proximityPlacementGroup": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-westus/providers/Microsoft.Compute/proximityPlacementGroups/ppg-westus-test",
+ "volumeSpecName": "log-backup",
+ "maximumNumberOfFiles": 100000000
+ }
+ }
+ ]
+ }
+}
+```
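+
+The sample result shows a `provisioningState` of `Creating`. If you want to follow the deployment, one option is to poll the volume group with a GET request until the state reaches `Succeeded`; a sketch that reuses the variables from the previous steps:
+
+```bash
+# Optional: check the provisioning state of the volume group
+curl -s -H "Authorization: Bearer $token" \
+  "https://management.azure.com/subscriptions/$subId/resourceGroups/rg-westus/providers/Microsoft.NetApp/netAppAccounts/ANF-WestUS-test/volumeGroups/SAP-HANA-SH9-00001?api-version=2022-03-01" \
+  | jq -r .properties.provisioningState
+```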
+
+### Example 2: Deploy volumes for an additional HANA Host for a multiple-host HANA configuration
+
+To create a multiple-host HANA system, you need to add additional hosts to the previously deployed HANA hosts. Additional hosts only require a data and a log volume for each host you add. In this example, a volume group is added for host number `00002`.
+
+This example is similar to the single-host system request in the earlier example, except it only contains the data and log volumes.
+
+```json
+{
+ "location": "westus",
+ "properties": {
+ "groupMetaData": {
+ "groupDescription": "Test group for SH9, host #2",
+ "applicationType": "SAP-HANA",
+ "applicationIdentifier": "SH9",
+ "deploymentSpecId": "20542149-bfca-5618-1879-9863dc6767f1"
+ },
+ "volumes": [
+ {
+ "name": "SH9-data-mnt00002",
+ "properties": {
+ "creationToken": "SH9-data-mnt00002",
+ "serviceLevel": "premium",
+ "throughputMibps": 400,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": <SubnetId>,
+ "usageThreshold": 107374182400,
+ "volumeSpecName": "data",
+ "capacityPoolResourceId": <CapacityPoolResourceId>,
+ "proximityPlacementGroup": <ProximityPlacementGroupResourceId>
+ }
+ },
+ {
+ "name": "SH9-log-mnt00002",
+ "properties": {
+ "creationToken": "SH9-log-mnt00002",
+ "serviceLevel": "premium",
+ "throughputMibps": 250,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": <SubnetId>,
+ "usageThreshold": 107374182400,
+ "volumeSpecName": "log",
+ "capacityPoolResourceId": <CapacityPoolResourceId>,
+ "proximityPlacementGroup": <ProximityPlacementGroupResourceId>
+ }
+ }
+ ]
+  }
+}
+```
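+
+Deploy this request with the same curl pattern used for the first host, saving the JSON first (for example as `sh9-host2.json`, a file name chosen here only for illustration) and using a new volume group name for host `00002`:
+
+```bash
+# Hypothetical example: create the volume group for the second HANA host
+curl -X PUT -H "Authorization: Bearer $token" -H "Content-Type:application/json" -H "Accept:application/json" \
+  -d @sh9-host2.json \
+  "https://management.azure.com/subscriptions/$subId/resourceGroups/<ResourceGroup>/providers/Microsoft.NetApp/netAppAccounts/<NtapAccount>/volumeGroups/SAP-HANA-SH9-00002?api-version=2022-03-01" | jq .
+```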
+
+### Example 3: Deploy volumes for a secondary HANA system using HANA system replication
+
+HANA System Replication (HSR) is used to set up a secondary HANA database that uses the same SAP system identifier (SID) as the primary database but has its own volumes. Typically, HSR setups are in different zones and therefore require different proximity placement groups.
+
+Volumes for a secondary database need to have different volume names. In this example, volumes are created for a secondary HANA system that is in an HSR relationship with the single-host HANA system (the primary) described in example 1.
+
+It's recommended that you:
+1. Use the same volume names as the primary volumes using the prefix `HSR-`.
+1. Add Azure tags to the volumes to identify the corresponding primary volumes:
+ * Name: `HSRPartnerStorageResourceId`
+ * Value: `<Partner Volume ID>`
+
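+One way to look up the primary (partner) volume ID for the tag value is with the Azure CLI; a sketch in which the resource group, account, and capacity pool names are placeholders:
+
+```bash
+# Hypothetical example: read the resource ID of the primary data volume
+# to use as the HSRPartnerStorageResourceId tag value
+az netappfiles volume show \
+  --resource-group <ResourceGroup> \
+  --account-name <NtapAccount> \
+  --pool-name <CapacityPool> \
+  --volume-name SH9-data-mnt00001 \
+  --query id --output tsv
+```
+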
+This example encompasses the creation of data, log, shared, data-backup, and log-backup volumes, demonstrating best practices for naming, sizing, and throughputs.
+
+```json
+{
+ "location": "westus",
+ "properties": {
+ "groupMetaData": {
+ "groupDescription": "HSR Secondary: Test group for SH9",
+ "applicationType": "SAP-HANA",
+ "applicationIdentifier": "SH9",
+ "deploymentSpecId": "20542149-bfca-5618-1879-9863dc6767f1"
+ },
+ "volumes": [
+ {
+ "name": "HSR-SH9-data-mnt00001",
+ "tags": {"HSRPartnerStorageResourceId": "<PartnerVolumeId>"},
+ "properties": {
+ "creationToken": "HSR-SH9-data-mnt00001",
+ "serviceLevel": "premium",
+ "throughputMibps": 400,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": <SubnetId>,
+ "usageThreshold": 107374182400,
+ "volumeSpecName": "data",
+ "capacityPoolResourceId": <CapacityPoolResourceId2>,
+ "proximityPlacementGroup": <ProximityPlacementGroupResourceId2>
+ }
+ },
+ {
+ "name": "HSR-SH9-log-mnt00001",
+ "tags": {"HSRPartnerStorageResourceId": "<PartnerVolumeId>"},
+ "properties": {
+ "creationToken": "HSR-SH9-log-mnt00001",
+ "serviceLevel": "premium",
+ "throughputMibps": 250,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": <SubnetId>,
+ "usageThreshold": 107374182400,
+ "volumeSpecName": "log",
+ "capacityPoolResourceId": <CapacityPoolResourceId2>,
+ "proximityPlacementGroup": <ProximityPlacementGroupResourceId2>
+ }
+ },
+ {
+ "name": "HSR-SH9-shared",
+ "tags": {"HSRPartnerStorageResourceId": "<PartnerVolumeId>"},
+ "properties": {
+ "creationToken": "HSR-SH9-shared",
+ "serviceLevel": "premium",
+ "throughputMibps": 64,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": <SubnetId>,
+ "usageThreshold": 1099511627776,
+ "volumeSpecName": "shared",
+ "capacityPoolResourceId": <CapacityPoolResourceId2>,
+ "proximityPlacementGroup": <ProximityPlacementGroupResourceId2>
+ }
+ },
+ {
+ "name": "HSR-SH9-data-backup",
+ "tags": {"HSRPartnerStorageResourceId": "<PartnerVolumeId>"},
+ "properties": {
+ "creationToken": "HSR-SH9-data-backup",
+ "serviceLevel": "premium",
+ "throughputMibps": 128,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": <SubnetId>,
+ "usageThreshold": 214748364800,
+ "volumeSpecName": "data-backup",
+ "capacityPoolResourceId": <CapacityPoolResourceId2>,
+ "proximityPlacementGroup": <ProximityPlacementGroupResourceId2>
+ }
+ },
+ {
+ "name": "HSR-SH9-log-backup",
+ "tags": {"HSRPartnerStorageResourceId": "<PartnerVolumeId>"},
+ "properties": {
+ "creationToken": "HSR-SH9-log-backup",
+ "serviceLevel": "premium",
+ "throughputMibps": 250,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": <SubnetId>,
+ "usageThreshold": 549755813888,
+ "volumeSpecName": "log-backup",
+ "capacityPoolResourceId": <CapacityPoolResourceId2>,
+ "proximityPlacementGroup": <ProximityPlacementGroupResourceId2>
+ }
+ }
+ ]
+ }
+}
+```
+
+### Example 4: Deploy volumes for a disaster recovery (DR) HANA system using cross-region replication
+
+Cross-region replication is one way to set up a disaster recovery configuration for HANA. The volumes of the HANA database in the DR region are replicated on the storage side using cross-region replication, in contrast to HSR, which replicates at the application level and requires the HANA VMs to be deployed and running. Refer to [Add volumes for an SAP HANA system as a DR system using cross-region replication](application-volume-group-disaster-recovery.md) to understand for which volumes cross-region replication relationships are required (data, shared, log-backup), not allowed (log), or optional (data-backup).
+
+In this example, the following placeholders are specified and should be replaced by values specific to your configuration:
+1. `<CapacityPoolResourceId3>`: DR capacity pool resource ID, for example:
+`/subscriptions/11111111-2222-3333-4444-555555555555/resourceGroups/myRG/providers/Microsoft.NetApp/netAppAccounts/account1/capacityPools/DR_SH9_HSR_Pool`
+2. `<ProximityPlacementGroupResourceId3>`: DR proximity placement group, for example:`/subscriptions/11111111-2222-3333-4444-555555555555/resourceGroups/test/providers/Microsoft.Compute/proximityPlacementGroups/DR_SH9_PPG`
+3. `<SrcVolumeId_data>`, `<SrcVolumeId_shared>`, `<SrcVolumeId_data-backup>`, `<SrcVolumeId_log-backup>`: cross-region replication source volume IDs for the data, shared, data-backup, and log-backup cross-region replication destination volumes.
+
+```json
+{
+ "location": "eastus",
+ "properties": {
+ "groupMetaData": {
+ "groupDescription": "Data Protection: Test group for SH9",
+ "applicationType": "SAP-HANA",
+ "applicationIdentifier": "SH9",
+ "deploymentSpecId": "20542149-bfca-5618-1879-9863dc6767f1"
+ },
+ "volumes": [
+ {
+ "name": "DR-SH9-data-mnt00001",
+ "properties": {
+ "creationToken": "DR-SH9-data-mnt00001",
+ "serviceLevel": "premium",
+ "throughputMibps": 400,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": <SubnetId>,
+ "usageThreshold": 107374182400,
+ "volumeSpecName": "data",
+ "capacityPoolResourceId": <CapacityPoolResourceId3>,
+ "proximityPlacementGroup": <ProximityPlacementGroupResourceId3>,
+ "volumeType": "DataProtection",
+ "dataProtection": {
+ "replication": {
+ "endpointType": "dst",
+ "remoteVolumeResourceId": <SrcVolumeId_data>,
+ "replicationSchedule": "hourly"
+ }
+ }
+ }
+ },
+ {
+ "name": "DR-SH9-log-mnt00001",
+ "properties": {
+ "creationToken": "DR-SH9-log-mnt00001",
+ "serviceLevel": "premium",
+ "throughputMibps": 250,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": <SubnetId>,
+ "usageThreshold": 107374182400,
+ "volumeSpecName": "log",
+ "capacityPoolResourceId": <CapacityPoolResourceId3>,
+          "proximityPlacementGroup": <ProximityPlacementGroupResourceId3>
+ }
+ },
+ {
+ "name": "DR-SH9-shared",
+ "properties": {
+ "creationToken": "DR-SH9-shared",
+ "serviceLevel": "premium",
+ "throughputMibps": 64,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": <SubnetId>,
+ "usageThreshold": 1099511627776,
+ "volumeSpecName": "shared",
+ "capacityPoolResourceId": <CapacityPoolResourceId3>,
+ "proximityPlacementGroup": <ProximityPlacementGroupResourceId3>,
+ "volumeType": "DataProtection",
+ "dataProtection": {
+ "replication": {
+ "endpointType": "dst",
+ "remoteVolumeResourceId": <SrcVolumeId_shared>,
+ "replicationSchedule": "hourly"
+ }
+ }
+ }
+ },
+ {
+ "name": "DR-SH9-data-backup",
+ "properties": {
+ "creationToken": "DR-SH9-data-backup",
+ "serviceLevel": "premium",
+ "throughputMibps": 128,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": <SubnetId>,
+ "usageThreshold": 214748364800,
+ "volumeSpecName": "data-backup",
+ "capacityPoolResourceId": <CapacityPoolResourceId3>,
+ "proximityPlacementGroup": <ProximityPlacementGroupResourceId3>,
+ "volumeType": "DataProtection",
+ "dataProtection": {
+ "replication": {
+ "endpointType": "dst",
+ "remoteVolumeResourceId": <SrcVolumeId_data-backup>,
+ "replicationSchedule": "daily"
+ }
+ }
+ }
+ },
+ {
+ "name": "DR-SH9-log-backup",
+ "properties": {
+ "creationToken": "DR-SH9-log-backup",
+ "serviceLevel": "premium",
+ "throughputMibps": 250,
+ "exportPolicy": {
+ "rules": [
+ {
+ "ruleIndex": 1,
+ "unixReadOnly": false,
+ "unixReadWrite": true,
+ "kerberos5ReadOnly": false,
+ "kerberos5ReadWrite": false,
+ "kerberos5iReadOnly": false,
+ "kerberos5iReadWrite": false,
+ "kerberos5pReadOnly": false,
+ "kerberos5pReadWrite": false,
+ "cifs": false,
+ "nfsv3": false,
+ "nfsv41": true,
+ "allowedClients": "0.0.0.0/0",
+ "hasRootAccess": true
+ }
+ ]
+ },
+ "protocolTypes": [
+ "NFSv4.1"
+ ],
+ "subnetId": <SubnetId>,
+ "usageThreshold": 549755813888,
+ "volumeSpecName": "log-backup",
+ "capacityPoolResourceId": <CapacityPoolResourceId3>,
+ "proximityPlacementGroup": <ProximityPlacementGroupResourceId3>,
+ "volumeType": "DataProtection",
+ "dataProtection": {
+ "replication": {
+ "endpointType": "dst",
+ "remoteVolumeResourceId": <SrcVolumeId_log-backup>,
+ "replicationSchedule": "_10minutely"
+ }
+ }
+ }
+ }
+ ]
+ }
+}
+```
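+
+After the destination (data protection) volumes are created, each replication relationship typically still needs to be authorized from the corresponding source volume, as described in the cross-region replication documentation. A hedged Azure CLI sketch, in which the source resource group, account, and pool names are placeholders:
+
+```bash
+# Hypothetical example: authorize replication on the source data volume,
+# pointing it at the newly created DR destination volume
+az netappfiles volume replication approve \
+  --resource-group <SourceResourceGroup> \
+  --account-name <SourceNtapAccount> \
+  --pool-name <SourceCapacityPool> \
+  --volume-name SH9-data-mnt00001 \
+  --remote-volume-resource-id "/subscriptions/<SubscriptionId>/resourceGroups/<DrResourceGroup>/providers/Microsoft.NetApp/netAppAccounts/<DrNtapAccount>/capacityPools/<DrCapacityPool>/volumes/DR-SH9-data-mnt00001"
+```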
+
+## Next steps
+
+* [Deploy the first SAP HANA host using application volume group for SAP HANA](application-volume-group-deploy-first-host.md)
+* [Add hosts to a multiple-host SAP HANA system using application volume group for SAP HANA](application-volume-group-add-hosts.md)
+* [Add volumes for an SAP HANA system as a secondary database in HSR](application-volume-group-add-volume-secondary.md)
+* [Add volumes for an SAP HANA system as a DR system using cross-region replication](application-volume-group-disaster-recovery.md).
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-active-directory-connections.md
Several features of Azure NetApp Files require that you have an Active Directory
`KerberosEncryptionType` is a multivalued parameter that supports AES-128 and AES-256 values.
+ For more information, refer to the [Set-ADUser documentation](/powershell/module/activedirectory/set-aduser).
+ * If you have a requirement to enable and disable certain Kerberos encryption types for Active Directory computer accounts for domain-joined Windows hosts used with Azure NetApp Files, you must use the Group Policy `Network Security: Configure Encryption types allowed for Kerberos`. Do not set the registry key `HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Policies\System\Kerberos\Parameters\SupportedEncryptionTypes`. Doing this will break Kerberos authentication with Azure NetApp Files for the Windows host where this registry key was manually set.
Several features of Azure NetApp Files require that you have an Active Directory
For more information, refer to [Network security: Configure encryption types allowed for Kerberos](/windows/security/threat-protection/security-policy-settings/network-security-configure-encryption-types-allowed-for-kerberos) or [Windows Configurations for Kerberos Supported Encryption Types](/archive/blogs/openspecification/windows-configurations-for-kerberos-supported-encryption-type)
-* For more information, refer to the [Set-ADUser documentation](/powershell/module/activedirectory/set-aduser).
- ## Create an Active Directory connection 1. From your NetApp account, select **Active Directory connections**, then select **Join**.
azure-netapp-files Cross Region Replication Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-introduction.md
na Previously updated : 09/14/2022 Last updated : 09/28/2022
Azure NetApp Files volume replication is supported between various [Azure region
| US Government | US Gov Arizona | US Gov Virginia | >[!NOTE]
->There may be a discrepancy in the size of snapshots between source and destination. This discrepancy is expected. To learn more about snapshots, refer to [How Azure NetApp Files snapshots work](snapshots-introduction.md).
+>There may be a discrepancy in the size and number of snapshots between source and destination. This discrepancy is expected. Snapshot policies and replication schedules influence the number of snapshots; combined with the amount of data changed between snapshots, they also influence the size of snapshots. To learn more about snapshots, refer to [How Azure NetApp Files snapshots work](snapshots-introduction.md).
## Service-level objectives
azure-netapp-files Performance Linux Nfs Read Ahead https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-linux-nfs-read-ahead.md
na Previously updated : 02/02/2022 Last updated : 09/29/2022 # Linux NFS read-ahead best practices for Azure NetApp Files
To persistently set read-ahead for NFS mounts, `udev` rules can be written as fo
1. Create and test `/etc/udev/rules.d/99-nfs.rules`:
- `SUBSYSTEM=="bdi", ACTION=="add", PROGRAM="/bin/awk -v bdi=$kernel 'BEGIN{ret=1} {if ($4 == bdi) {ret=0}} END{exit ret}' /proc/fs/nfsfs/volumes", ATTR{read_ahead_kb}="15380"`
+ `SUBSYSTEM=="bdi", ACTION=="add", PROGRAM="<absolute_path>/awk -v bdi=$kernel 'BEGIN{ret=1} {if ($4 == bdi) {ret=0}} END{exit ret}' /proc/fs/nfsfs/volumes", ATTR{read_ahead_kb}="15380"`
2. Apply the `udev` rule:
azure-resource-manager Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/visual-studio-code.md
Title: Create Bicep files by using Visual Studio Code description: Describes how to create Bicep files by using Visual Studio Code Previously updated : 09/14/2022 Last updated : 09/29/2022 # Create Bicep files by using Visual Studio Code
This command creates a parameter file in the same folder as the Bicep file. The
The `insert resource` command adds a resource declaration in the Bicep file by providing the resource ID of an existing resource. After you select **Insert Resource**, enter the resource ID in the command palette. It takes a few moments to insert the resource.
-You can find the resource ID from the Azure portal, or by using:
+You can find the resource ID by using one of these methods:
-# [CLI](#tab/CLI)
+- Use the [Azure Resources extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azureresourcegroups).
-```azurecli
-az resource list
-```
+ :::image type="content" source="./media/visual-studio-code/visual-studio-code-azure-resources-extension.png" alt-text="Screenshot of Visual Studio Code Azure Resources extension.":::
-# [PowerShell](#tab/PowerShell)
+- Use the [Azure portal](https://portal.azure.com).
+- Use Azure CLI or Azure PowerShell:
-```azurepowershell
-Get-AzResource
-```
+ # [CLI](#tab/CLI)
-
+ ```azurecli
+ az resource list
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ Get-AzResource
+ ```
+
+
Similar to exporting templates, the process tries to create a usable resource. However, most of the inserted resources require some modification before they can be used to deploy Azure resources.
azure-resource-manager Publish Service Catalog App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/publish-service-catalog-app.md
Previously updated : 08/22/2022 Last updated : 09/29/2022 # Quickstart: Create and publish an Azure Managed Application definition
az storage account create \
--location eastus \ --sku Standard_LRS \ --kind StorageV2
+```
+
+After you create the storage account, add the role assignment _Storage Blob Data Contributor_ to the storage account scope. Assign access to your Azure Active Directory user account. Depending on your access level in Azure, you might need other permissions assigned by your administrator. For more information, see [Assign an Azure role for access to blob data](../../storage/blobs/assign-azure-role-data-access.md).
+
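+For example, a hedged sketch of that role assignment with the Azure CLI, where the signed-in user's name is a placeholder:
+
+```azurecli-interactive
+# Hypothetical example: grant your own account Storage Blob Data Contributor
+# on the storage account created above
+storageId=$(az storage account show --name demostorageaccount --query id --output tsv)
+az role assignment create \
+  --assignee "<your-user@example.com>" \
+  --role "Storage Blob Data Contributor" \
+  --scope "$storageId"
+```
+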
+After you add the role to the storage account, it takes a few minutes to become active in Azure. You can then use the parameter `--auth-mode login` in the commands to create the container and upload the file.
+```azurecli-interactive
az storage container create \ --account-name demostorageaccount \ --name appcontainer \
+ --auth-mode login \
--public-access blob az storage blob upload \ --account-name demostorageaccount \ --container-name appcontainer \
+ --auth-mode login \
--name "app.zip" \ --file "./app.zip"- ```
-When you run the Azure CLI command to create the container, you might see a warning message about credentials, but the command will be successful. The reason is because although you own the storage account you assign roles like _Storage Blob Data Contributor_ to the storage account scope. For more information, see [Assign an Azure role for access to blob data](../../storage/blobs/assign-azure-role-data-access.md). After you add a role, it takes a few minutes to become active in Azure. You can then append the command with `--auth-mode login` and resolve the warning message.
+For more information about storage authentication, see [Choose how to authorize access to blob data with Azure CLI](../../storage/blobs/authorize-data-operations-cli.md).
In this section you'll get identity information from Azure Active Directory, cre
### Create an Azure Active Directory user group or application
-The next step is to select a user group, user, or application for managing the resources for the customer. This identity has permissions on the managed resource group according to the role that is assigned. The role can be any Azure built-in role like Owner or Contributor. To create a new Active Directory user group, see [Create a group and add members in Azure Active Directory](../../active-directory/fundamentals/active-directory-groups-create-azure-portal.md).
+The next step is to select a user group, user, or application for managing the resources for the customer. This identity has permissions on the managed resource group according to the role that's assigned. The role can be any Azure built-in role like Owner or Contributor. To create a new Active Directory user group, see [Create a group and add members in Azure Active Directory](../../active-directory/fundamentals/active-directory-groups-create-azure-portal.md).
-You need the object ID of the user group to use for managing the resources.
+This example uses a user group, so you need the object ID of the user group to use for managing the resources. Replace the placeholder `mygroup` with your group's name.
# [PowerShell](#tab/azure-powershell)
az group create --name appDefinitionGroup --location westcentralus
Create the managed application definition resource. In the `Name` parameter, replace the placeholder `demostorageaccount` with your unique storage account name.
+The `blob` command that's run from Azure PowerShell or Azure CLI creates a variable that's used to get the URL for the package _.zip_ file. That variable is used in the command that creates the managed application definition.
+ # [PowerShell](#tab/azure-powershell) ```azurepowershell-interactive
New-AzManagedApplicationDefinition `
blob=$(az storage blob url \ --account-name demostorageaccount \ --container-name appcontainer \
+ --auth-mode login \
--name app.zip --output tsv) az managedapp definition create \
azure-video-indexer Monitor Video Indexer Data Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/monitor-video-indexer-data-reference.md
See [Monitoring Azure Video Indexer](monitor-video-indexer.md) for details on co
## Metrics
-Azure Video Indexer currently does not support any monitoring on metrics.
+Azure Video Indexer currently does not support any metrics monitoring.
+ <!-- REQUIRED if you support Metrics. If you don't, keep the section but call that out. Some services are only onboarded to logs. <!-- Please keep headings in this order -->
For more information, see a list of [all platform metrics supported in Azure Mon
## Metric dimensions
-Azure Video Indexer currently does not support any monitoring on metrics.
+Azure Video Indexer currently does not support any metrics monitoring.
<!-- REQUIRED. Please keep headings in this order --> <!-- If you have metrics with dimensions, outline it here. If you have no dimensions, say so. Questions email azmondocs@microsoft.com -->
Resource Provider and Type: [Microsoft.VideoIndexer/accounts](/azure/azure-monit
| Category | Display Name | Additional information | |:|:-|| | VIAudit | Azure Video Indexer Audit Logs | Logs are produced from both the Video Indexer portal and the REST API. |
-| IndexingLogs | Indexing Logs | Azure Video Indexer indexing logs to monitor all files uploads, indexing jobs and Re-indexing when needed. |
+| IndexingLogs | Indexing Logs | Azure Video Indexer indexing logs to monitor all file uploads, indexing, and reindexing jobs. |
<!-- --**END Examples** - -->
NOTE: YOU WILL NOW HAVE TO MANUALLY MAINTAIN THIS SECTION to make sure it stays
| Table | Description | Additional information | |:|:-||
-| [VIAudit](/azure/azure-monitor/reference/tables/tables-resourcetype#azure-video-indexer)<!-- (S/azure/azure-monitor/reference/tables/viaudit)--> | <!-- description copied from previous link --> Events produced using Azure Video Indexer [portal](https://aka.ms/VIportal) or [REST API](https://aka.ms/vi-dev-portal). | |
-|VIIndexing| Events produced using Azure Video Indexer [upload](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) and [Re-indexing](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video) APIs. |
+| [VIAudit](/azure/azure-monitor/reference/tables/tables-resourcetype#azure-video-indexer)<!-- (S/azure/azure-monitor/reference/tables/viaudit)--> | <!-- description copied from previous link --> Events produced using the Azure Video Indexer [website](https://aka.ms/VIportal) or the [REST API portal](https://aka.ms/vi-dev-portal). | |
+|VIIndexing| Events produced using the Azure Video Indexer [upload](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) and [re-index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video) APIs. |
<!--| [AzureMetrics](/azure/azure-monitor/reference/tables/azuremetrics) | <!-- description copied from previous link --> <!--Metric data emitted by Azure services that measure their health and performance. | *TODO other important information about this type | | etc. | | |
backup Backup Azure Arm Vms Prepare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-arm-vms-prepare.md
Title: Back up Azure VMs in a Recovery Services vault description: Describes how to back up Azure VMs in a Recovery Services vault using the Azure Backup Previously updated : 06/01/2021 Last updated : 09/29/2022+++ # Back up Azure VMs in a Recovery Services vault
Modify the storage replication type as follows:
To apply a backup policy to your Azure VMs, follow these steps:
-1. Navigate to Backup center and click **+Backup** from the **Overview** tab.
+1. Go to the Backup center and click **+Backup** from the **Overview** tab.
![Backup button](./media/backup-azure-arm-vms-prepare/backup-button.png)
If you selected to create a new backup policy, fill in the policy settings.
2. In **Backup schedule**, specify when backups should be taken. You can take daily or weekly backups for Azure VMs. 3. In **Instant Restore**, specify how long you want to retain snapshots locally for instant restore. * When you restore, backed up VM disks are copied from storage, across the network to the recovery storage location. With instant restore, you can leverage locally stored snapshots taken during a backup job, without waiting for backup data to be transferred to the vault.
- * You can retain snapshots for instant restore for between one to five days. Two days is the default setting.
+ * You can retain snapshots for instant restore for between one and five days. The default setting is *two days*.
4. In **Retention range**, specify how long you want to keep your daily or weekly backup points. 5. In **Retention of monthly backup point** and **Retention of yearly backup point**, specify whether you want to keep a monthly or yearly backup of your daily or weekly backups. 6. Select **OK** to save the policy.
If you selected to create a new backup policy, fill in the policy settings.
![New backup policy](./media/backup-azure-arm-vms-prepare/new-policy.png) > [!NOTE]
- > Azure Backup doesn't support automatic clock adjustment for daylight-saving changes for Azure VM backups. As time changes occur, modify backup policies manually as required.
+>- Azure Backup doesn't support automatic clock adjustment for daylight-saving changes for Azure VM backups. As time changes occur, modify backup policies manually as required.
+>- If you want hourly backups, then you can configure *Enhanced backup policy*. For more information, see [Back up an Azure VM using Enhanced policy](backup-azure-vms-enhanced-policy.md#create-an-enhanced-policy-and-configure-vm-backup).
## Trigger the initial backup The initial backup will run in accordance with the schedule, but you can run it immediately as follows:
-1. Navigate to Backup center and select the **Backup Instances** menu item.
+1. Go to the Backup center and select the **Backup Instances** menu item.
1. Select **Azure Virtual machines** as the **Datasource type**. Then search for the VM that you have configured for backup. 1. Right-click the relevant row or select the more icon (…), and then click **Backup Now**. 1. In **Backup Now**, use the calendar control to select the last day that the recovery point should be retained. Then select **OK**.
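If you prefer to script these steps, here's a minimal Azure CLI sketch, assuming an existing Recovery Services vault named `myVault` in resource group `myRG`, a VM named `myVM`, and the default backup policy; all names and the retention date are placeholders:

```bash
# Enable backup for the VM with an existing backup policy (placeholder names).
az backup protection enable-for-vm \
  --resource-group myRG \
  --vault-name myVault \
  --vm myVM \
  --policy-name DefaultPolicy

# Trigger the initial backup immediately and retain the recovery point until the given date (DD-MM-YYYY).
az backup protection backup-now \
  --resource-group myRG \
  --vault-name myVault \
  --container-name myVM \
  --item-name myVM \
  --backup-management-type AzureIaasVM \
  --retain-until 31-12-2022
```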
center-sap-solutions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/overview.md
You can use ACSS to deploy the following types of SAP systems:
For existing SAP systems that run on Azure, there's a simple registration experience. You can register the following types of existing SAP systems that run on Azure: - An SAP system that runs on SAP NetWeaver or ABAP stack-- SAP systems that run on SUSE and RHEL Linux operating systems
+- SAP systems that run on Windows, SUSE, and RHEL Linux operating systems
- SAP systems that run on HANA, DB2, SQL Server, Oracle, Max DB, or SAP ASE databases ACSS brings services, tools and frameworks together to provide an end-to-end unified experience for deployment and management of SAP workloads on Azure, creating the foundation for you to build innovative solutions for your unique requirements.
After you create a VIS, you can:
- Start and stop the SAP application tier. - Get quality checks and insights about your SAP system. - Monitor your Azure infrastructure metrics for your SAP system resources. For example, the CPU percentage used for ASCS and Application Server VMs, or disk input/output operations per second (IOPS).
+- Analyze the cost of running your SAP system on Azure (VMs, disks, and load balancers).
## Next steps
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md
na Previously updated : 9/19/2022 Last updated : 9/29/2022
The following tables show the Microsoft Security Response Center (MSRC) updates
## September 2022 Guest OS
->[!NOTE]
-
->The September Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the September Guest OS. This list is subject to change.
-
-| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
-| | | | | |
-| Rel 22-09 | [5017315] | Latest Cumulative Update(LCU) | 6.48 | Sep 13, 2022 |
-| Rel 22-09 | [5016618] | IE Cumulative Updates | 2.128, 3.115, 4.108 | Aug 9, 2022 |
-| Rel 22-09 | [5017316] | Latest Cumulative Update(LCU) | 7.16 | Sep 13, 2022 |
-| Rel 22-09 | [5017305] | Latest Cumulative Update(LCU) | 5.72 | Sep 13, 2022 |
-| Rel 22-09 | [5013641] | .NET Framework 3.5 and 4.7.2 Cumulative Update | 6.48 | May 10, 2022 |
-| Rel 22-09 | [5017397] | Servicing Stack Update | 2.128 | Sep 13, 2022 |
-| Rel 22-09 | [5017361] | September '22 Rollup | 2.128 | Sep 13, 2022 |
-| Rel 22-09 | [5013637] | .NET Framework 3.5 Security and Quality Rollup LKG | 2.128 | Sep 13, 2022 |
-| Rel 22-09 | [5013644] | .NET Framework 4.6.2 Security and Quality Rollup LKG | 2.128 | May 10, 2022 |
-| Rel 22-09 | [5016263] | Servicing Stack Update | 3.115 | July 12, 2022 |
-| Rel 22-09 | [5017370] | September '22 Rollup | 3.115 | Sep 13, 2022 |
-| Rel 22-09 | [5013635] | .NET Framework 3.5 Security and Quality Rollup LKG | 3.115 | Sep 13, 2022 |
-| Rel 22-09 | [5013642] | .NET Framework 4.6.2 Security and Quality Rollup LKG | 3.115 | May 10, 2022 |
-| Rel 22-09 | [5017398] | Servicing Stack Update | 4.108 | Sep 13, 2022 |
-| Rel 22-09 | [5017367] | Monthly Rollup | 4.108 | Sep 13, 2022 |
-| Rel 22-09 | [5013638] | .NET Framework 3.5 Security and Quality Rollup LKG | 4.108 | Jun 14, 2022 |
-| Rel 22-09 | [5013643] | .NET Framework 4.6.2 Security and Quality Rollup LKG | 4.108 | May 10, 2022 |
-| Rel 22-09 | [4578013] | OOB Standalone Security Update | 4.108 | Aug 19, 2020 |
-| Rel 22-09 | [5017396] | Servicing Stack Update | 5.72 | Sep 13, 2022 |
-| Rel 22-09 | [4494175] | Microcode | 5.72 | Sep 1, 2020 |
-| Rel 22-09 | 5015896 | Servicing Stack Update | 6.48 | Sep 1, 2020 |
-| Rel 22-09 | [5013626] | .NET Framework 4.8 Security and Quality Rollup LKG | 6.48 | May 10, 2022 |
+
+| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
+| | | | | |
+| Rel 22-09 | [5017315] | Latest Cumulative Update(LCU) | [6.48] | Sep 13, 2022 |
+| Rel 22-09 | [5016618] | IE Cumulative Updates | [2.128], [3.115], [4.108] | Aug 9, 2022 |
+| Rel 22-09 | [5017316] | Latest Cumulative Update(LCU) | [7.16] | Sep 13, 2022 |
+| Rel 22-09 | [5017305] | Latest Cumulative Update(LCU) | [5.72] | Sep 13, 2022 |
+| Rel 22-09 | [5013641] | .NET Framework 3.5 and 4.7.2 Cumulative Update | [6.48] | May 10, 2022 |
+| Rel 22-09 | [5017397] | Servicing Stack Update | [2.128] | Sep 13, 2022 |
+| Rel 22-09 | [5017361] | September '22 Rollup | [2.128] | Sep 13, 2022 |
+| Rel 22-09 | [5013637] | .NET Framework 3.5 Security and Quality Rollup LKG | [2.128] | Sep 13, 2022 |
+| Rel 22-09 | [5013644] | .NET Framework 4.6.2 Security and Quality Rollup LKG | [2.128] | May 10, 2022 |
+| Rel 22-09 | [5016263] | Servicing Stack Update | [3.115] | July 12, 2022 |
+| Rel 22-09 | [5017370] | September '22 Rollup | [3.115] | Sep 13, 2022 |
+| Rel 22-09 | [5013635] | .NET Framework 3.5 Security and Quality Rollup LKG | [3.115] | Sep 13, 2022 |
+| Rel 22-09 | [5013642] | .NET Framework 4.6.2 Security and Quality Rollup LKG | [3.115] | May 10, 2022 |
+| Rel 22-09 | [5017398] | Servicing Stack Update | [4.108] | Sep 13, 2022 |
+| Rel 22-09 | [5017367] | Monthly Rollup | [4.108] | Sep 13, 2022 |
+| Rel 22-09 | [5013638] | .NET Framework 3.5 Security and Quality Rollup LKG | [4.108] | Jun 14, 2022 |
+| Rel 22-09 | [5013643] | .NET Framework 4.6.2 Security and Quality Rollup LKG | [4.108] | May 10, 2022 |
+| Rel 22-09 | [4578013] | OOB Standalone Security Update | [4.108] | Aug 19, 2020 |
+| Rel 22-09 | [5017396] | Servicing Stack Update | [5.72] | Sep 13, 2022 |
+| Rel 22-09 | [4494175] | Microcode | [5.72] | Sep 1, 2020 |
+| Rel 22-09 | [5015896] | Servicing Stack Update | [6.48] | Sep 1, 2020 |
+| Rel 22-09 | [5013626] | .NET Framework 4.8 Security and Quality Rollup LKG | [6.48] | May 10, 2022 |
[5017315]: https://support.microsoft.com/kb/5017315 [5016618]: https://support.microsoft.com/kb/5016618
The following tables show the Microsoft Security Response Center (MSRC) updates
[4494175]: https://support.microsoft.com/kb/4494175 [5015896]: https://support.microsoft.com/kb/5015896 [5013626]: https://support.microsoft.com/kb/5013626
+[2.128]: ./cloud-services-guestos-update-matrix.md#family-2-releases
+[3.115]: ./cloud-services-guestos-update-matrix.md#family-3-releases
+[4.108]: ./cloud-services-guestos-update-matrix.md#family-4-releases
+[5.72]: ./cloud-services-guestos-update-matrix.md#family-5-releases
+[6.48]: ./cloud-services-guestos-update-matrix.md#family-6-releases
+[7.16]: ./cloud-services-guestos-update-matrix.md#family-7-releases
## August 2022 Guest OS
cloud-services Cloud Services Guestos Update Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-update-matrix.md
na Previously updated : 9/02/2022 Last updated : 9/29/2022 # Azure Guest OS releases and SDK compatibility matrix
Unsure about how to update your Guest OS? Check [this][cloud updates] out.
## News updates +
+###### **September 29, 2022**
+The September Guest OS has released.
+ ###### **September 2, 2022** The August Guest OS has released.
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-7.16_202209-01 | September 29, 2022 | Post 7.18 |
| WA-GUEST-OS-7.15_202208-01 | September 2, 2022 | Post 7.17 |
-| WA-GUEST-OS-7.14_202207-01 | August 3, 2022 | Post 7.16 |
-|~~WA-GUEST-OS-7.13_202206-01~| July 11, 2022 | September 2, 2022 |
+|~~WA-GUEST-OS-7.14_202207-01~~| August 3, 2022 | September 29, 2022 |
+|~~WA-GUEST-OS-7.13_202206-01~~| July 11, 2022 | September 2, 2022 |
|~~WA-GUEST-OS-7.12_202205-01~~| May 26, 2022 | August 3, 2022 | |~~WA-GUEST-OS-7.11_202204-01~~| April 30, 2022 | July 11, 2022 | |~~WA-GUEST-OS-7.10_202203-01~~| March 19, 2022 | May 26, 2022 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-6.48_202209-01 | September 29, 2022 | Post 6.50 |
| WA-GUEST-OS-6.47_202208-01 | September 2, 2022 | Post 6.49 |
-| WA-GUEST-OS-6.46_202207-01 | August 3, 2022 | Post 6.48 |
-|~~WA-GUEST-OS-6.45_202206-01~| July 11, 2022 | September 2, 2022 |
+|~~WA-GUEST-OS-6.46_202207-01~~| August 3, 2022 | September 29, 2022 |
+|~~WA-GUEST-OS-6.45_202206-01~~| July 11, 2022 | September 2, 2022 |
|~~WA-GUEST-OS-6.44_202205-01~~| May 26, 2022 | August 3, 2022 | |~~WA-GUEST-OS-6.43_202204-01~~| April 30, 2022 | July 11, 2022 | |~~WA-GUEST-OS-6.42_202203-01~~| March 19, 2022 | May 26, 2022 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-5.72_202209-01 | September 29, 2022 | Post 5.74 |
| WA-GUEST-OS-5.71_202208-01 | September 2, 2022 | Post 5.73 |
-| WA-GUEST-OS-5.70_202207-01 | August 3, 2022 | Post 5.72 |
+|~~WA-GUEST-OS-5.70_202207-01~~| August 3, 2022 | September 29, 2022 |
|~~WA-GUEST-OS-5.69_202206-01~~| July 11, 2022 | September 2, 2022 | |~~WA-GUEST-OS-5.68_202205-01~~| May 26, 2022 | August 3, 2022 | |~~WA-GUEST-OS-5.67_202204-01~~| April 30, 2022 | July 11, 2022 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-4.108_202209-01 | September 29, 2022 | Post 4.110 |
| WA-GUEST-OS-4.107_202208-01 | September 2, 2022 | Post 4.109 |
-| WA-GUEST-OS-4.106_202207-02 | August 3, 2022 | Post 4.108 |
+|~~WA-GUEST-OS-4.106_202207-02~~| August 3, 2022 | September 29, 2022 |
|~~WA-GUEST-OS-4.105_202206-02~~| July 11, 2022 | September 2, 2022 | |~~WA-GUEST-OS-4.103_202205-01~~| May 26, 2022 | August 2, 2022 | |~~WA-GUEST-OS-4.102_202204-01~~| April 30, 2022 | July 11, 2022 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-3.115_202209-01 | September 29, 2022 | Post 3.117 |
| WA-GUEST-OS-3.114_202208-01 | September 2, 2022 | Post 3.116 |
-| WA-GUEST-OS-3.113_202207-02 | August 3, 2022 | Post 3.115 |
+|~~WA-GUEST-OS-3.113_202207-02~~| August 3, 2022 | September 29, 2022 |
|~~WA-GUEST-OS-3.112_202206-02~~| July 11, 2022 | September 2, 2022 | |~~WA-GUEST-OS-3.110_202205-01~~| May 26, 2022 | August 3, 2022 | |~~WA-GUEST-OS-3.109_202204-01~~| April 30, 2022 | July 11, 2022 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-2.128_202209-01 | September 29, 2022 | Post 2.130 |
| WA-GUEST-OS-2.127_202208-01 | September 2, 2022 | Post 2.129 |
-| WA-GUEST-OS-2.126_202207-02 | August 3, 2022 | Post 2.128 |
+|~~WA-GUEST-OS-2.126_202207-02~~| August 3, 2022 | September 29, 2022 |
|~~WA-GUEST-OS-2.125_202206-02~~| July 11, 2022 | September 2, 2022 | |~~WA-GUEST-OS-2.123_202205-01~~| May 26, 2022 | August 3, 2022 | |~~WA-GUEST-OS-2.122_202204-01~~| April 30, 2022 | July 11, 2022 |
cognitive-services Get Started Build Detector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/get-started-build-detector.md
In your web browser, navigate to the [Custom Vision web page](https://customvisi
![The new project dialog box has fields for name, description, and domains.](./media/get-started-build-detector/new-project.png)
-1. Enter a name and a description for the project. Then select a Resource Group. If your signed-in account is associated with an Azure account, the Resource Group dropdown will display all of your Azure Resource Groups that include a Custom Vision Service Resource.
+1. Enter a name and a description for the project. Then select your Custom Vision Training Resource. If your signed-in account is associated with an Azure account, the Resource dropdown will display all of your compatible Azure resources.
> [!NOTE]
- > If no resource group is available, please confirm that you have logged into [customvision.ai](https://customvision.ai) with the same account as you used to log into the [Azure portal](https://portal.azure.com/). Also, please confirm you have selected the same "Directory" in the Custom Vision website as the directory in the Azure portal where your Custom Vision resources are located. In both sites, you may select your directory from the drop down account menu at the top right corner of the screen.
+ > If no resource is available, please confirm that you have logged into [customvision.ai](https://customvision.ai) with the same account as you used to log into the [Azure portal](https://portal.azure.com/). Also, please confirm you have selected the same "Directory" in the Custom Vision website as the directory in the Azure portal where your Custom Vision resources are located. In both sites, you may select your directory from the drop down account menu at the top right corner of the screen.
+1. Under
1. Select __Object Detection__ under __Project Types__. 1. Next, select one of the available domains. Each domain optimizes the detector for specific types of images, as described in the following table. You can change the domain later if you want to.
cognitive-services Getting Started Build A Classifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/getting-started-build-a-classifier.md
In your web browser, navigate to the [Custom Vision web page](https://customvisi
![The new project dialog box has fields for name, description, and domains.](./media/getting-started-build-a-classifier/new-project.png)
-1. Enter a name and a description for the project. Then select a Resource Group. If your signed-in account is associated with an Azure account, the Resource Group dropdown will display all of your Azure Resource Groups that include a Custom Vision Service Resource.
+1. Enter a name and a description for the project. Then select your Custom Vision Training Resource. If your signed-in account is associated with an Azure account, the Resource dropdown will display all of your compatible Azure resources.
> [!NOTE]
- > If no resource group is available, please confirm that you have logged into [customvision.ai](https://customvision.ai) with the same account as you used to log into the [Azure portal](https://portal.azure.com/). Also, please confirm you have selected the same "Directory" in the Custom Vision web portal as the directory in the Azure portal where your Custom Vision resources are located. In both sites, you may select your directory from the drop down account menu at the top right corner of the screen.
+ > If no resource is available, please confirm that you have logged into [customvision.ai](https://customvision.ai) with the same account as you used to log into the [Azure portal](https://portal.azure.com/). Also, please confirm you have selected the same "Directory" in the Custom Vision website as the directory in the Azure portal where your Custom Vision resources are located. In both sites, you may select your directory from the drop down account menu at the top right corner of the screen.
1. Select __Classification__ under __Project Types__. Then, under __Classification Types__, choose either **Multilabel** or **Multiclass**, depending on your use case. Multilabel classification applies any number of your tags to an image (zero or more), while multiclass classification sorts images into single categories (every image you submit will be sorted into the most likely tag). You'll be able to change the classification type later if you want to.
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md
Each prebuilt neural voice supports a specific language and dialect, identified
> [!IMPORTANT] > Pricing varies for Prebuilt Neural Voice (see *Neural* on the pricing page) and Custom Neural Voice (see *Custom Neural* on the pricing page). For more information, see the [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) page.
-Prebuilt neural voices are created from samples that use a 24-khz sample rate. All voices can upsample or downsample to other sample rates when synthesizing.
+Each prebuilt neural voice model is available at 24kHz and high-fidelity 48kHz. Other sample rates can be obtained through upsampling or downsampling when synthesizing.
Please note that the following neural voices are retired.
cognitive-services Long Audio Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/long-audio-api.md
We support flexible audio output formats. You can generate audio outputs per par
> [!NOTE] > The default audio format is riff-16khz-16bit-mono-pcm.
+>
+> The sample rate for long audio voices is 24kHz, not 48kHz. Other sample rates can be obtained through upsampling or downsampling when synthesizing.
* riff-8khz-16bit-mono-pcm * riff-16khz-16bit-mono-pcm
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/releasenotes.md
See below for information about changes to Speech services and resources.
* Speech SDK 1.23.0 and Speech CLI 1.23.0 were released in July 2022. See details below. * Custom speech-to-text container v3.1.0 released in March 2022, with support to get display models.
-* TTS Service July 2022, new voices in Public Preview and new viseme feature blend shapes were released. See details below.
+* TTS Service August 2022, five new voices were released in public preview.
+* TTS Service September 2022, all prebuilt neural voices have been upgraded to high-fidelity voices with a 48kHz sample rate.
## Release notes
cognitive-services Rest Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/rest-text-to-speech.md
If the HTTP status is `200 OK`, the body of the response contains an audio file
## Audio outputs
-The supported streaming and non-streaming audio formats are sent in each request as the `X-Microsoft-OutputFormat` header. Each format incorporates a bit rate and encoding type. The Speech service supports 48-kHz, 24-kHz, 16-kHz, and 8-kHz audio outputs. Prebuilt neural voices are created from samples that use a 24-khz sample rate. All voices can upsample or downsample to other sample rates when synthesizing.
+The supported streaming and non-streaming audio formats are sent in each request as the `X-Microsoft-OutputFormat` header. Each format incorporates a bit rate and encoding type. The Speech service supports 48-kHz, 24-kHz, 16-kHz, and 8-kHz audio outputs. Each prebuilt neural voice model is available at 24kHz and high-fidelity 48kHz.
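As an illustration, here's a hedged curl sketch of a synthesis request that asks for 48-kHz output via the `X-Microsoft-OutputFormat` header; the region, resource key, and voice name are placeholders:

```bash
# Request 48-kHz audio output; substitute your Speech resource region, key, and preferred voice.
curl --request POST "https://eastus.tts.speech.microsoft.com/cognitiveservices/v1" \
  --header "Ocp-Apim-Subscription-Key: <your-speech-resource-key>" \
  --header "Content-Type: application/ssml+xml" \
  --header "X-Microsoft-OutputFormat: riff-48khz-16bit-mono-pcm" \
  --header "User-Agent: curl" \
  --data-raw "<speak version='1.0' xml:lang='en-US'><voice name='en-US-JennyNeural'>Hello, this is a 48 kilohertz sample.</voice></speak>" \
  --output output.wav
```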
#### [Streaming](#tab/streaming)
riff-48khz-16bit-mono-pcm
*** > [!NOTE]
-> en-US-AriaNeural, en-US-JennyNeural and zh-CN-XiaoxiaoNeural are available in public preview in 48Khz output. Other voices support 24khz upsampled to 48khz output.
-
-> [!NOTE]
+> If you select a 48kHz output format, the high-fidelity 48kHz voice model is invoked accordingly. Sample rates other than 24kHz and 48kHz can be obtained through upsampling or downsampling when synthesizing; for example, 44.1kHz is downsampled from 48kHz.
+>
> If your selected voice and output format have different bit rates, the audio is resampled as necessary. You can decode the `ogg-24khz-16bit-mono-opus` format by using the [Opus codec](https://opus-codec.org/downloads/). ## Next steps
cognitive-services Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/text-to-speech.md
Here's more information about neural text-to-speech features in the Speech servi
* **Asynchronous synthesis of long audio**: Use the [Long Audio API](long-audio-api.md) to asynchronously synthesize text-to-speech files longer than 10 minutes (for example, audio books or lectures). Unlike synthesis performed via the Speech SDK or speech-to-text REST API, responses aren't returned in real time. The expectation is that requests are sent asynchronously, responses are polled for, and synthesized audio is downloaded when the service makes it available.
-* **Prebuilt neural voices**: Microsoft neural text-to-speech capability uses deep neural networks to overcome the limits of traditional speech synthesis with regard to stress and intonation in spoken language. Prosody prediction and voice synthesis happen simultaneously, which results in more fluid and natural-sounding outputs. You can use neural voices to:
+* **Prebuilt neural voices**: Microsoft neural text-to-speech capability uses deep neural networks to overcome the limits of traditional speech synthesis with regard to stress and intonation in spoken language. Prosody prediction and voice synthesis happen simultaneously, which results in more fluid and natural-sounding outputs. Each prebuilt neural voice model is available at 24kHz and high-fidelity 48kHz. You can use neural voices to:
- Make interactions with chatbots and voice assistants more natural and engaging. - Convert digital texts such as e-books into audiobooks.
cognitive-services Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/migrate.md
Previously updated : 07/27/2022 Last updated : 09/29/2022
Consider using one of the available quickstart articles to see the latest inform
## How do I migrate to the language service if I am using LUIS?
-If you're using Language Understanding (LUIS), you can [import your LUIS JSON file](../conversational-language-understanding/concepts/backwards-compatibility.md) to the new Conversational language understanding feature.
+If you're using Language Understanding (LUIS), you can [import your LUIS JSON file](../conversational-language-understanding/how-to/migrate-from-luis.md) to the new Conversational language understanding feature.
## How do I migrate to the language service if I am using QnA Maker?
cognitive-services Backwards Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/concepts/backwards-compatibility.md
- Title: Conversational Language Understanding backwards compatibility-
-description: Learn about backwards compatibility between LUIS and Conversational Language Understanding
------ Previously updated : 05/13/2022----
-# Backwards compatibility with LUIS applications
-
-You can reuse some of the content of your existing [LUIS](../../../LUIS/what-is-luis.md) applications in [conversational language understanding](../overview.md). When working with conversational language understanding projects, you can:
-* Create conversational language understanding conversation projects from LUIS application JSON files.
-* Create LUIS applications that can be connected to [orchestration workflow](../../orchestration-workflow/overview.md) projects.
-
-> [!NOTE]
-> This guide assumes you have created a Language resource. If you're getting started with the service, see the [quickstart article](../quickstart.md).
-
-## Import a LUIS application JSON file into Conversational Language Understanding
-
-### [Language Studio](#tab/studio)
-
-To import a LUIS application JSON file, click on the icon next to **Create a new project** and select **Import**. Then select the LUIS file. When you import a new project into Conversational Language Understanding, you can select an exported LUIS application JSON file, and the service will automatically create a project with the currently available features.
--
-### [REST API](#tab/rest-api)
----
-## Supported features
-When you import the LUIS JSON application into conversational language understanding, it will create a **Conversations** project with the following features will be selected:
-
-|**Feature**|**Notes**|
-|: - |: - |
-|Intents|All of your intents will be transferred as conversational language understanding intents with the same names.|
-|ML entities|All of your ML entities will be transferred as conversational language understanding entities with the same names. The labels will be persisted and used to train the Learned component of the entity. Structured ML entities will transfer over the lowest level subentities of the structure as different entities and apply their labels accordingly.|
-|Utterances|All of your LUIS utterances will be transferred as conversational language understanding utterances with their intent and entity labels. Structured ML entity labels will only consider the lowest level subentity labels, and all the top level entity labels will be ignored.|
-|Culture|The primary language of the Conversation project will be the LUIS app culture. If the culture is not supported, the importing will fail. |
-|List entities|All of your list entities will be transferred as conversational language understanding entities with the same names. The normalized values and synonyms of each list will be transferred as keys and synonyms in the list component for the conversational language understanding entity.|
-|Prebuilt entities|All of your prebuilt entities will be transferred as conversational language understanding entities with the same names. The conversational language understanding entity will have the relevant [prebuilt entities](entity-components.md#prebuilt-component) enabled if they are supported. |
-|Required entity features in ML entities|If you had a prebuilt entity or a list entity as a required feature to another ML entity, then the ML entity will be transferred as a conversational language understanding entity with the same name and its labels will apply. The conversational language understanding entity will include the required feature entity as a component. The [overlap method](entity-components.md#entity-options) will be set as ΓÇ£Exact OverlapΓÇ¥ for the conversational language understanding entity.|
-|Non-required entity features in ML entities|If you had a prebuilt entity or a list entity as a non-required feature to another ML entity, then the ML entity will be transferred as a conversational language understanding entity with the same name and its ML labels will apply. If an ML entity was used as a feature to another ML entity, it will not be transferred over.|
-|Roles|All of your roles will be transferred as conversational language understanding entities with the same names. Each role will be its own conversational language understanding entity. The roleΓÇÖs entity type will determine which component is populated for the role. Roles on prebuilt entities will transfer as conversational language understanding entities with the prebuilt entity component enabled and the role labels transferred over to train the Learned component. Roles on list entities will transfer as conversational language understanding entities with the list entity component populated and the role labels transferred over to train the Learned component. Roles on ML entities will be transferred as conversational language understanding entities with their labels applied to train the Learned component of the entity. |
-
-## Unsupported features
-
-When you import the LUIS JSON application into conversational language understanding, certain features will be ignored, but they will not block you from importing the application. The following features will be ignored:
-
-|**Feature**|**Notes**|
-|: - |: - |
-|Application Settings|The settings such as Normalize Punctuation, Normalize Diacritics, and Use All Training Data were meant to improve predictions for intents and entities. The new models in conversational language understanding are not sensitive to small changes such as punctuation and are therefore not available as settings.|
-|Features|Phrase list features and features to intents will all be ignored. Features were meant to introduce semantic understanding for LUIS that conversational language understanding can provide out of the box with its new models.|
-|Patterns|Patterns were used to cover for lack of quality in intent classification. The new models in conversational language understanding are expected to perform better without needing patterns.|
-|`Pattern.Any` Entities|`Pattern.Any` entities were used to cover for lack of quality in ML entity extraction. The new models in conversational language understanding are expected to perform better without needing `Pattern.Any` entities.|
-|Regex Entities| Not currently supported |
-|Structured ML Entities| Not currently supported |
-
-## Use a published LUIS application in orchestration workflow projects
-
-You can only connect to published LUIS applications that are owned by the same Language resource that you use for Conversational Language Understanding. You can change the authoring resource to a Language **S** resource in **West Europe** applications. See the [LUIS documentation](../../../luis/luis-how-to-azure-subscription.md#assign-luis-resources) for steps on assigning a different resource to your LUIS application. You can also export then import the LUIS applications into your Language resource. You must train and publish LUIS applications for them to appear in Conversational Language Understanding when you want to connect them to orchestration projects.
--
-## Next steps
-
-[Conversational Language Understanding overview](../overview.md)
cognitive-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/faq.md
Previously updated : 07/07/2022 Last updated : 09/29/2022
Unlike LUIS, you cannot label the same text as 2 different entities. Learned com
## Can I import a LUIS JSON file into conversational language understanding?
-Yes, you can [import any LUIS application](./concepts/backwards-compatibility.md) JSON file from the latest version in the service.
+Yes, you can [import any LUIS application](./how-to/migrate-from-luis.md) JSON file from the latest version in the service.
## Can I import a LUIS `.LU` file into conversational language understanding?
cognitive-services Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/create-project.md
Previously updated : 06/03/2022 Last updated : 09/29/2022
You can export a Conversational Language Understanding project as a JSON file at
That project can be reimported as a new project. If you import a project with the exact same name, it replaces the project's data with the newly imported project's data.
-If you have an existing LUIS application, you can _import_ the LUIS application JSON to Conversational Language Understanding directly, and it will create a Conversation project with all the pieces that are currently available: Intents, ML entities, and utterances. See [backwards compatibility with LUIS](../concepts/backwards-compatibility.md) for more information.
+If you have an existing LUIS application, you can _import_ the LUIS application JSON to Conversational Language Understanding directly, and it will create a Conversation project with all the pieces that are currently available: Intents, ML entities, and utterances. See [the LUIS migration article](../how-to/migrate-from-luis.md) for more information.
To import a project, click on the arrow button next to **Create a new project** and select **Import**, then select the LUIS or Conversational Language Understanding JSON file.
cognitive-services Migrate From Luis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/migrate-from-luis.md
+
+ Title: Conversational Language Understanding backwards compatibility
+
+description: Learn about backwards compatibility between LUIS and Conversational Language Understanding
++++++ Last updated : 09/08/2022++++
+# Migrate from Language Understanding (LUIS) to conversational language understanding (CLU)
+
+[Conversational language understanding (CLU)](../overview.md) is a cloud-based AI offering in Azure Cognitive Services for Language. It's the newest generation of [Language Understanding (LUIS)](../../../luis/what-is-luis.md) and offers backwards compatibility with previously created LUIS applications. CLU employs state-of-the-art machine learning intelligence to allow users to build a custom natural language understanding model for predicting intents and entities in conversational utterances.
+
+CLU offers the following advantages over LUIS:
+
+- Improved accuracy with state-of-the-art machine learning models for better intent classification and entity extraction.
+- Multilingual support for model learning and training.
+- Ease of integration with different CLU and [custom question answering](../../question-answering/overview.md) projects using [orchestration workflow](../../orchestration-workflow/overview.md).
+- The ability to add testing data within the experience using Language Studio and APIs for model performance evaluation prior to deployment.
+
+To get started, you can [create a new project](../quickstart.md?pivots=language-studio#create-a-conversational-language-understanding-project) or [migrate your LUIS application](#migrate-your-luis-applications).
+
+## Comparison between LUIS and CLU
+
+The following table presents a side-by-side comparison between the features of LUIS and CLU. It also highlights the changes to your LUIS application after migrating to CLU. Click on the linked concept to learn more about the changes.
+
+|LUIS features | CLU features | Post migration |
+|::|:-:|:--:|
+|Machine-learned and Structured ML entities| Learned [entity components](#how-are-entities-different-in-clu) |Machine-learned entities without subentities will be transferred as CLU entities. Structured ML entities will only transfer leaf nodes (lowest level subentities without their own subentities) as entities in CLU. The name of the entity in CLU will be the name of the subentity concatenated with the parent. For example, _Order.Size_|
+|List and prebuilt entities| List and prebuilt [entity components](#how-are-entities-different-in-clu) | List and prebuilt entities will be transferred as entities in CLU with a populated entity component based on the entity type.|
+|Regex and `Pattern.Any` entities| Not currently available | `Pattern.Any` entities will be removed. Regex entities will be removed.|
+|Single culture for each application|[Multilingual models](#how-is-conversational-language-understanding-multilingual) enable multiple languages for each project. |The primary language of your project will be set as your LUIS application culture. Your project can be trained to extend to different languages.|
+|Entity roles |[Roles](#how-are-entity-roles-transferred-to-clu) are no longer needed. | Entity roles will be transferred as entities.|
+|Settings for: normalize punctuation, normalize diacritics, normalize word form, use all training data |[Settings](#how-is-the-accuracy-of-clu-better-than-luis) are no longer needed. |Settings will not be transferred. |
+|Patterns and phrase list features|[Patterns and Phrase list features](#how-is-the-accuracy-of-clu-better-than-luis) are no longer needed. |Patterns and phrase list features will not be transferred. |
+|Entity features| Entity components| List or prebuilt entities added as features to an entity will be transferred as added components to that entity. [Entity features](#how-do-entity-features-get-transferred-in-clu) will not be transferred for intents. |
+|Intents and utterances| Intents and utterances |All intents and utterances will be transferred. Utterances will be labeled with their transferred entities. |
+|Application GUIDs |Project names| A project will be created for each migrating application with the application name. Any special characters in the application names will be removed in CLU.|
+|Versioning| Can only be stored [locally](#how-do-i-manage-versions-in-clu). | A project will be created for the selected application version. |
+|Evaluation using batch testing |Evaluation using testing sets | [Uploading your testing dataset](../how-to/tag-utterances.md#how-to-label-your-utterances) will be required.|
+|Role-Based Access Control (RBAC) for LUIS resources |Role-Based Access Control (RBAC) available for Language resources |Language resource RBAC must be [manually added after migration](../../concepts/role-based-access-control.md). |
+|Single training mode| Standard and advanced [training modes](#how-are-the-training-times-different-in-clu) | Training will be required after application migration. |
+|Two publishing slots and version publishing |Ten deployment slots with custom naming | Deployment will be required after the application's migration and training. |
+|LUIS authoring APIs and SDK support in .NET, Python, Java, and Node.js |[CLU Authoring REST APIs](/rest/api/language/conversational-analysis-authoring). | For more information, see the [quickstart article](../quickstart.md?pivots=rest-api) for information on the CLU authoring APIs. [Refactoring](#do-i-have-to-refactor-my-code-if-i-migrate-my-applications-from-luis-to-clu) will be necessary to use the CLU authoring APIs. |
+|LUIS Runtime APIs and SDK support in .NET, Python, Java, and Node.js |[CLU Runtime APIs](/rest/api/language/conversation-analysis-runtime). CLU Runtime SDK support for [.NET](/dotnet/api/overview/azure/ai.language.conversations-readme-pre?view=azure-dotnet-preview&preserve-view=true) and [Python](/python/api/overview/azure/ai-language-conversations-readme?view=azure-python-preview&preserve-view=true). | See [how to call the API](../how-to/call-api.md#use-the-client-libraries-azure-sdk) for more information. [Refactoring](#do-i-have-to-refactor-my-code-if-i-migrate-my-applications-from-luis-to-clu) will be necessary to use the CLU runtime API response. |
+
+## Migrate your LUIS applications
+
+Use the following steps to migrate your LUIS application using either the LUIS portal or REST API.
+
+# [LUIS portal](#tab/luis-portal)
+
+## Migrate your LUIS applications using the LUIS portal
+
+Follow these steps to begin migration using the [LUIS Portal](https://www.luis.ai/):
+
+1. After logging into the LUIS portal, click the button on the banner at the top of the screen to launch the migration wizard. The migration will only copy your selected LUIS applications to CLU.
+
+ :::image type="content" source="../media/backwards-compatibility/banner.svg" alt-text="A screenshot showing the migration banner in the LUIS portal." lightbox="../media/backwards-compatibility/banner.svg":::
++
+ The migration overview tab provides a brief explanation of conversational language understanding and its benefits. Press Next to proceed.
+
+ :::image type="content" source="../media/backwards-compatibility/migration-overview.svg" alt-text="A screenshot showing the migration overview window." lightbox="../media/backwards-compatibility/migration-overview.svg":::
+
+1. Determine the Language resource that you wish to migrate your LUIS application to. If you have already created your Language resource, select your Azure subscription followed by your Language resource, and then click **Next**. If you don't have a Language resource, click the link to create a new Language resource. Afterwards, select the resource and click **Next**.
+
+ :::image type="content" source="../media/backwards-compatibility/select-resource.svg" alt-text="A screenshot showing the resource selection window." lightbox="../media/backwards-compatibility/select-resource.svg":::
+
+1. Select all your LUIS applications that you want to migrate, and specify each of their versions. Click **Next**. After selecting your application and version, you will be prompted with a message informing you of any features that won't be carried over from your LUIS application.
+
+ > [!NOTE]
+ > Special characters are not supported by conversational language understanding. Any special characters in your selected LUIS application names will be removed in your new migrated applications.
+
+ :::image type="content" source="../media/backwards-compatibility/select-applications.svg" alt-text="A screenshot showing the application selection window." lightbox="../media/backwards-compatibility/select-applications.svg":::
+
+1. Review your Language resource and LUIS applications selections. Click **Finish** to migrate your applications.
+
+1. A popup window will let you track the migration status of your applications. Applications that have not started migrating will have a status of **Not started**. Applications that have begun migrating will have a status of **In progress**, and once they have finished migrating their status will be **Succeeded**. A **Failed** application means that you must repeat the migration process. Once the migration has completed for all applications, select **Done**.
+
+ :::image type="content" source="../media/backwards-compatibility/migration-progress.svg" alt-text="A screenshot showing the application migration progress window." lightbox="../media/backwards-compatibility/migration-progress.svg":::
+
+1. After your applications have migrated, you can perform the following steps:
+
+ * [Train your model](../how-to/train-model.md?tabs=language-studio)
+ * [Deploy your model](../how-to/deploy-model.md?tabs=language-studio)
+ * [Call your deployed model](../how-to/call-api.md?tabs=language-studio)
+
+# [REST API](#tab/rest-api)
+
+## Migrate your LUIS applications using REST APIs
+
+Follow these steps to begin migration programmatically using the CLU Authoring REST APIs:
+
+1. Export your LUIS application in JSON format. You can use the [LUIS Portal](https://www.luis.ai/) to export your applications, or the [LUIS programmatic APIs](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c40).
+
+1. Submit a POST request using the following URL, headers, and JSON body to import your LUIS application into your CLU project. CLU doesn't support names with special characters, so remove any special characters from the project name. A sample request is shown after these steps.
+
+ ### Request URL
+ ```rest
+ {ENDPOINT}/language/authoring/analyze-conversations/projects/{PROJECT-NAME}/:import?api-version={API-VERSION}&format=luis
+ ```
+
+ |Placeholder |Value | Example |
+ ||||
+ |`{ENDPOINT}` | The endpoint for authenticating your API request. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
+ |`{PROJECT-NAME}` | The name for your project. This value is case sensitive. | `myProject` |
+ |`{API-VERSION}` | The version of the API you are calling. The value referenced here is for the latest released [model version](../../concepts/model-lifecycle.md#choose-the-model-version-used-on-your-data). | `2022-05-01` |
+
+ ### Headers
+
+ Use the following header to authenticate your request.
+
+ |Key|Value|
+ |--|--|
+ |`Ocp-Apim-Subscription-Key`| The key to your resource. Used for authenticating your API requests.|
+
+ ### JSON body
+
+ Use the exported LUIS JSON data as your body.
+
+1. After your applications have migrated, you can perform the following steps:
+
+ * [Train your model](../how-to/train-model.md?tabs=language-studio)
+ * [Deploy your model](../how-to/deploy-model.md?tabs=language-studio)
+ * [Call your deployed model](../how-to/call-api.md?tabs=language-studio)
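For reference, here's a minimal curl sketch of the import request described in step 2. The custom subdomain, resource key, project name, and local file name (`luis-app.json`) are placeholders:

```bash
# Import an exported LUIS JSON file into a CLU project (all values below are placeholders).
curl --request POST \
  "https://<your-custom-subdomain>.cognitiveservices.azure.com/language/authoring/analyze-conversations/projects/myProject/:import?api-version=2022-05-01&format=luis" \
  --header "Ocp-Apim-Subscription-Key: <your-language-resource-key>" \
  --header "Content-Type: application/json" \
  --data "@luis-app.json"

# The import runs asynchronously; poll the job status URL returned in the operation-location response header.
```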
+++
+## Frequently asked questions
+
+### Which LUIS JSON version is supported by CLU?
+
+CLU supports the model JSON version 7.0.0. If the JSON format is older, it would need to be imported into LUIS first, then exported from LUIS with the most recent version.
+
+### How are entities different in CLU?
+
+In CLU, a single entity can have multiple entity components, which are different methods for extraction. Those components are then combined together using rules you can define. The available components are: learned (equivalent to ML entities in LUIS), list, and prebuilt.
+
+After migrating, your structured machine-learned leaf nodes and bottom-level subentities will be transferred to the new CLU model while all the parent entities and higher-level entities will be ignored. The name of the entity will be the bottom-level entity's name concatenated with its parent entity.
+
+#### Example:
+
+LUIS entity:
+
+* Pizza Order
+ * Topping
+ * Size
+
+Migrated LUIS entity in CLU:
+
+* Pizza Order.Topping
+* Pizza Order.Size
+
+For more information on entity components, see [Entity components](../concepts/entity-components.md).
+
+### How are entity roles transferred to CLU?
+
+Your roles will be transferred as distinct entities along with their labeled utterances. Each role's entity type will determine which entity component will be populated. For example, a list entity role will be transferred as an entity with the same name as the role, with a populated list component.
+
+### How is conversational language understanding multilingual?
+
+Conversational language understanding projects accept utterances in different languages. Furthermore, you can train your model in one language and extend it to predict in other languages.
+
+#### Example:
+
+Training utterance (English): *How are you?*
+
+Labeled intent: Greeting
+
+Runtime utterance (French): *Comment ça va?*
+
+Predicted intent: Greeting
+
+### How are entity confidence scores different in CLU?
+
+Any extracted entity has a 100% confidence score and therefore entity confidence scores should not be used to make decisions between entities.
+
+### How is the accuracy of CLU better than LUIS?
+
+CLU uses state-of-the-art models to enhance machine learning performance of different models of intent classification and entity extraction.
+
+These models are insensitive to minor variations, removing the need for the following settings: _Normalize punctuation_, _normalize diacritics_, _normalize word form_, and _use all training data_.
+
+Additionally, the new models do not support phrase list features as they no longer require supplementary information from the user to provide semantically similar words for better accuracy. Patterns were also used to provide improved intent classification using rule-based matching techniques that are not necessary in the new model paradigm.
+
+### How do I manage versions in CLU?
+
+Although CLU does not offer versioning, you can export your CLU projects using [Language Studio](https://language.cognitive.azure.com/home) or [programmatically](../how-to/fail-over.md#export-your-primary-project-assets) and store different versions of the assets locally.
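As a rough sketch of the programmatic route, the authoring API exposes an export operation alongside the import shown earlier. The query parameters shown here are assumptions, so check the linked authoring API reference before relying on them:

```bash
# Export the project assets so you can keep versioned copies locally (placeholder values).
# The stringIndexType parameter value is an assumption based on the authoring API reference.
curl --request POST \
  "https://<your-custom-subdomain>.cognitiveservices.azure.com/language/authoring/analyze-conversations/projects/myProject/:export?api-version=2022-05-01&stringIndexType=Utf16CodeUnit" \
  --header "Ocp-Apim-Subscription-Key: <your-language-resource-key>"

# The export is an asynchronous job; its result includes a URL for downloading the exported JSON.
```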
+
+### Why is CLU classification different from LUIS? How does None classification work?
+
+CLU presents a different approach to training models by using multi-classification as opposed to binary classification. As a result, the interpretation of scores is different and also differs across training options. While you are likely to achieve better results, you have to observe the difference in scores and determine a new threshold for accepting intent predictions. You can easily add a confidence score threshold for the [None intent](../concepts/none-intent.md) in your project settings. This will return *None* as the top intent if the top intent did not exceed the confidence score threshold provided.
+
+### Do I need more data for CLU models than LUIS?
+
+The new CLU models have a better semantic understanding of language than LUIS, which in turn helps models generalize with a significant reduction of data. While you shouldn't aim to reduce the amount of data that you have, you should expect better performance and resilience to variations and synonyms in CLU compared to LUIS.
+
+### If I don't migrate my LUIS apps, will they be deleted?
+
+Your existing LUIS applications will be available until October 1, 2025. After that time you will no longer be able to use those applications, the service endpoints will no longer function, and the applications will be permanently deleted.
+
+### Are .LU files supported on CLU?
+
+Only JSON format is supported by CLU. You can import your .LU files to LUIS and export them in JSON format, or you can follow the migration steps above for your application.
+
+### What are the service limits of CLU?
+
+See the [service limits](../service-limits.md) article for more information.
+
+### Do I have to refactor my code if I migrate my applications from LUIS to CLU?
+
+The API objects of CLU applications are different from LUIS and therefore code refactoring will be necessary.
+
+If you are using the LUIS [programmatic](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c40) and [runtime](https://westus.dev.cognitive.microsoft.com/docs/services/luis-endpoint-api-v3-0/operations/5cb0a9459a1fe8fa44c28dd8) APIs, you can replace them with their equivalent APIs.
+
+[CLU authoring APIs](/rest/api/language/conversational-analysis-authoring): Instead of LUIS's specific CRUD APIs for individual actions such as _add utterance_, _delete entity_, and _rename intent_, CLU offers an [import API](/rest/api/language/conversational-analysis-authoring/import) that replaces the full content of a project using the same name. If your service used LUIS programmatic APIs to provide a platform for other customers, you must consider this new design paradigm. All other APIs such as: _listing projects_, _training_, _deploying_, and _deleting_ are available. APIs for actions such as _importing_ and _deploying_ are asynchronous operations instead of synchronous as they were in LUIS.
+
+[CLU runtime APIs](/rest/api/language/conversation-analysis-runtime/analyze-conversation): The new API request and response includes many of the same parameters such as: _query_, _prediction_, _top intent_, _intents_, _entities_, and their values. The CLU response object offers a more straightforward approach. Entity predictions are provided as they are within the utterance text, and any additional information such as resolution or list keys are provided in extra parameters called `extraInformation` and `resolution`. See the [reference documentation](/rest/api/language/conversation-analysis-runtime/analyze-conversation) for more information on the API response structure.
+
+You can use the [.NET](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.Language.Conversations_1.0.0-beta.3/sdk/cognitivelanguage/Azure.AI.Language.Conversations/samples/) or [Python](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-language-conversations_1.1.0b1/sdk/cognitivelanguage/azure-ai-language-conversations/samples/README.md) CLU runtime SDK to replace the LUIS runtime SDK. There is currently no authoring SDK available for CLU.
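To illustrate the new runtime request shape, here's a hedged curl sketch of the `analyze-conversations` call; the subdomain, key, project name, deployment name, and utterance are placeholders, and the fields described above (`prediction`, intents, entities) come back in the JSON response body:

```bash
# Query a deployed CLU model (placeholder endpoint, key, project, and deployment names).
curl --request POST \
  "https://<your-custom-subdomain>.cognitiveservices.azure.com/language/:analyze-conversations?api-version=2022-05-01" \
  --header "Ocp-Apim-Subscription-Key: <your-language-resource-key>" \
  --header "Content-Type: application/json" \
  --data '{
    "kind": "Conversation",
    "analysisInput": {
      "conversationItem": {
        "id": "1",
        "participantId": "1",
        "text": "Order a large pepperoni pizza"
      }
    },
    "parameters": {
      "projectName": "myProject",
      "deploymentName": "production"
    }
  }'
```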
+
+### How are the training times different in CLU?
+
+CLU offers standard training, which trains and learns in English and is comparable to the training time of LUIS. It also offers advanced training, which takes a considerably longer duration as it extends the training to all other [supported languages](../language-support.md).
+
+### How can I link subentities to parent entities from my LUIS application in CLU?
+
+One way to implement the concept of subentities in CLU is to combine the subentities into different entity components within the same entity.
+
+#### Example:
+
+LUIS Implementation:
+
+* Pizza Order (entity)
+ * Size (subentity)
+ * Quantity (subentity)
+
+CLU Implementation:
+
+* Pizza Order (entity)
+ * Size (list entity component: small, medium, large)
+ * Quantity (prebuilt entity component: number)
+
+In CLU, you would label the entire span for _Pizza Order_ inclusive of the size and quantity, which would return the pizza order with a list key for size, and a number value for quantity in the same entity object.
+
+For more complex problems where entities contain several levels of depth, you can create a project for each couple of levels of depth in the entity structure. This gives you the option to:
+1. Pass the utterance to each project.
+1. Combine the analyses of each project in the stage that follows CLU.
+
+For a detailed example on this concept, check out the pizza bot sample available on [GitHub](https://github.com/Azure-Samples/cognitive-service-language-samples/tree/main/CoreBotWithCLU).
+
+### How do entity features get transferred in CLU?
+
+Entities used as features for intents will not be transferred. Entities used as features for other entities will populate the relevant component of the entity. For example, if a list entity named _SizeList_ was used as a feature to a machine-learned entity named _Size_, then the _Size_ entity will be transferred to CLU with the list values from _SizeList_ added to its list component.
+
+### How will my LUIS applications be named in CLU after migration?
+
+Any special characters in the LUIS application name will be removed. If the cleaned name is longer than 50 characters, the extra characters will be removed. If the name is empty after removing special characters (for example, if the LUIS application name was `@@`), the new name will be _untitled_. If there is already a conversational language understanding project with the same name, the migrated LUIS application will be appended with `_1` for the first duplicate, and the number increases by 1 for each additional duplicate. If the new name is already 50 characters long and needs to be renamed, the last one or two characters are removed so that the appended number still fits within the 50-character limit.
+
+## Migration from LUIS Q&A
+
+If you have any questions that were unanswered in this article, consider leaving your questions at our [Microsoft Q&A thread](https://aka.ms/luis-migration-qna-thread).
+
+## Next steps
+* [Quickstart: create a CLU project](../quickstart.md)
+* [CLU language support](../language-support.md)
+* [CLU FAQ](../faq.md)
cognitive-services Integrate Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/key-phrase-extraction/tutorials/integrate-power-bi.md
In this tutorial, you'll learn how to:
## Load customer data
-To get started, open Power BI Desktop and load the comma-separated value (CSV) file `FabrikamComments.csv` that you downloaded in [Prerequisites](#prerequisites). This file represents a day's worth of hypothetical activity in a fictional small company's support forum.
+To get started, open Power BI Desktop and load the comma-separated value (CSV) file that you downloaded as part of the [prerequisites](#prerequisites). This file represents a day's worth of hypothetical activity in a fictional small company's support forum.
> [!NOTE]
> Power BI can use data from a wide variety of web-based sources, such as SQL databases. See the [Power Query documentation](/power-query/connectors/) for more information.
In the main Power BI Desktop window, select the **Home** ribbon. In the **Extern
![The Get Data button](../media/tutorials/power-bi/get-data-button.png)
-The Open dialog appears. Navigate to your Downloads folder, or to the folder where you downloaded the `FabrikamComments.csv` file. Click `FabrikamComments.csv`, then the **Open** button. The CSV import dialog appears.
+The Open dialog appears. Navigate to your Downloads folder, or to the folder where you downloaded the CSV file. Select the file, and then select **Open**. The CSV import dialog appears.
![The CSV Import dialog](../media/tutorials/power-bi/csv-import.png)
communication-services Emergency Calling Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/emergency-calling-concept.md
Emergency dialing is automatically enabled for all users of the Azure Communicat
The Emergency service is temporarily free to use for Azure Communication Services customers, within reasonable use; however, billing for the service will be enabled in 2022. Calls to 911 are capped at 10 concurrent calls per Azure resource.
+## Emergency calling with Azure Communication Services direct routing
+
+An emergency call is a regular call from a direct routing perspective. If you want to implement emergency calling with Azure Communication Services direct routing, make sure that there's a voice routing rule for your emergency number (911, 112, and so on). You also need to make sure that your carrier processes emergency calls properly.
+You can also use a purchased number as the caller ID for direct routing calls. In that case, if there's no voice routing rule for the emergency number, the call falls back to the Microsoft network and is treated as a regular emergency call. Learn more about [voice routing fall back](./direct-routing-provisioning.md#voice-routing-considerations).
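
As a sketch only, assuming the SIP routing client in the `azure-communication-phonenumbers` Python package (verify the current class and method names in the SDK reference before relying on this), an emergency voice route might be configured like this:

```python
# Hypothetical sketch: define a voice route that matches emergency numbers (911, 112)
# and sends them to your SBC. The connection string, SBC FQDN, port, route name, and
# number pattern are placeholders.
from azure.communication.phonenumbers.siprouting import (
    SipRoutingClient,
    SipTrunk,
    SipTrunkRoute,
)

client = SipRoutingClient.from_connection_string("<acs-connection-string>")

# Register the SBC and add a route whose pattern matches the emergency numbers.
client.set_trunks([SipTrunk(fqdn="sbc.contoso.com", sip_signaling_port=5061)])
client.set_routes([
    SipTrunkRoute(
        name="EmergencyRoute",
        description="Route 911/112 calls to the carrier through the SBC",
        number_pattern=r"^\+?(911|112)$",
        trunks=["sbc.contoso.com"],
    )
])
```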
## Next steps

### Quickstarts
communication-services Inbound Calling Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/inbound-calling-capabilities.md
Inbound PSTN calling is currently supported in GA for Dynamics Omnichannel. You
**Inbound calling with Dynamics 365 Omnichannel (OC)**
- Supported in General Availability, to set up inbound calling for Dynamics 365 OC with direct routing or Voice Calling (PSTN) follow [these instructions](/dynamics365/customer-service/voice-channel-inbound-calling)
+Supported in General Availability. To set up inbound calling for Dynamics 365 OC with direct routing or Voice Calling (PSTN), follow [these instructions](/dynamics365/customer-service/voice-channel-inbound-calling)
- **Inbound calling with Power Virtual Agents**
+**Inbound calling with Power Virtual Agents**
- *Coming soon*
+*Coming soon*
-**Inbound calling with ACS Client Calling SDK**
+**Inbound calling with ACS Call Automation SDK**
-*Coming soon*
+[Available in private preview](../voice-video-calling/call-automation.md)
**Inbound calling with Azure Bot Framework**
communication-services Known Limitations Acs Telephony https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/known-limitations-acs-telephony.md
+
+ Title: Azure direct routing known limitations - Azure Communication Services
+description: Known limitations of direct routing in Azure Communication Services.
+Last updated: 09/29/2022
+# Known limitations in Azure telephony
+
+This article provides information about limitations and known issues related to telephony in Azure Communication Services.
+
+## Azure Communication Services direct routing known limitations
+
+- Anonymous calling isn't supported
+  - Will be fixed in the GA release
+- A different set of Media Processors (MPs) is used with different IP addresses. Currently, [any Azure IP address](./direct-routing-infrastructure.md#media-traffic-ip-and-port-ranges) can be used for the media connection between the Azure MP and the Session Border Controller (SBC).
+  - Will be fixed in the GA release
+- The Azure Communication Services SBC Fully Qualified Domain Name (FQDN) must be different from the Teams Direct Routing SBC FQDN
+- Wildcard SBC certificates require an extra workaround. Contact Azure support for details.
+  - Will be fixed in the GA release
+- Media bypass/optimization isn't supported
+- No indication of SBC connection status or details in the Azure portal
+  - Will be fixed in the GA release
+- Azure Communication Services direct routing isn't available in Government Clouds
+- Multi-tenant trunks aren't supported
+- Location-based routing isn't supported
+- No quality dashboard is available for customers
+- Enhanced 911 isn't supported
+- PSTN numbers are missing from Call Summary logs
+
+## Next steps
+
+### Conceptual documentation
+
+- [Phone number types in Azure Communication Services](./plan-solution.md)
+- [Plan for Azure direct routing](./direct-routing-infrastructure.md)
+- [Pair the Session Border Controller and configure voice routing](./direct-routing-provisioning.md)
+- [Pricing](../pricing.md)
+
+### Quickstarts
+
+- [Call to Phone](../../quickstarts/telephony/pstn-call.md)
connectors Connectors Create Api Azureblobstorage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-azureblobstorage.md
Title: Connect to Azure Blob Storage
-description: Create workflows that manage blobs in Azure storage accounts using Azure Logic Apps.
+ Title: Connect to Azure Blob Storage from workflows
+description: Connect to Azure Blob Storage from workflows using Azure Logic Apps.
ms.suite: integration
Previously updated: 08/19/2022
Last updated: 09/14/2022
tags: connectors
-# Create and manage blobs in Azure Blob Storage by using Azure Logic Apps
+# Connect to Azure Blob Storage from workflows in Azure Logic Apps
-From your workflow in Azure Logic Apps, you can access and manage files stored as blobs in your Azure storage account by using the [Azure Blob Storage connector](/connectors/azureblobconnector/). This connector provides triggers and actions that your workflow can use for blob operations. You can then automate tasks to manage files in your storage account. For example, [connector actions](/connectors/azureblobconnector/#actions) include checking, deleting, reading, and uploading blobs. The [available trigger](/connectors/azureblobconnector/#triggers) fires when a blob is added or modified.
-You can connect to Blob Storage from both **Logic App (Consumption)** and **Logic App (Standard)** resource types. You can use the connector with logic app workflows in multi-tenant Azure Logic Apps, single-tenant Azure Logic Apps, and the integration service environment (ISE). With **Logic App (Standard)**, you can use either the *built-in* **Azure Blob** operations or the **Azure Blob Storage** managed connector operations.
+This article shows how to access your Azure Blob Storage account and container from a workflow in Azure Logic Apps using the Azure Blob Storage connector. This connector provides triggers and actions that your workflow can use for blob operations. You can then create automated workflows that run when triggered by events in your storage container or in other systems, and run actions to work with data in your storage container.
-## Prerequisites
+For example, you can access and manage files stored as blobs in your Azure storage account.
-- An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+You can connect to Azure Blob Storage from a workflow in **Logic App (Consumption)** and **Logic App (Standard)** resource types. You can use the connector with logic app workflows in multi-tenant Azure Logic Apps, single-tenant Azure Logic Apps, and the integration service environment (ISE). With **Logic App (Standard)**, you can use either the **Azure Blob** *built-in* connector operations or the **Azure Blob Storage** managed connector operations.
-- An [Azure storage account and storage container](../storage/blobs/storage-quickstart-blobs-portal.md)
+## Connector technical reference
-- A logic app workflow from which you want to access your Azure Storage account. If you want to start your workflow with a Blob trigger, you need a [blank logic app workflow](../logic-apps/quickstart-create-first-logic-app-workflow.md).
+The Azure Blob Storage connector has different versions, based on [logic app type and host environment](../logic-apps/logic-apps-overview.md#resource-environment-differences).
-## Limits
+| Logic app | Environment | Connector version |
+|--|-|-|
+| **Consumption** | Multi-tenant Azure Logic Apps | Managed connector (Standard class). For more information, review the following documentation: <br><br>- [Azure Blob Storage managed connector reference](/connectors/azureblobconnector) <br>- [Managed connectors in Azure Logic Apps](managed.md) |
+| **Consumption** | Integration service environment (ISE) | Managed connector (Standard class) and ISE version, which has different message limits than the Standard class. For more information, review the following documentation: <br><br>- [Azure Blob Storage managed connector reference](/connectors/azureblobconnector) <br>- [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) <br>- [Managed connectors in Azure Logic Apps](managed.md) |
+| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | Managed connector (Azure-hosted) and built-in connector, which is [service provider based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). The built-in version differs in the following ways: <br><br>- The built-in version connects directly to your Azure Storage account requiring only a connection string. <br><br>- The built-in version can directly access Azure virtual networks. <br><br>For more information, review the following documentation: <br><br>- [Azure Blob Storage managed connector reference](/connectors/azureblobconnector) <br>- [Azure Blob built-in connector reference](#built-in-connector-operations) section later in this article <br>- [Built-in connectors in Azure Logic Apps](built-in.md) |
+
+## Limitations
- For logic app workflows running in an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md), this connector's ISE-labeled version uses the [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) instead. -- By default, Blob actions can read or write files that are *50 MB or smaller*. To handle files larger than 50 MB but up to 1024 MB, Blob actions support [message chunking](../logic-apps/logic-apps-handle-large-messages.md). The [**Get blob content** action](/connectors/azureblobconnector/#get-blob-content) implicitly uses chunking.
+- By default, Azure Blob Storage managed connector actions can read or write files that are *50 MB or smaller*. To handle files larger than 50 MB but up to 1024 MB, Blob actions support [message chunking](../logic-apps/logic-apps-handle-large-messages.md). The [**Get blob content** action](/connectors/azureblobconnector/#get-blob-content) implicitly uses chunking.
+
+- Azure Blob Storage triggers don't support chunking. When a trigger requests file content, the trigger selects only files that are 50 MB or smaller. To get files larger than 50 MB, follow this pattern:
-- Blob triggers don't support chunking. When a trigger requests file content, the trigger selects only files that are 50 MB or smaller. To get files larger than 50 MB, follow this pattern:
+ 1. Use a Blob trigger that returns file properties, such as [**When a blob is added or modified (properties only)**](/connectors/azureblobconnector/#when-a-blob-is-added-or-modified-(properties-only)).
+
+ 1. Follow the trigger with the Azure Blob Storage managed connector action named [**Get blob content**](/connectors/azureblobconnector/#get-blob-content), which reads the complete file and implicitly uses chunking.
+
+## Prerequisites
- - Use a Blob trigger that returns file properties, such as [**When a blob is added or modified (properties only)**](/connectors/azureblobconnector/#when-a-blob-is-added-or-modified-(properties-only)).
+- An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- - Follow the trigger with the Blob action named [**Get blob content**](/connectors/azureblobconnector/#get-blob-content), which reads the complete file and implicitly uses chunking.
+- An [Azure storage account and blob container](../storage/blobs/storage-quickstart-blobs-portal.md)
-## Connector reference
+- A logic app workflow from which you want to access your Azure Storage account. If you want to start your workflow with an Azure Blob Storage trigger, you need a [blank logic app workflow](../logic-apps/quickstart-create-first-logic-app-workflow.md).
-For more technical details about this connector, such as triggers, actions, and limits, review the [connector's reference page](/connectors/azureblobconnector/).
+- The logic app workflow where you connect to your Azure Storage account. To start your workflow with an Azure Blob trigger, you have to start with a blank workflow. To use an Azure Blob action in your workflow, start your workflow with any trigger.
<a name="add-trigger"></a>

## Add a Blob trigger
-In Azure Logic Apps, every workflow must start with a [trigger](../logic-apps/logic-apps-overview.md#logic-app-concepts), which fires when a specific event happens or when a specific condition is met.
+A Consumption logic app workflow can use only the Azure Blob Storage managed connector. However, a Standard logic app workflow can use both the Azure Blob Storage managed connector and the Azure Blob built-in connector. Although both connector versions have only one Blob trigger, the trigger name differs as follows, based on whether you're working with a Consumption or Standard workflow:
-Only one Blob trigger exists and has either of the following names, based on whether you're working with a Consumption or Standard logic app workflow:
+| Logic app | Connector version | Trigger name | Description |
+|--|-|--|-|
+| Consumption | Managed connector only | **When a blob is added or modified (properties only)** | The trigger fires when a blob's properties are added or updated in your storage container's root folder. When you set up the managed trigger, the managed version ignores existing blobs in your storage container. |
+| Standard | - Built-in connector <br><br>- Managed connector | - Built-in: **When a blob is added or updated** <br><br>- Managed: **When a blob is added or modified (properties only)** | - Built-in: The trigger fires when a blob is added or updated in your storage container, and fires for any nested folders in your storage container, not just the root folder. When you set up the built-in trigger, the built-in version processes all existing blobs in your storage container. <br><br>- Managed: The trigger fires when a blob's properties are added or updated in your storage container's root folder. When you set up the managed trigger, the managed version ignores existing blobs in your storage container. |
-| Logic app type | Trigger name | Description |
-|-|--|-|
-| Consumption | Managed connector only: **When a blob is added or modified (properties only)** | The trigger fires when a blob's properties are added or updated in your storage container's root folder. |
-| Standard | - Built-in: **When a blob is Added or Modified in Azure Storage** <br><br>- Managed connector: **When a blob is added or modified (properties only)** | - Built-in: The trigger fires when a blob is added or updated in your storage container. The trigger also fires for any nested folders in your storage container, not just the root folder. <br><br>- Managed connector: The trigger fires when a blob's properties are added or updated in your storage container's root folder. |
-||||
+The following steps use the Azure portal, but with the appropriate Azure Logic Apps extension, you can also use the following tools to create logic app workflows:
-> [!IMPORTANT]
-> When you set up the Blob trigger, the built-in version processes all existing blobs in the container, while the managed version ignores existing blobs in the container.
+- Consumption logic app workflows: [Visual Studio](../logic-apps/quickstart-create-logic-apps-with-visual-studio.md) or [Visual Studio Code](../logic-apps/quickstart-create-logic-apps-visual-studio-code.md)
-When the trigger fires each time, Azure Logic Apps creates a logic app instance and starts running the workflow.
+- Standard logic app workflows: [Visual Studio Code](../logic-apps/create-single-tenant-workflows-visual-studio-code.md)
### [Consumption](#tab/consumption)
-To add a Blob trigger to a logic app workflow in multi-tenant Azure Logic Apps, follow these steps:
+1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
-1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
+1. On the designer, under the search box, select **Standard**. In the search box, enter **Azure blob**.
+
+1. From the triggers list, select the trigger that you want.
+
+ This example continues with the trigger named **When a blob is added or modified (properties only)**.
+
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/consumption-add-trigger.png" alt-text="Screenshot showing Azure portal, Consumption workflow designer, and Azure Blob Storage trigger selected.":::
+
+1. If prompted, provide the following information for your connection to your storage account. When you're done, select **Create**.
+
+ | Property | Required | Description |
+ |-|-|-|
+ | **Connection name** | Yes | A name for your connection |
+ | **Authentication Type** | Yes | The [authentication type](../storage/common/authorize-data-access.md) for your storage account. For more information, review [Authentication types for triggers and actions that support authentication - Secure access and data](../logic-apps/logic-apps-securing-a-logic-app.md#authentication-types-supported-triggers-actions). |
-1. Under the designer search box, make sure that **All** is selected. In the search box, enter **Azure blob**. From the **Triggers** list, select the trigger named **When a blob is added or modified (properties only)**.
+ For example, this connection uses access key authentication and provides the access key value for the storage account along with the following property values:
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/consumption-trigger-add.png" alt-text="Screenshot showing Azure portal and workflow designer with a Consumption logic app and the trigger named 'When a blob is added or modified (properties only)' selected.":::
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Azure Storage Account name** | Yes, <br>but only for access key authentication | <*storage-account-name*> | The name for the Azure storage account where your blob container exists. <br><br>**Note**: To find the storage account name, open your storage account resource in the Azure portal. In the resource menu, under **Security + networking**, select **Access keys**. Under **Storage account name**, copy and save the name. |
+ | **Azure Storage Account Access Key** | Yes, <br>but only for access key authentication | <*storage-account-access-key*> | The access key for your Azure storage account. <br><br>**Note**: To find the access key, open your storage account resource in the Azure portal. In the resource menu, under **Security + networking**, select **Access keys** > **key1** > **Show**. Copy and save the primary key value. |
-1. If you're prompted for connection details, [create a connection to your Azure Blob Storage account](#connect-blob-storage-account).
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/consumption-trigger-create-connection.png" alt-text="Screenshot showing Consumption workflow, Azure Blob Storage trigger, and example connection information.":::
-1. Provide the necessary information for the trigger.
+1. After the trigger information box appears, provide the necessary information.
- 1. For the **Container** property value, select the folder icon to browse for your blob storage container. Or, enter the path manually using the syntax **/<*container-name*>**, for example:
+ For the **Container** property value, select the folder icon to browse for your blob container. Or, enter the path manually using the syntax **/<*container-name*>**, for example:
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/consumption-trigger-configure.png" alt-text="Screenshot showing Azure Blob trigger with parameters configuration.":::
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/consumption-trigger-information.png" alt-text="Screenshot showing Consumption workflow with Azure Blob Storage trigger, and example trigger information.":::
- 1. Configure other trigger settings as needed.
+1. To add other properties available for this trigger, open the **Add new parameter** list, and select the properties that you want.
-1. Add one or more actions to your workflow.
+ For more information, review [Azure Blob Storage managed connector trigger properties](/connectors/azureblobconnector/#when-a-blob-is-added-or-modified-(properties-only)-(v2)).
-1. On the designer toolbar, select **Save** to save your changes.
+1. Add any other actions that your workflow requires.
+
+1. When you're done, save your workflow. On the designer toolbar, select **Save**.
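
To verify that the trigger fires, you can add or update a blob in the container. For example, here's a minimal sketch with the `azure-storage-blob` Python package; the connection string, container name, and blob name are placeholders.

```python
# Upload (or overwrite) a blob so that the "When a blob is added or modified
# (properties only)" trigger has something new to pick up in the container's
# root folder. Placeholders only - use your own storage account values.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-account-connection-string>")
container = service.get_container_client("<container-name>")

container.upload_blob("trigger-test.txt", b"hello from a trigger test", overwrite=True)
```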
### [Standard](#tab/standard)
-To add a Blob trigger to a logic app workflow in single-tenant Azure Logic Apps, follow these steps:
+The steps to add and use a Blob trigger differ based on whether you want to use the built-in connector or the managed, Azure-hosted connector.
-1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
+- [**Built-in trigger**](#built-in-connector-trigger): Describes the steps to add the built-in trigger.
+
+- [**Managed trigger**](#managed-connector-trigger): Describes the steps to add the managed trigger.
+
+<a name="built-in-connector-trigger"></a>
+
+#### Built-in connector trigger
-1. On the designer, select **Choose an operation**.
+1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
-1. In the **Add a trigger** pane that opens, under the **Choose an operation** search box, you can select either **Built-in** to find the **Azure Blob** *built-in* trigger, or select **Azure** to find the **Azure Blob Storage** *managed connector* trigger.
+1. On the designer, select **Choose an operation**. Under the **Choose an operation** search box, select **Built-in**.
- This example uses the built-in **Azure Blob** trigger.
+1. In the search box, enter **Azure blob**. From the triggers list, select the trigger that you want.
-1. Under the search box, select **Built-in**. In the search box, enter **Azure blob**.
+ This example continues with the trigger named **When a blob is added or updated**.
-1. From the **Triggers** list, select the built-in trigger named **When a blob is Added or Modified in Azure Storage**.
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-add-built-in-trigger.png" alt-text="Screenshot showing Azure portal, Standard workflow designer, and Azure Blob built-in trigger selected.":::
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-trigger-add.png" alt-text="Screenshot showing Azure portal, workflow designer, Standard logic app workflow and Azure Blob trigger selected.":::
+1. If prompted, provide the following information for your connection to your storage account. When you're done, select **Create**.
-1. If you're prompted for connection details, [create a connection to your Azure Storage account](#connect-blob-storage-account).
+ | Property | Required | Description |
+ |-|-|-|
+ | **Connection name** | Yes | A name for your connection |
+ | **Authentication type** | Yes | The [authentication type](../storage/common/authorize-data-access.md) for your storage account. For more information, review [Authentication types for triggers and actions that support authentication - Secure access and data](../logic-apps/logic-apps-securing-a-logic-app.md#authentication-types-supported-triggers-actions). |
+
+ For example, this connection uses connection string authentication and provides the connection string value for the storage account:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Storage account connection string** | Yes, <br>but only for connection string authentication | <*storage-account-connection-string*> | The connection string for your Azure storage account. <br><br>**Note**: To find the connection string, open your storage account resource in the Azure portal. In the resource menu, under **Security + networking**, select **Access keys** > **key1** > **Connection string** > **Show**. Copy and save the connection string for the primary key. |
-1. Provide the necessary information for the trigger. On the **Parameters** tab, in the **Blob Path** property, enter the name of the folder that you want to monitor.
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-built-in-trigger-create-connection.png" alt-text="Screenshot showing Standard workflow, Azure Blob built-in trigger, and example connection information.":::
+
+1. After the trigger information box appears, provide the necessary information.
+
+ For the **Blob path** property, enter the name of the folder that you want to monitor.
1. To find the folder name, open your storage account in the Azure portal.
To add a Blob trigger to a logic app workflow in single-tenant Azure Logic Apps,
1. Select your blob container. Find the name for the folder that you want to monitor.
- 1. Return to the workflow designer. In the trigger's **Blob Path** property, enter the path for the container, folder, or blob, based on whether you're checking for new blobs or changes to an existing blob. The syntax varies based on the check that you want to run and any filtering that you want to use:
+ 1. Return to the designer. In the **Blob path** property, enter the path for the container, folder, or blob, based on whether you're checking for new blobs or changes to an existing blob. The syntax varies based on the check that you want to run and any filtering that you want to use:
| Task | Path syntax |
|------|-------------|
| Check the root folder for changes to any blobs with names starting with a specific string, for example, **Sample-**. | **<*container-name*>/Sample-{name}** <br><br>**Important**: Make sure that you use **{name}** as a literal. |
| Check a subfolder for a newly added blob. | **<*container-name*>/<*subfolder*>/{blobname}.{blobextension}** <br><br>**Important**: Make sure that you use **{blobname}.{blobextension}** as a literal. |
| Check a subfolder for changes to a specific blob. | **<*container-name*>/<*subfolder*>/<*blob-name*>.<*blob-extension*>** |
- |||
For more syntax and filtering options, review [Azure Blob storage trigger for Azure Functions](../azure-functions/functions-bindings-storage-blob-trigger.md#blob-name-patterns). The following example shows a trigger setup that checks the root folder for a newly added blob:
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-trigger-root-folder.png" alt-text="Screenshot showing the workflow designer for a Standard logic app workflow with an Azure Blob trigger set up for the root folder.":::
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-built-in-trigger-root-folder.png" alt-text="Screenshot showing Standard workflow with Azure Blob built-in trigger set up for root folder.":::
The following example shows a trigger setup that checks a subfolder for changes to an existing blob:
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-trigger-sub-folder-existing-blob.png" alt-text="Screenshot showing the workflow designer for a Standard logic app workflow with an Azure Blob trigger set up for a subfolder and specific blob.":::
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-built-in-trigger-subfolder-existing-blob.png" alt-text="Screenshot showing Standard workflow with Azure Blob built-in trigger set up for a subfolder and specific blob.":::
+
+1. Add any other actions that your workflow requires.
-1. Continue creating your workflow by adding one or more actions.
+1. When you're done, save your workflow. On the designer toolbar, select **Save**.
-1. On the designer toolbar, select **Save** to save your changes.
+<a name="managed-connector-trigger"></a>
+
+#### Managed connector trigger
+
+1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
+
+1. On the designer, select **Choose an operation**. Under the search box, select **Azure**.
+
+1. In the search box, enter **Azure blob**.
+
+1. From the triggers list, select the trigger that you want.
+
+ This example continues with the trigger named **When a blob is added or modified (properties only)**.
+
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-add-managed-trigger.png" alt-text="Screenshot showing Azure portal, Standard logic app workflow designer, and Azure Blob Storage managed trigger selected.":::
+
+1. If prompted, provide the following information for your connection to your storage account. When you're done, select **Create**.
+
+ | Property | Required | Description |
+ |-|-|-|
+ | **Connection name** | Yes | A name for your connection |
+ | **Authentication Type** | Yes | The [authentication type](../storage/common/authorize-data-access.md) for your storage account. For more information, review [Authentication types for triggers and actions that support authentication - Secure access and data](../logic-apps/logic-apps-securing-a-logic-app.md#authentication-types-supported-triggers-actions). |
+
+ For example, this connection uses access key authentication and provides the access key value for the storage account along with the following property values:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Azure Storage Account name** | Yes, <br>but only for access key authentication | <*storage-account-name*> | The name for the Azure storage account where your blob container exists. <br><br>**Note**: To find the storage account name, open your storage account resource in the Azure portal. In the resource menu, under **Security + networking**, select **Access keys**. Under **Storage account name**, copy and save the name. |
+ | **Azure Storage Account Access Key** | Yes, <br>but only for access key authentication | <*storage-account-access-key*> | The access key for your Azure storage account. <br><br>**Note**: To find the access key, open your storage account resource in the Azure portal. In the resource menu, under **Security + networking**, select **Access keys** > **key1** > **Show**. Copy and save the primary key value. |
+
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-managed-trigger-create-connection.png" alt-text="Screenshot showing Standard workflow, Azure Blob managed trigger, and example connection information.":::
+
+1. After the trigger information box appears, provide the necessary information.
+
+ For the **Container** property value, select the folder icon to browse for your blob storage container. Or, enter the path manually using the syntax **/<*container-name*>**, for example:
+
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-managed-trigger.png" alt-text="Screenshot showing Azure Blob Storage managed trigger with parameters configuration.":::
+
+1. To add other properties available for this trigger, open the **Add new parameter** list and select those properties. For more information, review [Azure Blob Storage managed connector trigger properties](/connectors/azureblobconnector/#when-a-blob-is-added-or-modified-(properties-only)-(v2)).
+
+1. Add any other actions that your workflow requires.
+
+1. When you're done, save your workflow. On the designer toolbar, select **Save**.
To add a Blob trigger to a logic app workflow in single-tenant Azure Logic Apps,
## Add a Blob action
-In Azure Logic Apps, an [action](../logic-apps/logic-apps-overview.md#logic-app-concepts) is a step in your workflow that follows a trigger or another action.
+A Consumption logic app workflow can use only the Azure Blob Storage managed connector. However, a Standard logic app workflow can use both the Azure Blob Storage managed connector and the Azure Blob built-in connector. Each version has multiple actions, but they're named differently. For example, both the managed and built-in connector versions have their own actions to get file metadata and get file content.
+
+- Managed connector actions: These actions run in a Consumption or Standard workflow.
+
+- Built-in connector actions: These actions run only in a Standard workflow.
+
+The following steps use the Azure portal, but with the appropriate Azure Logic Apps extension, you can also use the following tools to create and edit logic app workflows:
+
+- Consumption logic app workflows: [Visual Studio](../logic-apps/quickstart-create-logic-apps-with-visual-studio.md) or [Visual Studio Code](../logic-apps/quickstart-create-logic-apps-visual-studio-code.md)
+
+- Standard logic app workflows: [Visual Studio Code](../logic-apps/create-single-tenant-workflows-visual-studio-code.md)
### [Consumption](#tab/consumption)
-To add a Blob action to a logic app workflow in multi-tenant Azure Logic Apps, follow these steps:
+1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
-1. In the [Azure portal](https://portal.azure.com), open your workflow in the designer.
+1. If your workflow is blank, add the trigger that your workflow requires.
-1. If your workflow is blank, add any trigger that you want.
+ This example uses the [**Recurrence** trigger](connectors-native-recurrence.md).
- This example starts with the [**Recurrence** trigger](connectors-native-recurrence.md).
+1. Under the trigger or action where you want to add the Blob action, select **New step**.
+
+ Or, to add an action between steps, move your pointer over the connecting arrow. Select the plus sign (**+**) that appears, and then select **Add an action**.
+
+1. Under the **Choose an operation** search box, select **Standard**. In the search box, enter **Azure blob**.
+
+1. From the actions list, select the action that you want.
+
+ This example continues with the action named **Get blob content**.
+
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/consumption-add-action.png" alt-text="Screenshot showing Azure portal, Consumption workflow designer, and Azure Blob Storage action selected.":::
-1. Under the trigger or action where you want to add the Blob action, select **New step** or **Add an action**, if between steps. This example uses the built-in Azure Blob action.
+1. If prompted, provide the following information for your connection. When you're done, select **Create**.
-1. Under the designer search box, make sure that **All** is selected. In the search box, enter **Azure blob**. Select the Blob action that you want to use.
+ | Property | Required | Description |
+ |-|-|-|
+ | **Connection name** | Yes | A name for your connection |
+ | **Authentication Type** | Yes | The [authentication type](../storage/common/authorize-data-access.md) for your storage account. For more information, review [Authentication types for triggers and actions that support authentication - Secure access and data](../logic-apps/logic-apps-securing-a-logic-app.md#authentication-types-supported-triggers-actions). |
- This example uses the action named **Get blob content**.
+ For example, this connection uses access key authentication and provides the access key value for the storage account along with the following property values:
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/consumption-action-add.png" alt-text="Screenshot showing Consumption logic app in designer with available Blob actions.":::
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Azure Storage Account name** | Yes, <br>but only for access key authentication | <*storage-account-name*> | The name for the Azure storage account where your blob container exists. <br><br>**Note**: To find the storage account name, open your storage account resource in the Azure portal. In the resource menu, under **Security + networking**, select **Access keys**. Under **Storage account name**, copy and save the name. |
+ | **Azure Storage Account Access Key** | Yes, <br>but only for access key authentication | <*storage-account-access-key*> | The access key for your Azure storage account. <br><br>**Note**: To find the access key, open your storage account resource in the Azure portal. In the resource menu, under **Security + networking**, select **Access keys** > **key1** > **Show**. Copy and save the primary key value. |
-1. If you're prompted for connection details, [create a connection to your Azure Storage account](#connect-blob-storage-account).
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/consumption-action-create-connection.png" alt-text="Screenshot showing Consumption workflow, Azure Blob action, and example connection information.":::
-1. Provide the necessary information for the action.
+1. After the action information box appears, provide the necessary action information.
For example, in the **Get blob content** action, provide your storage account name. For the **Blob** property value, select the folder icon to browse for your storage container or folder. Or, enter the path manually.
To add a Blob action to a logic app workflow in multi-tenant Azure Logic Apps, f
The following example shows the action setup that gets the content from a blob in the root folder:
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/consumption-action-root-folder.png" alt-text="Screenshot showing Consumption logic app in designer with Blob action setup for root folder.":::
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/consumption-action-root-folder.png" alt-text="Screenshot showing Consumption workflow with Blob action setup for root folder.":::
The following example shows the action setup that gets the content from a blob in the subfolder:
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/consumption-action-sub-folder.png" alt-text="Screenshot showing Consumption logic app in designer with Blob action setup for subfolder.":::
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/consumption-action-sub-folder.png" alt-text="Screenshot showing Consumption workflow with Blob action setup for subfolder.":::
-1. Set up other action settings as needed.
+1. Add any other actions that your workflow requires.
+
+1. When you're done, save your workflow. On the designer toolbar, select **Save**.
### [Standard](#tab/standard)
-To add an Azure Blob action to a logic app workflow in single-tenant Azure Logic Apps, follow these steps:
+The steps to add and use an Azure Blob action differ based on whether you want to use the built-in connector or the managed, Azure-hosted connector.
-1. In the [Azure portal](https://portal.azure.com), open your workflow in the designer.
+- [**Built-in action**](#built-in-connector-action): Describes the steps to add a built-in action.
-1. If your workflow is blank, add any trigger that you want.
+- [**Managed action**](#managed-connector-action): Describes the steps to add a managed action.
- This example starts with the [**Recurrence** trigger](connectors-native-recurrence.md).
+<a name="built-in-connector-action"></a>
+
+#### Built-in connector action
-1. Under the trigger or action where you want to add the Blob action, select **Insert a new step** (**+**) > **Add an action**.
+1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
+
+1. If your workflow is blank, add the trigger that your workflow requires.
+
+ This example uses the [**Recurrence** trigger](connectors-native-recurrence.md).
+
+1. Under the trigger or action where you want to add the Blob action, select the plus sign (**+**), and then select **Add an action**.
+
+ Or, to add an action between steps, select the plus sign (**+**) on the connecting arrow, and then select **Add an action**.
+
+1. On the **Add an action** pane, under the search box, select **Built-in**. In the search box, enter **Azure blob**.
+
+1. From the actions list, select the action that you want.
-1. On the designer, make sure that **Add an operation** is selected. In the **Add an action** pane that opens, under the **Choose an operation** search box, select either **Built-in** to find the **Azure Blob** *built-in* actions, or select **Azure** to find the **Azure Blob Storage** *managed connector* actions.
+ This example continues with the action named **Read blob content**, which only reads the blob content. To later view the content, add a different action that creates a file with the blob content using another connector. For example, you can add a OneDrive action that creates a file based on the blob content.
-1. In the search box, enter **Azure blob**. Select the Azure Blob action that you want to use.
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-add-built-in-action.png" alt-text="Screenshot showing Azure portal, Standard workflow designer, and Azure Blob built-in action selected.":::
- This example uses the action named **Reads Blob Content from Azure Storage**, which only reads the blob content. To later view the content, add a different action that creates a file with the blob content using another connector. For example, you can add a OneDrive action that creates a file based on the blob content.
+1. If prompted, provide the following information for your connection to your storage account. When you're done, select **Create**.
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-action-add.png" alt-text="Screenshot showing the Azure portal and workflow designer with a Standard logic app workflow and the available Azure Blob Storage actions.":::
+ | Property | Required | Description |
+ |-|-|-|
+ | **Connection name** | Yes | A name for your connection |
+ | **Authentication type** | Yes | The [authentication type](../storage/common/authorize-data-access.md) for your storage account. For more information, review [Authentication types for triggers and actions that support authentication - Secure access and data](../logic-apps/logic-apps-securing-a-logic-app.md#authentication-types-supported-triggers-actions). |
+
+ For example, this connection uses connection string authentication and provides the connection string value for the storage account:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Storage account connection string** | Yes, <br>but only for connection string authentication | <*storage-account-connection-string*> | The connection string for your Azure storage account. <br><br>**Note**: To find the connection string, open your storage account resource in the Azure portal. In the resource menu, under **Security + networking**, select **Access keys** > **key1** > **Connection string** > **Show**. Copy and save the connection string for the primary key. |
-1. If you're prompted for connection details, [create a connection to your Azure Storage account](#connect-blob-storage-account).
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-built-in-trigger-create-connection.png" alt-text="Screenshot showing Standard workflow, Azure Blob built-in trigger, and example connection information.":::
-1. For the action, provide the necessary information, which includes the following values for the **Read Blob Content from Azure Storage** action:
+1. In the action information box, provide the necessary information.
+
+ For example, the **Read blob content** action requires the following property values:
| Property | Required | Description | |-|-|-|
- | **Container Name** | Yes | The name for the storage container that you want to use |
+ | **Container name** | Yes | The name for the storage container that you want to use |
| **Blob name** | Yes | The name or path for the blob that you want to use |
- ||||
The following example shows the information for a specific blob in the root folder:
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-action-root-folder.png" alt-text="Screenshot showing Standard logic app in designer with Blob action setup for root folder.":::
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-built-in-action-root-folder.png" alt-text="Screenshot showing Standard workflow with Blob built-in action setup for root folder.":::
The following example shows the information for a specific blob in a subfolder:
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-action-subfolder.png" alt-text="Screenshot showing Standard logic app in designer with Blob action setup for subfolder.":::
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-built-in-action-subfolder.png" alt-text="Screenshot showing Standard workflow with Blob built-in action setup for subfolder.":::
-1. Configure any other action settings as needed.
+1. Add any other actions that your workflow requires.
-1. On the designer toolbar, select **Save**.
+1. When you're done, save your workflow. On the designer toolbar, select **Save**.
-1. Test your logic app to make sure your selected container contains a blob.
+<a name="managed-connector-action"></a>
-
+#### Managed connector action
-<a name="connect-blob-storage-account"></a>
+1. In the [Azure portal](https://portal.azure.com), open your workflow in the designer.
-## Connect to Azure Storage account
+1. If your workflow is blank, add any trigger that you want.
+ This example starts with the [**Recurrence** trigger](connectors-native-recurrence.md).
-### [Consumption](#tab/consumption)
+1. Under the trigger or action where you want to add the Blob action, select **New step**.
-Before you can configure your [Azure Blob Storage trigger](#add-trigger) or [Azure Blob Storage action](#add-action), you need to connect to your Azure Storage account.
+ Or, to add an action between steps, move your pointer over the connecting arrow. Select the plus sign (**+**) that appears, and then select **Add an action**.
-Based on the [authentication type that your storage account requires](../storage/common/authorize-data-access.md), you have to provide a connection name and select the authentication type at a minimum.
+1. Under the **Choose an operation** search box, select **Azure**. In the search box, enter **Azure blob**.
-For example, if your storage account requires *access key* authorization, you have to provide the following information:
+1. From the actions list, select the Blob action that you want.
-| Property | Required | Value | Description |
-|-|-|-|-|
-| **Connection name** | Yes | <*connection-name*> | The name to use for your connection. |
-| **Authentication type** | Yes | - **Access Key** <br><br>- **Azure AD Integrated** <br><br>- **Logic Apps Managed Identity** | The authentication type to use for your connection. For more information, review [Authentication types for triggers and actions that support authentication - Secure access and data](../logic-apps/logic-apps-securing-a-logic-app.md#authentication-types-supported-triggers-actions). |
-| **Azure Storage Account name** | Yes, <br>but only for access key authentication | <*storage-account-name*> | The name for the Azure storage account where your blob container exists. <br><br><br><br>**Note**: To find the storage account name, open your storage account resource in the Azure portal. In the resource menu, under **Security + networking**, select **Access keys**. Under **Storage account name**, copy and save the name. |
-| **Azure Storage Account Access Key** | Yes, <br>but only for access key authentication | <*storage-account-access-key*> | The access key for your Azure storage account. <br><br><br><br>**Note**: To find the access key, open your storage account resource in the Azure portal. In the resource menu, under **Security + networking**, select **Access keys** > **Show keys**. Copy and save one of the key values. |
-|||||
+ This example continues with the action named **Get blob content**.
-The following example shows how a connection using access key authentication might appear:
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-add-managed-action.png" alt-text="Screenshot showing Azure portal, Standard workflow designer, and Azure Blob Storage managed action selected.":::
+1. If prompted, provide the following information for your connection to your storage account. When you're done, select **Create**.
-> [!NOTE]
-> After you create your connection, if you have a different existing Azure Blob storage connection
-> that you want to use instead, select **Change connection** in the trigger or action details editor.
+ | Property | Required | Description |
+ |-|-|-|
+ | **Connection name** | Yes | A name for your connection |
+ | **Authentication type** | Yes | The [authentication type](../storage/common/authorize-data-access.md) for your storage account. For more information, review [Authentication types for triggers and actions that support authentication - Secure access and data](../logic-apps/logic-apps-securing-a-logic-app.md#authentication-types-supported-triggers-actions). |
-If you have problems connecting to your storage account, review [how to access storage accounts behind firewalls](#access-storage-accounts-behind-firewalls).
+ For example, this connection uses access key authentication and provides the access key value for the storage account along with the following property values:
-### [Standard](#tab/standard)
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Azure Storage Account name** | Yes, <br>but only for access key authentication | <*storage-account-name*> | The name for the Azure storage account where your blob container exists. <br><br>**Note**: To find the storage account name, open your storage account resource in the Azure portal. In the resource menu, under **Security + networking**, select **Access keys**. Under **Storage account name**, copy and save the name. |
+ | **Azure Storage Account Access Key** | Yes, <br>but only for access key authentication | <*storage-account-access-key*> | The access key for your Azure storage account. <br><br>**Note**: To find the access key, open your storage account resource in the Azure portal. In the resource menu, under **Security + networking**, select **Access keys** > **key1** > **Show**. Copy and save the primary key value. |
-Before you can configure your [Azure Blob trigger](#add-trigger) or [Azure Blob action](#add-action), you need to connect to your Azure Storage account. A connection requires the following properties:
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-managed-action-create-connection.png" alt-text="Screenshot showing Standard workflow, Azure Blob Storage managed action, and example connection information.":::
-| Property | Required | Value | Description |
-|-|-|-|-|
-| **Connection name** | Yes | <*connection-name*> | The name to use for your connection. |
-| **Azure Blob Storage Connection String** | Yes | <*storage-account*> | Select your storage account from the list, or provide a string. <br><br><br><br>**Note**: To find the connection string, go to the storage account's page. In the navigation menu, under **Security + networking**, select **Access keys** > **Show keys**. Copy one of the available connection string values. |
-|||||
+1. After the action information box appears, provide the necessary information.
-To create an Azure Blob Storage connection from a logic app workflow in single-tenant Azure Logic Apps, follow these steps:
+ For example, in the **Get blob content** action, provide your storage account name. For the **Blob** property value, select the folder icon to browse for your storage container or folder. Or, enter the path manually.
-1. For **Connection name**, enter a name for your connection.
+ | Task | Blob path syntax |
+ |------|------------------|
+ | Get the content from a specific blob in the root folder. | **/<*container-name*>/<*blob-name*>** |
+ | Get the content from a specific blob in a subfolder. | **/<*container-name*>/<*subfolder*>/<*blob-name*>** |
-1. For **Azure Blob Storage Connection String**, enter the connection string for the storage account that you want to use.
+ The following example shows the action setup that gets the content from a blob in the root folder:
+
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-managed-action-root-folder.png" alt-text="Screenshot showing Standard workflow with Azure Blob Storage managed action setup for root folder.":::
-1. Select **Create** to finish creating your connection.
+ The following example shows the action setup that gets the content from a blob in the subfolder:
- :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-connection-create.png" alt-text="Screenshot that shows the workflow designer with a Standard logic app workflow and a prompt to add a new connection for the Azure Blob Storage step.":::
+ :::image type="content" source="./media/connectors-create-api-azureblobstorage/standard-managed-action-sub-folder.png" alt-text="Screenshot showing Standard workflow with Azure Blob Storage managed action setup for subfolder.":::
-> [!NOTE]
-> After you create your connection, if you have a different existing Azure Blob storage connection
-> that you want to use instead, select **Change connection** in the trigger or action details editor.
+1. Add any other actions that your workflow requires.
-If you have problems connecting to your storage account, review [how to access storage accounts behind firewalls](#access-storage-accounts-behind-firewalls).
+1. When you're done, save your workflow. On the designer toolbar, select **Save**.
+<a name="built-in-connector-operations"></a>
+
+## Azure Blob built-in connector operations
+
+The Azure Blob built-in connector is available only for Standard logic app workflows and provides the following operations:
+
+| Trigger | Description |
+|---------|-------------|
+| When a blob is added or updated | Start a logic app workflow when a blob is added or updated in your storage container. |
+
+| Action | Description |
+|--|-|
+| Check whether blob exists | Check whether the specified blob exists in the specified Azure storage container. |
+| Delete blob | Delete the specified blob from the specified Azure storage container. |
+| Get blob metadata using path | Get the metadata for the specified blob from the specified Azure storage container. |
+| Get container metadata using path | Get the metadata for the specified Azure storage container. |
+| Get blob SAS URI using path | Get the Shared Access Signature (SAS) URI for the specified blob in the specified Azure storage container. |
+| List all blobs using path | List all the blobs in the specified Azure storage container. |
+| List all containers using path or root path | List all the containers in the connected Azure storage account. |
+| Read blob content | Read the content from the specified blob in the specified Azure storage container. |
+| Upload blob to storage container | Upload the specified blob to the specified Azure storage container. |
+ ## Access storage accounts behind firewalls You can add network security to an Azure storage account by [restricting access with a firewall and firewall rules](../storage/common/storage-network-security.md). However, this setup creates a challenge for Azure and other Microsoft services that need access to the storage account. Local communication in the data center abstracts the internal IP addresses, so just permitting traffic through IP addresses might not be enough to successfully allow communication across the firewall. Based on which Azure Blob Storage connector you use, the following options are available:
You can add network security to an Azure storage account by [restricting access
- To access storage accounts behind firewalls using the ISE-versioned Azure Blob Storage connector that's only available in an ISE-based logic app, review [Access storage accounts through trusted virtual network](#access-storage-accounts-through-trusted-virtual-network). -- To access storage accounts behind firewalls using the *built-in* Azure Blob Storage connector that's only available in Standard logic apps, review [Access storage accounts through VNet integration](#access-storage-accounts-through-vnet-integration).
+- To access storage accounts behind firewalls using the *built-in* Azure Blob Storage connector that's only available in Standard logic apps, review [Access storage accounts through virtual network integration](#access-storage-accounts-through-virtual-network-integration).
### Access storage accounts in other regions
To add your outbound IP addresses to the storage account firewall, follow these
- Your logic app and storage account exist in different regions.
- You don't have to create a private endpoint. You can just permit traffic through the ISE outbound IPs on the storage account.
+ You don't have to create a private endpoint. You can just permit traffic through the ISE outbound IPs on the storage account.
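The following is a minimal Azure CLI sketch of that approach; the resource group, storage account name, and IP address are hypothetical placeholders, and you would repeat the rule for each outbound IP address of your logic app or ISE.

```azurecli-interactive
# Deny traffic by default while keeping the exception for trusted Azure services.
az storage account update \
    --resource-group myResourceGroup \
    --name mystorageaccount \
    --default-action Deny \
    --bypass AzureServices

# Permit one outbound IP address (repeat for each address).
az storage account network-rule add \
    --resource-group myResourceGroup \
    --account-name mystorageaccount \
    --ip-address 203.0.113.10
```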
-### Access storage accounts through VNet integration
+### Access storage accounts through virtual network integration
- Your logic app and storage account exist in the same region.
- You can put the storage account in an Azure virtual network by creating a private endpoint, and then add that virtual network to the trusted virtual networks list. To give your logic app access to the storage account, you have to [Set up outbound traffic using VNet integration](../logic-apps/secure-single-tenant-workflow-virtual-network-private-endpoint.md#set-up-outbound) to enable connecting to resources in a virtual network. You can then add the VNet to the storage account's trusted virtual networks list.
+ You can put the storage account in an Azure virtual network by creating a private endpoint, and then add that virtual network to the trusted virtual networks list. To give your logic app access to the storage account, you have to [Set up outbound traffic using virtual network integration](../logic-apps/secure-single-tenant-workflow-virtual-network-private-endpoint.md#set-up-outbound) to enable connecting to resources in a virtual network. You can then add the virtual network to the storage account's trusted virtual networks list, as shown in the sketch after this list.
- Your logic app and storage account exist in different regions.
- You don't have to create a private endpoint. You can just permit traffic through the ISE outbound IPs on the storage account.
+ You don't have to create a private endpoint. You can just permit traffic through the ISE outbound IPs on the storage account.
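For the same-region case described earlier, the following Azure CLI sketch shows the step of adding the integrated subnet to the storage account's trusted virtual networks list. The names are hypothetical placeholders, and the subnet is assumed to be the one used for the logic app's virtual network integration.

```azurecli-interactive
# Enable the Microsoft.Storage service endpoint on the integration subnet.
az network vnet subnet update \
    --resource-group myResourceGroup \
    --vnet-name myVnet \
    --name myIntegrationSubnet \
    --service-endpoints Microsoft.Storage

# Add that subnet to the storage account's virtual network rules.
az storage account network-rule add \
    --resource-group myResourceGroup \
    --account-name mystorageaccount \
    --vnet-name myVnet \
    --subnet myIntegrationSubnet
```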
### Access Blob Storage in same region with system-managed identities
To use managed identities in your logic app to access Blob Storage, follow these
1. [Enable support for the managed identity in your logic app](#enable-managed-identity-support). > [!NOTE]
-> Limitations for this solution:
>
-> - To authenticate your storage account connection, you have to set up a system-assigned managed identity.
+> This solution has the following limitations:
+>
+> To authenticate your storage account connection, you have to set up a system-assigned managed identity.
> A user-assigned managed identity won't work.
->
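As a minimal sketch of granting that system-assigned identity access to the storage account (the resource names are hypothetical, and the role shown is one reasonable choice; pick the role that matches the operations your workflow performs):

```azurecli-interactive
# Look up the storage account's resource ID to use as the role assignment scope.
storageId=$(az storage account show \
    --resource-group myResourceGroup \
    --name mystorageaccount \
    --query id --output tsv)

# Grant the logic app's system-assigned identity data access to the storage account.
# Replace <principal-id> with the identity's object (principal) ID.
az role assignment create \
    --assignee-object-id <principal-id> \
    --assignee-principal-type ServicePrincipal \
    --role "Storage Blob Data Contributor" \
    --scope $storageId
```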
#### Configure storage account access
Next, complete the following steps:
} ```
+## Application Insights errors
+
+- **404** and **409** errors
+
+ If your Standard workflow uses an Azure Blob built-in action that adds a blob to your storage container, you might get **404** and **409** errors in Application Insights for failed requests. These errors are expected because the connector checks whether the blob file exists before adding the blob. The errors result when the file doesn't exist. Despite these errors, the built-in action successfully adds the blob.
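If you want to confirm that the failed requests are only these expected existence checks, the following is a hedged sketch that queries Application Insights from the Azure CLI. It assumes the **application-insights** CLI extension is installed, and the resource names are placeholders.

```azurecli-interactive
# Requires the extension: az extension add --name application-insights
az monitor app-insights query \
    --app myAppInsightsResource \
    --resource-group myResourceGroup \
    --analytics-query "requests | where success == false and resultCode in ('404', '409') | summarize count() by name, resultCode"
```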
+ ## Next steps
-[Connectors overview for Azure Logic Apps](apis-list.md)
+- [Managed connectors in Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors)
+- [Built-in connectors in Azure Logic Apps](built-in.md)
connectors Connectors Create Api Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-cosmos-db.md
Previously updated : 05/02/2022 Last updated : 08/23/2022 tags: connectors # Process and create Azure Cosmos DB documents using Azure Logic Apps + From your workflow in Azure Logic Apps, you can connect to Azure Cosmos DB and work with documents by using the [Azure Cosmos DB connector](/connectors/documentdb/). This connector provides triggers and actions that your workflow can use for Azure Cosmos DB operations. For example, actions include creating or updating, reading, querying, and deleting documents. You can connect to Azure Cosmos DB from both **Logic App (Consumption)** and **Logic App (Standard)** resource types by using the [*managed connector*](managed.md) operations. For **Logic App (Standard)**, Azure Cosmos DB also provides [*built-in*](built-in.md) operations, which are currently in preview and offer different functionality, better performance, and higher throughput. For example, if you're working with the **Logic App (Standard)** resource type, you can use the built-in trigger to respond to changes in an Azure Cosmos DB container. You can combine Azure Cosmos DB operations with other actions and triggers in your logic app workflows to enable scenarios such as event sourcing and general data processing.
connectors Connectors Create Api Ftp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-ftp.md
tags: connectors
# Connect to an FTP server from workflows in Azure Logic Apps + This article shows how to access your File Transfer Protocol (FTP) server from a workflow in Azure Logic Apps with the FTP connector. You can then create automated workflows that run when triggered by events in your FTP server or in other systems and run actions to manage files on your FTP server. For example, your workflow can start with an FTP trigger that monitors and responds to events on your FTP server. The trigger makes the outputs available to subsequent actions in your workflow. Your workflow can run FTP actions that create, send, receive, and manage files through your FTP server account using the following specific tasks:
connectors Connectors Create Api Mq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-mq.md
tags: connectors
# Connect to an IBM MQ server from a workflow in Azure Logic Apps + The MQ connector helps you connect your logic app workflows to an IBM MQ server that's either on premises or in Azure. You can then have your workflows receive and send messages stored in your MQ server. This article provides a get started guide to using the MQ connector by showing how to connect to your MQ server and add an MQ action to your workflow. For example, you can start by browsing a single message in a queue and then try other actions. This connector includes a Microsoft MQ client that communicates with a remote MQ server across a TCP/IP network. You can connect to the following IBM WebSphere MQ versions:
connectors Connectors Native Recurrence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-recurrence.md
Last updated 09/02/2022
# Schedule and run recurring workflows with the Recurrence trigger in Azure Logic Apps + To start and run your workflow on a schedule, you can use the generic Recurrence trigger as the first step. You can set a date, time, and time zone for starting the workflow and a recurrence for repeating that workflow. The following list includes some patterns that this trigger supports along with more advanced recurrences and complex schedules: * Run at a specific date and time, then repeat every *n* number of seconds, minutes, hours, days, weeks, or months.
connectors Connectors Native Reqres https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-reqres.md
tags: connectors
# Handle incoming or inbound HTTPS requests sent to workflows in Azure Logic Apps + To run your logic app workflow after receiving an HTTPS request from another service, you can start your workflow with the Request built-in trigger. Your workflow can then respond to the HTTPS request by using Response built-in action. The following list describes some example tasks that your workflow can perform when you use the Request trigger and Response action:
container-instances Container Instances Encrypt Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-encrypt-data.md
The rest of the document covers the steps required to encrypt your ACI deploymen
[!INCLUDE [azure-cli-prepare-your-environment.md](../../includes/azure-cli-prepare-your-environment.md)]
-This article reviews two flows for encrypting data with a customer-managed key:
-* Encrypt data with a customer-managed key stored in a standard Azure Key Vault
-* Encrypt data with a customer-managed key stored in a network-protected Azure Key Vault with [Trusted Services](../key-vault/general/network-security.md) enabled.
-
-## Encrypt data with a customer-managed key stored in a standard Azure Key Vault
- ### Create Service Principal for ACI The first step is to ensure that your [Azure tenant](../active-directory/develop/quickstart-create-new-tenant.md) has a service principal assigned for granting permissions to the Azure Container Instances service.
az deployment group create --resource-group myResourceGroup --template-file depl
Within a few seconds, you should receive an initial response from Azure. Once the deployment completes, all data related to it persisted by the ACI service will be encrypted with the key you provided.
-## Encrypt data with a customer-managed key in a network protected Azure Key Vault with Trusted Services enabled
-
-### Create a Key Vault resource
-
-Create an Azure Key Vault using [Azure portal](../key-vault/general/quick-create-portal.md), [Azure CLI](../key-vault/general/quick-create-cli.md), or [Azure PowerShell](../key-vault/general/quick-create-powershell.md). To start, do not apply any network-limitations so we can add necessary keys to the vault. In subsequent steps, we will add network-limitations and enable trusted services.
-
-For the properties of your key vault, use the following guidelines:
-* Name: A unique name is required.
-* Subscription: Choose a subscription.
-* Under Resource Group, either choose an existing resource group, or create new and enter a resource group name.
-* In the Location pull-down menu, choose a location.
-* You can leave the other options to their defaults or pick based on additional requirements.
-
-> [!IMPORTANT]
-> When using customer-managed keys to encrypt an ACI deployment template, it is recommended that the following two properties be set on the key vault, Soft Delete and Do Not Purge. These properties are not enabled by default, but can be enabled using either PowerShell or Azure CLI on a new or existing key vault.
-
-### Generate a new key
-
-Once your key vault is created, navigate to the resource in Azure portal. On the left navigation menu of the resource blade, under Settings, click **Keys**. On the view for "Keys," click "Generate/Import" to generate a new key. Use any unique Name for this key, and any other preferences based on your requirements. Make sure to capture key name and version for subsequent steps.
-
-![Screenshot of key creation settings, PNG.](./media/container-instances-encrypt-data/generate-key.png)
-
-### Create a user-assigned managed identity for your container group
-Create an identity in your subscription using the [az identity create](/cli/azure/identity#az-identity-create) command. You can use the same resource group used to create the key vault, or use a different one.
-
-```azurecli-interactive
-az identity create \
- --resource-group myResourceGroup \
- --name myACIId
-```
-
-To use the identity in the following steps, use the [az identity show](/cli/azure/identity#az-identity-show) command to store the identity's service principal ID and resource ID in variables.
-
-```azurecli-interactive
-# Get service principal ID of the user-assigned identity
-spID=$(az identity show \
- --resource-group myResourceGroup \
- --name myACIId \
- --query principalId --output tsv)
-```
-
-### Set access policy
-
-Create a new access policy for allowing the user-assigned identity to access and unwrap your key for encryption purposes.
-
-```azurecli-interactive
-az keyvault set-policy \
- --name mykeyvault \
- --resource-group myResourceGroup \
- --object-id $spID \
- --key-permissions get unwrapKey
- ```
-
-### Modify Azure Key Vault's network permissions
-The following commands set up an Azure Firewall for your Azure Key Vault and allow Azure Trusted Services such as ACI access.
-
-```azurecli-interactive
-az keyvault update \
- --name mykeyvault \
- --resource-group myResourceGroup \
- --default-action Deny
- ```
-
-```azurecli-interactive
-az keyvault update \
- --name mykeyvault \
- --resource-group myResourceGroup \
- --bypass AzureServices
- ```
-
-### Modify your JSON deployment template
-
-> [!IMPORTANT]
-> Encrypting deployment data with a customer-managed key is available in the 2022-09-01 API version or newer. The 2022-09-01 API version is only available via ARM or REST. If you have any issues with this, please reach out to Azure Support.
-
-Once the key vault key and access policy are set up, add the following properties to your ACI deployment template. Learn more about deploying ACI resources with a template in the [Tutorial: Deploy a multi-container group using a Resource Manager template](./container-instances-multi-container-group.md).
-* Under `resources`, set `apiVersion` to `2022-09-01`.
-* Under the container group properties section of the deployment template, add an `encryptionProperties`, which contains the following values:
- * `vaultBaseUrl`: the DNS Name of your key vault. This can be found on the overview blade of the key vault resource in Portal
- * `keyName`: the name of the key generated earlier
- * `keyVersion`: the current version of the key. This can be found by clicking into the key itself (under "Keys" in the Settings section of your key vault resource)
- * `identity`: this is the resource URI of the Managed Identity instance created earlier
-* Under the container group properties, add a `sku` property with value `Standard`. The `sku` property is required in API version 2022-09-01.
-* Under resources, add the `identity` object required to use Managed Identity with ACI, which contains the following values:
- * `type`: the type of the identity being used (either user-assigned or system-assigned). This case will be set to "UserAssigned"
- * `userAssignedIdentities`: the resourceURI of the same user-assigned identity used above in the `encryptionProperties` object.
-
-The following template snippet shows these additional properties to encrypt deployment data:
-
-```json
-[...]
-"resources": [
- {
- "name": "[parameters('containerGroupName')]",
- "type": "Microsoft.ContainerInstance/containerGroups",
- "apiVersion": "2019-12-01",
- "location": "[resourceGroup().location]",
- "identity": {
- "type": "UserAssigned",
- "userAssignedIdentities": {
- "/subscriptions/XXXXXXXXXXXXXXXXXXXXXX/resourcegroups/XXXXXXXXXXXXXXXXXXXXXX/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myACIId": {}
- }
- },
- "properties": {
- "encryptionProperties": {
- "vaultBaseUrl": "https://example.vault.azure.net",
- "keyName": "acikey",
- "keyVersion": "xxxxxxxxxxxxxxxx",
- "identity": "/subscriptions/XXXXXXXXXXXXXXXXXXXXXX/resourcegroups/XXXXXXXXXXXXXXXXXXXXXX/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myACIId"
- },
- "sku": "Standard",
- "containers": {
- [...]
- }
- }
- }
-]
-```
-
-Following is a complete template, adapted from the template in [Tutorial: Deploy a multi-container group using a Resource Manager template](./container-instances-multi-container-group.md).
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "containerGroupName": {
- "type": "string",
- "defaultValue": "myContainerGroup",
- "metadata": {
- "description": "Container Group name."
- }
- }
- },
- "variables": {
- "container1name": "aci-tutorial-app",
- "container1image": "mcr.microsoft.com/azuredocs/aci-helloworld:latest",
- "container2name": "aci-tutorial-sidecar",
- "container2image": "mcr.microsoft.com/azuredocs/aci-tutorial-sidecar"
- },
- "resources": [
- {
- "name": "[parameters('containerGroupName')]",
- "type": "Microsoft.ContainerInstance/containerGroups",
- "apiVersion": "2022-09-01",
- "location": "[resourceGroup().location]",
- "identity": {
- "type": "UserAssigned",
- "userAssignedIdentities": {
- "/subscriptions/XXXXXXXXXXXXXXXXXXXXXX/resourcegroups/XXXXXXXXXXXXXXXXXXXXXX/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myACIId": {}
- }
- },
- "properties": {
- "encryptionProperties": {
- "vaultBaseUrl": "https://example.vault.azure.net",
- "keyName": "acikey",
- "keyVersion": "xxxxxxxxxxxxxxxx",
- "identity": "/subscriptions/XXXXXXXXXXXXXXXXXXXXXX/resourcegroups/XXXXXXXXXXXXXXXXXXXXXX/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myACIId"
- },
- "sku": "Standard",
- "containers": [
- {
- "name": "[variables('container1name')]",
- "properties": {
- "image": "[variables('container1image')]",
- "resources": {
- "requests": {
- "cpu": 1,
- "memoryInGb": 1.5
- }
- },
- "ports": [
- {
- "port": 80
- },
- {
- "port": 8080
- }
- ]
- }
- },
- {
- "name": "[variables('container2name')]",
- "properties": {
- "image": "[variables('container2image')]",
- "resources": {
- "requests": {
- "cpu": 1,
- "memoryInGb": 1.5
- }
- }
- }
- }
- ],
- "osType": "Linux",
- "ipAddress": {
- "type": "Public",
- "ports": [
- {
- "protocol": "tcp",
- "port": "80"
- },
- {
- "protocol": "tcp",
- "port": "8080"
- }
- ]
- }
- }
- }
- ],
- "outputs": {
- "containerIPv4Address": {
- "type": "string",
- "value": "[reference(resourceId('Microsoft.ContainerInstance/containerGroups/', parameters('containerGroupName'))).ipAddress.ip]"
- }
- }
-}
-```
-
-### Deploy your resources
-
-If you created and edited the template file on your desktop, you can upload it to your Cloud Shell directory by dragging the file into it.
-
-Create a resource group with the [az group create][az-group-create] command.
-
-```azurecli-interactive
-az group create --name myResourceGroup --location eastus
-```
-
-Deploy the template with the [az deployment group create][az-deployment-group-create] command.
-
-```azurecli-interactive
-az deployment group create --resource-group myResourceGroup --template-file deployment-template.json
-```
-
-Within a few seconds, you should receive an initial response from Azure. Once the deployment completes, all data related to it persisted by the ACI service will be encrypted with the key you provided.
<!-- LINKS - Internal --> [az-group-create]: /cli/azure/group#az_group_create [az-deployment-group-create]: /cli/azure/deployment/group/#az_deployment_group_create
container-instances Container Instances Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-managed-identity.md
Azure Container Instances supports both types of managed Azure identities: user-
To use a managed identity, the identity must be granted access to one or more Azure service resources (such as a web app, a key vault, or a storage account) in the subscription. Using a managed identity in a running container is similar to using an identity in an Azure VM. See the VM guidance for using a [token](../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md), [Azure PowerShell or Azure CLI](../active-directory/managed-identities-azure-resources/how-to-use-vm-sign-in.md), or the [Azure SDKs](../active-directory/managed-identities-azure-resources/how-to-use-vm-sdk.md).
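For example, a minimal sketch with hypothetical resource names that creates a container group with a system-assigned identity and then lets that identity read secrets from a key vault:

```azurecli-interactive
# Create a container group with a system-assigned managed identity.
az container create \
    --resource-group myResourceGroup \
    --name mycontainergroup \
    --image mcr.microsoft.com/azuredocs/aci-helloworld \
    --assign-identity

# Get the identity's principal ID and grant it access to key vault secrets.
spId=$(az container show \
    --resource-group myResourceGroup \
    --name mycontainergroup \
    --query identity.principalId --output tsv)

az keyvault set-policy \
    --name mykeyvault \
    --object-id $spId \
    --secret-permissions get list
```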
-### Limitations
-
-* Currently you can't use a managed identity in a container group deployed to a virtual network.
- [!INCLUDE [azure-cli-prepare-your-environment.md](../../includes/azure-cli-prepare-your-environment.md)] - This article requires version 2.0.49 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
container-instances Container Instances Region Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-region-availability.md
The following regions and maximum resources are available to container groups wi
| Australia East | 4 | 16 | 4 | 16 | 50 | N/A | Y | | Australia Southeast | 4 | 14 | N/A | N/A | 50 | N/A | N | | Brazil South | 4 | 16 | 2 | 8 | 50 | N/A | Y |
+| Australia Southeast | 4 | 14 | 16 | 50 | 50 | N/A | N |
+| Brazil South | 4 | 16 | 2 | 16 | 50 | N/A | Y |
| Canada Central | 4 | 16 | 4 | 16 | 50 | N/A | N | | Canada East | 4 | 16 | N/A | N/A | 50 | N/A | N |
+| Canada East | 4 | 16 | 16 | 50 | 50 | N/A | N |
| Central India | 4 | 16 | 4 | 4 | 50 | V100 | N | | Central US | 4 | 16 | 4 | 16 | 50 | N/A | Y | | East Asia | 4 | 16 | 4 | 16 | 50 | N/A | N |
The following regions and maximum resources are available to container groups wi
| East US 2 | 4 | 16 | 4 | 16 | 50 | N/A | Y | | France Central | 4 | 16 | 4 | 16 | 50 | N/A | Y| | Germany West Central | 4 | 16 | N/A | N/A | 50 | N/A | Y |
+| Germany West Central | 4 | 16 | 16 | 50 | 50 | N/A | Y |
| Japan East | 4 | 16 | 4 | 16 | 50 | N/A | Y | | Japan West | 4 | 16 | N/A | N/A | 50 | N/A | N | | Jio India West | 4 | 16 | N/A | N/A | 50 | N/A | N | | Korea Central | 4 | 16 | N/A | N/A | 50 | N/A | N | | North Central US | 2 | 3.5 | 4 | 16 | 50 | K80, P100, V100 | N |
+| Japan West | 4 | 16 | 16 | 50 | 50 | N/A | N |
+| Jio India West | 4 | 16 | 16 | 50 | 50 | N/A | N |
+| Korea Central | 4 | 16 | 16 | 50 | 50 | N/A | N |
+| North Central US | 4 | 16 | 4 | 16 | 50 | K80, P100, V100 | N |
| North Europe | 4 | 16 | 4 | 16 | 50 | K80 | Y | | Norway East | 4 | 16 | N/A | N/A | 50 | N/A | N | | Norway West | 4 | 16 | N/A | N/A | 50 | N/A | N | | South Africa North | 4 | 16 | N/A | N/A | 50 | N/A | N |
+| Norway East | 4 | 16 | 4 | 16 | 50 | N/A | N |
+| Norway West | 4 | 16 | 4 | 16 | 50 | N/A | N |
+| South Africa North | 4 | 16 | 4 | 16 | 50 | N/A | N |
| South Central US | 4 | 16 | 4 | 16 | 50 | V100 | Y | | Southeast Asia | 4 | 16 | 4 | 16 | 50 | P100, V100 | Y | | South India | 4 | 16 | N/A | N/A | 50 | K80 | N | | Sweden Central | 4 | 16 | N/A | N/A | 50 | N/A | N | | Sweden South | 4 | 16 | N/A | N/A | 50 | N/A | N | | Switzerland North | 4 | 16 | N/A | N/A | 50 | N/A | N |
+| South India | 4 | 16 | 4 | 16 | 50 | K80 | N |
+| Sweden Central | 4 | 16 | 4 | 16 | 50 | N/A | N |
+| Sweden South | 4 | 16 | 4 | 16 | 50 | N/A | N |
+| Switzerland North | 4 | 16 | 4 | 16 | 50 | N/A | N |
| Switzerland West | 4 | 16 | N/A | N/A | 50 | N/A | N | | UK South | 4 | 16 | 4 | 16 | 50 | N/A | Y| | UK West | 4 | 16 | N/A | N/A | 50 | N/A | N | | UAE North | 4 | 16 | N/A | N/A | 50 | N/A | N |
+| UK West | 4 | 16 | 4 | 16 | 50 | N/A | N |
+| UAE North | 4 | 16 | 4 | 16 | 50 | N/A | N |
| West Central US| 4 | 16 | 4 | 16 | 50 | N/A | N | | West Europe | 4 | 16 | 4 | 16 | 50 | K80, P100, V100 | Y | | West India | 4 | 16 | N/A | N/A | 50 | N/A | N | | West US | 4 | 16 | 4 | 16 | 50 | N/A | N | | West US 2 | 4 | 16 | 4 | 16 | 50 | K80, P100, V100 | Y | | West US 3 | 4 | 16 | N/A | N/A | 50 | N/A | N |
+| West US 3 | 4 | 16 | 4 | 16 | 50 | N/A | N |
The following maximum resources are available to a container group deployed with [GPU resources](container-instances-gpu.md) (preview).
container-instances Container Instances Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-troubleshooting.md
On initial creation, Windows containers may have no inbound or outbound connecti
### Cannot connect to underlying Docker API or run privileged containers
-Azure Container Instances does not expose direct access to the underlying infrastructure that hosts container groups. This includes access to the Docker API running on the container's host and running privileged containers. If you require Docker interaction, check the [REST reference documentation](/rest/api/container-instances/) to see what the ACI API supports. If there is something missing, submit a request on the [ACI feedback forums](https://aka.ms/aci/feedback).
+Azure Container Instances does not expose direct access to the underlying infrastructure that hosts container groups. This includes access to the container runtime and orchestration technology, as well as the ability to run privileged container operations. To see which operations ACI supports, check the [REST reference documentation](/rest/api/container-instances/). If something is missing, submit a request on the [ACI feedback forums](https://aka.ms/aci/feedback).
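For example, one way to list the operations that the resource provider exposes is the standard Azure Resource Manager operations endpoint; in this sketch, `<api-version>` is a placeholder for a current Microsoft.ContainerInstance API version.

```azurecli-interactive
az rest --method get \
    --url "https://management.azure.com/providers/Microsoft.ContainerInstance/operations?api-version=<api-version>" \
    --query "value[].name"
```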
### Container group IP address may not be accessible due to mismatched ports
container-instances Container Instances Virtual Network Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-virtual-network-concepts.md
Container groups deployed into an Azure virtual network enable scenarios like:
* Currently, only Linux containers are supported in a container group deployed to a virtual network. * To deploy container groups to a subnet, the subnet can't contain other resource types. Remove all existing resources from an existing subnet prior to deploying container groups to it, or create a new subnet. * To deploy container groups to a subnet, the subnet and the container group must be on the same Azure subscription.
-* You can't use a [managed identity](container-instances-managed-identity.md) in a container group deployed to a virtual network.
* You can't enable a [liveness probe](container-instances-liveness-probe.md) or [readiness probe](container-instances-readiness-probe.md) in a container group deployed to a virtual network. * Due to the additional networking resources involved, deployments to a virtual network are typically slower than deploying a standard container instance. * Outbound connection to port 25 is not supported at this time.
In the following diagram, several container groups have been deployed to a subne
<!-- LINKS - Internal --> [az-container-create]: /cli/azure/container#az_container_create
-[az-network-profile-list]: /cli/azure/network/profile#az_network_profile_list
+[az-network-profile-list]: /cli/azure/network/profile#az_network_profile_list
container-instances Using Azure Container Registry Mi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/using-azure-container-registry-mi.md
**Azure CLI**: The command-line examples in this article use the [Azure CLI](/cli/azure/) and are formatted for the Bash shell. You can [install the Azure CLI](/cli/azure/install-azure-cli) locally, or use the [Azure Cloud Shell][cloud-shell-bash]. ## Limitations
-* Container groups running in Azure Virtual Networks don't support managed identity authentication image pulls with ACR.
- * Windows containers don't support managed identity-authenticated image pulls with ACR. * The Azure container registry must have [Public Access set to either 'Select networks' or 'None'](../container-registry/container-registry-access-selected-networks.md). To set the Azure container registry's Public Access to 'All networks', visit ACI's article on [how to authenticate with ACR with service principal based authentication](container-instances-using-azure-container-registry.md).
data-factory Better Understand Different Integration Runtime Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/better-understand-different-integration-runtime-charges.md
In this article, we'll illustrate the pricing model using different integration
The integration runtime, which is serverless in Azure and self-hosted in hybrid scenarios, provides the compute resources used to execute the activities in a pipeline. Integration runtime charges are prorated by the minute and rounded up. > [!NOTE]
-> The prices used in these examples below are hypothetical and are not intended to imply actual pricing.
+> The prices used in the example below are hypothetical and are not intended to imply actual pricing.
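For example, using a hypothetical rate of $0.25 per DIU-hour, a copy activity that runs for 10 minutes on an Azure integration runtime with 4 DIUs would be billed roughly as follows (an execution of 9 minutes 20 seconds would be rounded up and billed as 10 minutes):

$$
4 \text{ DIU} \times \frac{10 \text{ min}}{60 \text{ min/hour}} \times \$0.25 \text{ per DIU-hour} \approx \$0.167
$$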
## Azure integration runtime
data-factory Connector Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-sql-database.md
Settings specific to Azure SQL Database are available in the **Source Options**
**Incremental date column**: When using the incremental extract feature, you must choose the date/time column that you wish to use as the watermark in your source table.
+**Enable native change data capture (Preview)**: Use this option to tell ADF to process only the delta data captured by [SQL change data capture technology](https://learn.microsoft.com/sql/relational-databases/track-changes/about-change-data-capture-sql-server) since the last time that the pipeline executed. With this option, delta data, including row inserts, updates, and deletions, is loaded automatically without requiring an incremental date column. You need to [enable change data capture](https://learn.microsoft.com/sql/relational-databases/track-changes/enable-and-disable-change-data-capture-sql-server) on Azure SQL Database before using this option in ADF. For more information about this option in ADF, see [native change data capture](#native-change-data-capture).
+ **Start reading from beginning**: Setting this option with incremental extract will instruct ADF to read all rows on first execution of a pipeline with incremental extract turned on. ### Sink transformation
When you copy data from/to Azure SQL Database with [Always Encrypted](/sql/relat
>[!NOTE] > Currently, Azure SQL Database [**Always Encrypted**](/sql/relational-databases/security/encryption/always-encrypted-database-engine?view=sql-server-ver15&preserve-view=true) is only supported for source transformation in mapping data flows.
+## Native change data capture
+
+Azure Data Factory supports native change data capture capabilities for SQL Server, Azure SQL Database, and Azure SQL Managed Instance. Changed data, including row inserts, updates, and deletions, in SQL stores is automatically detected and extracted by ADF mapping data flows. With the no-code experience in mapping data flows, you can easily set up a data replication scenario from SQL stores by adding a database as the destination store. You can also compose any data transformation logic in between to achieve an incremental ETL scenario from SQL stores.
+
+Keep the pipeline and activity names unchanged, so that ADF can record the checkpoint and automatically get the changed data from the last run. If you change the pipeline name or activity name, the checkpoint is reset, and the next run either starts from the beginning or captures changes only from that point onward. If you want to change the pipeline name or activity name but still keep the checkpoint, use your own checkpoint key in the data flow activity.
+
+This feature works the same way when you debug the pipeline. Be aware that the checkpoint is reset when you refresh your browser during the debug run. After you're satisfied with the results from the debug run, you can publish and trigger the pipeline. The first time you trigger the published pipeline, it automatically restarts from the beginning or gets changes from that point onward.
+
+In the monitoring section, you can always rerun a pipeline. When you do, the changed data is always captured from the previous checkpoint of the selected pipeline run.
+
+### Example 1:
+
+When you directly chain a source transformation that references a SQL CDC-enabled dataset with a sink transformation that references a database in a mapping data flow, the changes that happen on the SQL source are automatically applied to the target database, giving you a simple data replication scenario between databases. You can use the update method in the sink transformation to choose whether to allow inserts, updates, or deletes on the target database. The following is an example script in a mapping data flow.
+
+```json
+source(output(
+ id as integer,
+ name as string
+ ),
+ allowSchemaDrift: true,
+ validateSchema: false,
+ enableNativeCdc: true,
+ netChanges: true,
+ skipInitialLoad: false,
+ isolationLevel: 'READ_UNCOMMITTED',
+ format: 'table') ~> source1
+source1 sink(allowSchemaDrift: true,
+ validateSchema: false,
+ deletable:true,
+ insertable:true,
+ updateable:true,
+ upsertable:true,
+ keys:['id'],
+ format: 'table',
+ skipDuplicateMapInputs: true,
+ skipDuplicateMapOutputs: true,
+ errorHandlingOption: 'stopOnFirstError') ~> sink1
+```
+
+### Example 2:
+
+If you want an ETL scenario instead of data replication between databases via SQL CDC, you can use expressions in the mapping data flow, including isInsert(1), isUpdate(1), and isDelete(1), to differentiate rows by operation type. The following example script for a mapping data flow derives a column whose value is 1 for inserted rows, 2 for updated rows, and 3 for deleted rows, so that downstream transformations can process the delta data.
+
+```json
+source(output(
+ id as integer,
+ name as string
+ ),
+ allowSchemaDrift: true,
+ validateSchema: false,
+ enableNativeCdc: true,
+ netChanges: true,
+ skipInitialLoad: false,
+ isolationLevel: 'READ_UNCOMMITTED',
+ format: 'table') ~> source1
+source1 derive(operationType = iif(isInsert(1), 1, iif(isUpdate(1), 2, 3))) ~> derivedColumn1
+derivedColumn1 sink(allowSchemaDrift: true,
+ validateSchema: false,
+ skipDuplicateMapInputs: true,
+ skipDuplicateMapOutputs: true) ~> sink1
+```
+
+### Known limitation:
+
+* Only **net changes** from SQL CDC will be loaded by ADF via [cdc.fn_cdc_get_net_changes_](https://learn.microsoft.com/sql/relational-databases/system-functions/cdc-fn-cdc-get-net-changes-capture-instance-transact-sql?source=recommendations).
++ ## Next steps For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Azure Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-sql-managed-instance.md
The below table lists the properties supported by Azure SQL Managed Instance sou
| Query | If you select Query as input, specify a SQL query to fetch data from source, which overrides any table you specify in dataset. Using queries is a great way to reduce rows for testing or lookups.<br><br>**Order By** clause is not supported, but you can set a full SELECT FROM statement. You can also use user-defined table functions. **select * from udfGetData()** is a UDF in SQL that returns a table that you can use in data flow.<br>Query example: `Select * from MyTable where customerId > 1000 and customerId < 2000`| No | String | query | | Batch size | Specify a batch size to chunk large data into reads. | No | Integer | batchSize | | Isolation Level | Choose one of the following isolation levels:<br>- Read Committed<br>- Read Uncommitted (default)<br>- Repeatable Read<br>- Serializable<br>- None (ignore isolation level) | No | <small>READ_COMMITTED<br/>READ_UNCOMMITTED<br/>REPEATABLE_READ<br/>SERIALIZABLE<br/>NONE</small> |isolationLevel |
+| Enable incremental extract | Use this option to tell ADF to only process rows that have changed since the last time that the pipeline executed. | No | - |- |
+| Incremental date column | When using the incremental extract feature, you must choose the date/time column that you wish to use as the watermark in your source table. | No | - |- |
+| Enable native change data capture (Preview) | Use this option to tell ADF to process only the delta data captured by [SQL change data capture technology](https://learn.microsoft.com/sql/relational-databases/track-changes/about-change-data-capture-sql-server) since the last time that the pipeline executed. With this option, delta data, including row inserts, updates, and deletions, is loaded automatically without requiring an incremental date column. You need to [enable change data capture](https://learn.microsoft.com/sql/relational-databases/track-changes/enable-and-disable-change-data-capture-sql-server) on Azure SQL Managed Instance before using this option in ADF. For more information about this option in ADF, see [native change data capture](#native-change-data-capture). | No | - |- |
+| Start reading from beginning | Setting this option with incremental extract will instruct ADF to read all rows on first execution of a pipeline with incremental extract turned on. | No | - |- |
+ > [!TIP] > The [common table expression (CTE)](/sql/t-sql/queries/with-common-table-expression-transact-sql?view=sql-server-ver15&preserve-view=true) in SQL is not supported in the mapping data flow **Query** mode, because the prerequisite of using this mode is that queries can be used in the SQL query FROM clause but CTEs cannot do this.
When you copy data from/to SQL Managed Instance with [Always Encrypted](/sql/rel
>[!NOTE] >Currently, SQL Managed Instance [**Always Encrypted**](/sql/relational-databases/security/encryption/always-encrypted-database-engine?view=sql-server-ver15&preserve-view=true) is only supported for source transformation in mapping data flows. +
+## Native change data capture
+
+Azure Data Factory supports native change data capture capabilities for SQL Server, Azure SQL Database, and Azure SQL Managed Instance. Changed data, including row inserts, updates, and deletions, in SQL stores is automatically detected and extracted by ADF mapping data flows. With the no-code experience in mapping data flows, you can easily set up a data replication scenario from SQL stores by adding a database as the destination store. You can also compose any data transformation logic in between to achieve an incremental ETL scenario from SQL stores.
+
+Keep the pipeline and activity names unchanged, so that ADF can record the checkpoint and automatically get the changed data from the last run. If you change the pipeline name or activity name, the checkpoint is reset, and the next run either starts from the beginning or captures changes only from that point onward. If you want to change the pipeline name or activity name but still keep the checkpoint, use your own checkpoint key in the data flow activity.
+
+This feature works the same way when you debug the pipeline. Be aware that the checkpoint is reset when you refresh your browser during the debug run. After you're satisfied with the results from the debug run, you can publish and trigger the pipeline. The first time you trigger the published pipeline, it automatically restarts from the beginning or gets changes from that point onward.
+
+In the monitoring section, you can always rerun a pipeline. When you do, the changed data is always captured from the previous checkpoint of the selected pipeline run.
+
+### Example 1:
+
+When you directly chain a source transformation that references a SQL CDC-enabled dataset with a sink transformation that references a database in a mapping data flow, the changes that happen on the SQL source are automatically applied to the target database, giving you a simple data replication scenario between databases. You can use the update method in the sink transformation to choose whether to allow inserts, updates, or deletes on the target database. The following is an example script in a mapping data flow.
+
+```json
+source(output(
+ id as integer,
+ name as string
+ ),
+ allowSchemaDrift: true,
+ validateSchema: false,
+ enableNativeCdc: true,
+ netChanges: true,
+ skipInitialLoad: false,
+ isolationLevel: 'READ_UNCOMMITTED',
+ format: 'table') ~> source1
+source1 sink(allowSchemaDrift: true,
+ validateSchema: false,
+ deletable:true,
+ insertable:true,
+ updateable:true,
+ upsertable:true,
+ keys:['id'],
+ format: 'table',
+ skipDuplicateMapInputs: true,
+ skipDuplicateMapOutputs: true,
+ errorHandlingOption: 'stopOnFirstError') ~> sink1
+```
+
+### Example 2:
+
+If you want an ETL scenario instead of data replication between databases via SQL CDC, you can use expressions in the mapping data flow, including isInsert(1), isUpdate(1), and isDelete(1), to differentiate rows by operation type. The following example script for a mapping data flow derives a column whose value is 1 for inserted rows, 2 for updated rows, and 3 for deleted rows, so that downstream transformations can process the delta data.
+
+```json
+source(output(
+ id as integer,
+ name as string
+ ),
+ allowSchemaDrift: true,
+ validateSchema: false,
+ enableNativeCdc: true,
+ netChanges: true,
+ skipInitialLoad: false,
+ isolationLevel: 'READ_UNCOMMITTED',
+ format: 'table') ~> source1
+source1 derive(operationType = iif(isInsert(1), 1, iif(isUpdate(1), 2, 3))) ~> derivedColumn1
+derivedColumn1 sink(allowSchemaDrift: true,
+ validateSchema: false,
+ skipDuplicateMapInputs: true,
+ skipDuplicateMapOutputs: true) ~> sink1
+```
+
+### Known limitation:
+
+* Only **net changes** from SQL CDC will be loaded by ADF via [cdc.fn_cdc_get_net_changes_](https://learn.microsoft.com/sql/relational-databases/system-functions/cdc-fn-cdc-get-net-changes-capture-instance-transact-sql?source=recommendations).
++ ## Next steps For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sql-server.md
The below table lists the properties supported by SQL Server source. You can edi
| Query | If you select Query as input, specify a SQL query to fetch data from source, which overrides any table you specify in dataset. Using queries is a great way to reduce rows for testing or lookups.<br><br>**Order By** clause is not supported, but you can set a full SELECT FROM statement. You can also use user-defined table functions. **select * from udfGetData()** is a UDF in SQL that returns a table that you can use in data flow.<br>Query example: `Select * from MyTable where customerId > 1000 and customerId < 2000`| No | String | query | | Batch size | Specify a batch size to chunk large data into reads. | No | Integer | batchSize | | Isolation Level | Choose one of the following isolation levels:<br>- Read Committed<br>- Read Uncommitted (default)<br>- Repeatable Read<br>- Serializable<br>- None (ignore isolation level) | No | <small>READ_COMMITTED<br/>READ_UNCOMMITTED<br/>REPEATABLE_READ<br/>SERIALIZABLE<br/>NONE</small> |isolationLevel |
+| Enable incremental extract | Use this option to tell ADF to only process rows that have changed since the last time that the pipeline executed. | No | - |- |
+| Incremental date column | When using the incremental extract feature, you must choose the date/time column that you wish to use as the watermark in your source table. | No | - |- |
+| Enable native change data capture (Preview) | Use this option to tell ADF to process only the delta data captured by [SQL change data capture technology](https://learn.microsoft.com/sql/relational-databases/track-changes/about-change-data-capture-sql-server) since the last time that the pipeline executed. With this option, delta data, including row inserts, updates, and deletions, is loaded automatically without requiring an incremental date column. You need to [enable change data capture](https://learn.microsoft.com/sql/relational-databases/track-changes/enable-and-disable-change-data-capture-sql-server) on SQL Server before using this option in ADF. For more information about this option in ADF, see [native change data capture](#native-change-data-capture). | No | - |- |
+| Start reading from beginning | Setting this option with incremental extract will instruct ADF to read all rows on first execution of a pipeline with incremental extract turned on. | No | - |- |
+++ > [!TIP] > The [common table expression (CTE)](/sql/t-sql/queries/with-common-table-expression-transact-sql?view=sql-server-ver15&preserve-view=true) in SQL is not supported in the mapping data flow **Query** mode, because the prerequisite of using this mode is that queries can be used in the SQL query FROM clause but CTEs cannot do this.
When you copy data from/to SQL Server with [Always Encrypted](/sql/relational-da
>[!NOTE] >Currently, SQL Server [**Always Encrypted**](/sql/relational-databases/security/encryption/always-encrypted-database-engine?view=sql-server-ver15&preserve-view=true) is only supported for source transformation in mapping data flows. +
+## Native change data capture
+
+Azure Data Factory supports native change data capture capabilities for SQL Server, Azure SQL Database, and Azure SQL Managed Instance. Changed data, including row inserts, updates, and deletions, in SQL stores is automatically detected and extracted by ADF mapping data flows. With the no-code experience in mapping data flows, you can easily set up a data replication scenario from SQL stores by adding a database as the destination store. You can also compose any data transformation logic in between to achieve an incremental ETL scenario from SQL stores.
+
+Keep the pipeline and activity names unchanged, so that ADF can record the checkpoint and automatically get the changed data from the last run. If you change the pipeline name or activity name, the checkpoint is reset, and the next run either starts from the beginning or captures changes only from that point onward. If you want to change the pipeline name or activity name but still keep the checkpoint, use your own checkpoint key in the data flow activity.
+
+This feature works the same way when you debug the pipeline. Be aware that the checkpoint is reset when you refresh your browser during the debug run. After you're satisfied with the results from the debug run, you can publish and trigger the pipeline. The first time you trigger the published pipeline, it automatically restarts from the beginning or gets changes from that point onward.
+
+In the monitoring section, you can always rerun a pipeline. When you do, the changed data is always captured from the previous checkpoint of the selected pipeline run.
+
+### Example 1:
+
+When you directly chain a source transformation that references a SQL CDC-enabled dataset with a sink transformation that references a database in a mapping data flow, the changes that happen on the SQL source are automatically applied to the target database, giving you a simple data replication scenario between databases. You can use the update method in the sink transformation to choose whether to allow inserts, updates, or deletes on the target database. The following is an example script in a mapping data flow.
+
+```json
+source(output(
+ id as integer,
+ name as string
+ ),
+ allowSchemaDrift: true,
+ validateSchema: false,
+ enableNativeCdc: true,
+ netChanges: true,
+ skipInitialLoad: false,
+ isolationLevel: 'READ_UNCOMMITTED',
+ format: 'table') ~> source1
+source1 sink(allowSchemaDrift: true,
+ validateSchema: false,
+ deletable:true,
+ insertable:true,
+ updateable:true,
+ upsertable:true,
+ keys:['id'],
+ format: 'table',
+ skipDuplicateMapInputs: true,
+ skipDuplicateMapOutputs: true,
+ errorHandlingOption: 'stopOnFirstError') ~> sink1
+```
+
+### Example 2:
+
+If you want an ETL scenario instead of data replication between databases via SQL CDC, you can use expressions in the mapping data flow, including isInsert(1), isUpdate(1), and isDelete(1), to differentiate rows by operation type. The following example script for a mapping data flow derives a column whose value is 1 for inserted rows, 2 for updated rows, and 3 for deleted rows, so that downstream transformations can process the delta data.
+
+```json
+source(output(
+ id as integer,
+ name as string
+ ),
+ allowSchemaDrift: true,
+ validateSchema: false,
+ enableNativeCdc: true,
+ netChanges: true,
+ skipInitialLoad: false,
+ isolationLevel: 'READ_UNCOMMITTED',
+ format: 'table') ~> source1
+source1 derive(operationType = iif(isInsert(1), 1, iif(isUpdate(1), 2, 3))) ~> derivedColumn1
+derivedColumn1 sink(allowSchemaDrift: true,
+ validateSchema: false,
+ skipDuplicateMapInputs: true,
+ skipDuplicateMapOutputs: true) ~> sink1
+```
+
+### Known limitation:
+
+* Only **net changes** from SQL CDC will be loaded by ADF via [cdc.fn_cdc_get_net_changes_](https://learn.microsoft.com/sql/relational-databases/system-functions/cdc-fn-cdc-get-net-changes-capture-instance-transact-sql?source=recommendations).
+ ## Troubleshoot connection issues 1. Configure your SQL Server instance to accept remote connections. Start **SQL Server Management Studio**, right-click **server**, and select **Properties**. Select **Connections** from the list, and select the **Allow remote connections to this server** check box.
data-factory Control Flow Expression Language Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-expression-language-functions.md
Following on Example 3, this example finds the value in the
`xpath(xml(body('Http')), 'string(/*[name()=\"file\"]/*[name()=\"location\"])')` And returns this result: `"Paris"`
+
+> [!NOTE]
+> You can add comments to data flow expressions, but not to pipeline expressions.
## Next steps For a list of system variables you can use in expressions, see [System variables](control-flow-system-variables.md).
data-factory Data Flow Parse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-parse.md
Use the expression builder to set the source for your parsing. This can be as si
* Expression: ```(level as string, registration as long)``` * Source Nested JSON data: ```{"car" : {"model" : "camaro", "year" : 1989}, "color" : "white", "transmission" : "v8"}```
-* Expression: ```(car as (model as string, year as integer), color as string, transmission as string)```
+ * Expression: ```(car as (model as string, year as integer), color as string, transmission as string)```
* Source XML data: ```<Customers><Customer>122</Customer><CompanyName>Great Lakes Food Market</CompanyName></Customers>``` * Expression: ```(Customers as (Customer as integer, CompanyName as string))``` * Source XML with Attribute data: ```<cars><car model="camaro"><year>1989</year></car></cars>```
-* Expression: ```(cars as (car as ({@model} as string, year as integer)))```
+ * Expression: ```(cars as (car as ({@model} as string, year as integer)))```
+ * Note: If you run into errors extracting attributes (for example, @model) from a complex type, a workaround is to convert the complex type to a string, remove the @ symbol (for example, replace(toString(your_xml_string_parsed_column_name.cars.car),'@','')), and then use the Parse transformation to parse the resulting JSON.
data-factory Plan Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/plan-manage-costs.md
Use the [ADF pricing calculator](https://azure.microsoft.com/pricing/calculator/
One of the commonly asked questions for the pricing calculator is what values should be used as inputs. During the proof-of-concept phase, you can conduct trial runs using sample datasets to understand the consumption for various ADF meters. Then based on the consumption for the sample dataset, you can project out the consumption for the full dataset and operationalization schedule. > [!NOTE]
-> The prices used in these examples below are hypothetical and are not intended to imply actual pricing.
+> The prices used in the example below are hypothetical and are not intended to imply actual pricing.
For example, let's say you need to move 1 TB of data daily from AWS S3 to Azure Data Lake Storage Gen2. You can perform a POC of moving 100 GB of data to measure the data ingestion throughput and understand the corresponding billing consumption.
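As a rough scaling sketch, if the 100-GB trial run consumes $c$ units of a given meter (for example, DIU-hours), the full 1-TB daily load would consume approximately

$$
\frac{1000 \text{ GB}}{100 \text{ GB}} \times c = 10c
$$

units per day, which you can then multiply by your operationalization schedule (for example, about $300c$ per month for a daily run).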
Budgets can be created with filters for specific resources or services in Azure
## Export cost data
-You can also [export your cost data](../cost-management-billing/costs/tutorial-export-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to a storage account. This is helpful when you need or others to do additional data analysis for costs. For example, a finance teams can analyze the data using Excel or Power BI. You can export your costs on a daily, weekly, or monthly schedule and set a custom date range. Exporting cost data is the recommended way to retrieve cost datasets.
+You can also [export your cost data](../cost-management-billing/costs/tutorial-export-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to a storage account. This is helpful when you or others need to do additional analysis of costs. For example, finance teams can analyze the data by using Excel or Power BI. You can export your costs on a daily, weekly, or monthly schedule and set a custom date range. Exporting cost data is the recommended way to retrieve cost datasets.
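The following is a hedged Azure CLI sketch of creating such an export; it assumes the **costmanagement** CLI extension is installed, and the subscription ID, resource names, and container are placeholders.

```azurecli-interactive
# Requires the extension: az extension add --name costmanagement
az costmanagement export create \
    --name myCostExport \
    --type ActualCost \
    --timeframe MonthToDate \
    --scope "/subscriptions/<subscription-id>" \
    --storage-account-id "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount" \
    --storage-container costexports \
    --storage-directory datafactory-costs
```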
## Next steps
data-factory Pricing Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-concepts.md
Previously updated : 08/18/2022 Last updated : 09/22/2022 # Understanding Data Factory pricing through examples
Last updated 08/18/2022
This article explains and demonstrates the Azure Data Factory pricing model with detailed examples. You can also refer to the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/) for more specific scenarios and to estimate your future costs to use the service.
-> [!NOTE]
-> The prices used in these examples below are hypothetical and are not intended to imply actual pricing.
+For more details about pricing in Azure Data Factory, refer to the [Data Pipeline Pricing and FAQ](https://azure.microsoft.com/pricing/details/data-factory/data-pipeline/).
-## Copy data from AWS S3 to Azure Blob storage hourly
+## Pricing examples
+The prices used in the following examples are hypothetical and are not intended to imply actual pricing. Read/write and monitoring costs are not shown because they are typically negligible and don't significantly affect overall costs. Activity runs are also rounded to the nearest 1,000 in pricing calculator estimates.
-In this scenario, you want to copy data from AWS S3 to Azure Blob storage on an hourly schedule.
-
-To accomplish the scenario, you need to create a pipeline with the following items:
-
-1. A copy activity with an input dataset for the data to be copied from AWS S3.
-
-2. An output dataset for the data on Azure Storage.
-
-3. A schedule trigger to execute the pipeline every hour.
-
- :::image type="content" source="media/pricing-concepts/scenario1.png" alt-text="Diagram shows a pipeline with a schedule trigger. In the pipeline, copy activity flows to an input dataset, which flows to an A W S S3 linked service and copy activity also flows to an output dataset, which flows to an Azure Storage linked service.":::
-
-| **Operations** | **Types and Units** |
-| | |
-| Create Linked Service | 2 Read/Write entity |
-| Create Datasets | 4 Read/Write entities (2 for dataset creation, 2 for linked service references) |
-| Create Pipeline | 3 Read/Write entities (1 for pipeline creation, 2 for dataset references) |
-| Get Pipeline | 1 Read/Write entity |
-| Run Pipeline | 2 Activity runs (1 for trigger run, 1 for activity runs) |
-| Copy Data Assumption: execution time = 10 min | 10 \* 4 Azure Integration Runtime (default DIU setting = 4) For more information on data integration units and optimizing copy performance, see [this article](copy-activity-performance.md) |
-| Monitor Pipeline Assumption: Only 1 run occurred | 2 Monitoring run records retrieved (1 for pipeline run, 1 for activity run) |
-
-**Total Scenario pricing: $0.16811**
--- Data Factory Operations = **$0.0001**
- - Read/Write = 10\*0.00001 = $0.0001 [1 R/W = $0.50/50000 = 0.00001]
- - Monitoring = 2\*0.000005 = $0.00001 [1 Monitoring = $0.25/50000 = 0.000005]
-- Pipeline Orchestration &amp; Execution = **$0.168**
- - Activity Runs = 0.001\*2 = $0.002 [1 run = $1/1000 = 0.001]
- - Data Movement Activities = $0.166 (Prorated for 10 minutes of execution time. $0.25/hour on Azure Integration Runtime)
-
-## Copy data and transform with Azure Databricks hourly
-
-In this scenario, you want to copy data from AWS S3 to Azure Blob storage and transform the data with Azure Databricks on an hourly schedule.
-
-To accomplish the scenario, you need to create a pipeline with the following items:
-
-1. One copy activity with an input dataset for the data to be copied from AWS S3, and an output dataset for the data on Azure storage.
-2. One Azure Databricks activity for the data transformation.
-3. One schedule trigger to execute the pipeline every hour.
--
-| **Operations** | **Types and Units** |
-| | |
-| Create Linked Service | 3 Read/Write entity |
-| Create Datasets | 4 Read/Write entities (2 for dataset creation, 2 for linked service references) |
-| Create Pipeline | 3 Read/Write entities (1 for pipeline creation, 2 for dataset references) |
-| Get Pipeline | 1 Read/Write entity |
-| Run Pipeline | 3 Activity runs (1 for trigger run, 2 for activity runs) |
-| Copy Data Assumption: execution time = 10 min | 10 \* 4 Azure Integration Runtime (default DIU setting = 4) For more information on data integration units and optimizing copy performance, see [this article](copy-activity-performance.md) |
-| Monitor Pipeline Assumption: Only 1 run occurred | 3 Monitoring run records retrieved (1 for pipeline run, 2 for activity run) |
-| Execute Databricks activity Assumption: execution time = 10 min | 10 min External Pipeline Activity Execution |
-
-**Total Scenario pricing: $0.16916**
--- Data Factory Operations = **$0.00012**
- - Read/Write = 11\*0.00001 = $0.00011 [1 R/W = $0.50/50000 = 0.00001]
- - Monitoring = 3\*0.000005 = $0.00001 [1 Monitoring = $0.25/50000 = 0.000005]
-- Pipeline Orchestration &amp; Execution = **$0.16904**
- - Activity Runs = 0.001\*3 = $0.003 [1 run = $1/1000 = 0.001]
- - Data Movement Activities = $0.166 (Prorated for 10 minutes of execution time. $0.25/hour on Azure Integration Runtime)
- - External Pipeline Activity = $0.000041 (Prorated for 10 minutes of execution time. $0.00025/hour on Azure Integration Runtime)
-
-## Copy data and transform with dynamic parameters hourly
-
-In this scenario, you want to copy data from AWS S3 to Azure Blob storage and transform with Azure Databricks (with dynamic parameters in the script) on an hourly schedule.
-
-To accomplish the scenario, you need to create a pipeline with the following items:
-
-1. One copy activity with an input dataset for the data to be copied from AWS S3, an output dataset for the data on Azure storage.
-2. One Lookup activity for passing parameters dynamically to the transformation script.
-3. One Azure Databricks activity for the data transformation.
-4. One schedule trigger to execute the pipeline every hour.
--
-| **Operations** | **Types and Units** |
-| | |
-| Create Linked Service | 3 Read/Write entity |
-| Create Datasets | 4 Read/Write entities (2 for dataset creation, 2 for linked service references) |
-| Create Pipeline | 3 Read/Write entities (1 for pipeline creation, 2 for dataset references) |
-| Get Pipeline | 1 Read/Write entity |
-| Run Pipeline | 4 Activity runs (1 for trigger run, 3 for activity runs) |
-| Copy Data Assumption: execution time = 10 min | 10 \* 4 Azure Integration Runtime (default DIU setting = 4) For more information on data integration units and optimizing copy performance, see [this article](copy-activity-performance.md) |
-| Monitor Pipeline Assumption: Only 1 run occurred | 4 Monitoring run records retrieved (1 for pipeline run, 3 for activity run) |
-| Execute Lookup activity Assumption: execution time = 1 min | 1 min Pipeline Activity execution |
-| Execute Databricks activity Assumption: execution time = 10 min | 10 min External Pipeline Activity execution |
-
-**Total Scenario pricing: $0.17020**
--- Data Factory Operations = **$0.00013**
- - Read/Write = 11\*0.00001 = $0.00011 [1 R/W = $0.50/50000 = 0.00001]
- - Monitoring = 4\*0.000005 = $0.00002 [1 Monitoring = $0.25/50000 = 0.000005]
-- Pipeline Orchestration &amp; Execution = **$0.17007**
- - Activity Runs = 0.001\*4 = $0.004 [1 run = $1/1000 = 0.001]
- - Data Movement Activities = $0.166 (Prorated for 10 minutes of execution time. $0.25/hour on Azure Integration Runtime)
- - Pipeline Activity = $0.00003 (Prorated for 1 minute of execution time. $0.005/hour on Azure Integration Runtime)
- - External Pipeline Activity = $0.000041 (Prorated for 10 minutes of execution time. $0.00025/hour on Azure Integration Runtime)
-
-## Run SSIS packages on Azure-SSIS integration runtime
-
-Azure-SSIS integration runtime (IR) is a specialized cluster of Azure virtual machines (VMs) for SSIS package executions in Azure Data Factory (ADF). When you provision it, it will be dedicated to you, hence it will be charged just like any other dedicated Azure VMs as long as you keep it running, regardless whether you use it to execute SSIS packages or not. With respect to its running cost, you'll see the hourly estimate on its setup pane in ADF portal, for example:
--
-In the above example, if you keep your Azure-SSIS IR running for 2 hours, you'll be charged: **2 (hours) x US$1.158/hour = US$2.316**.
-
-To manage your Azure-SSIS IR running cost, you can scale down your VM size, scale in your cluster size, bring your own SQL Server license via Azure Hybrid Benefit (AHB) option that offers significant savings, see [Azure-SSIS IR pricing](https://azure.microsoft.com/pricing/details/data-factory/ssis/), and or start & stop your Azure-SSIS IR whenever convenient/on demand/just in time to process your SSIS workloads, see [Reconfigure Azure-SSIS IR](manage-azure-ssis-integration-runtime.md#to-reconfigure-an-azure-ssis-ir) and [Schedule Azure-SSIS IR](how-to-schedule-azure-ssis-integration-runtime.md).
-
-## Using mapping data flow debug for a normal workday
-
-As a Data Engineer, Sam is responsible for designing, building, and testing mapping data flows every day. Sam logs into the ADF UI in the morning and enables the Debug mode for Data Flows. The default TTL for Debug sessions is 60 minutes. Sam works throughout the day for 8 hours, so the Debug session never expires. Therefore, Sam's charges for the day will be:
-
-**8 (hours) x 8 (compute-optimized cores) x $0.193 = $12.35**
-
-At the same time, Chris, another Data Engineer, also logs into the ADF browser UI for data profiling and ETL design work. Chris does not work in ADF all day like Sam. Chris only needs to use the data flow debugger for 1 hour during the same period and same day as Sam above. These are the charges Chris incurs for debug usage:
-
-**1 (hour) x 8 (general purpose cores) x $0.274 = $2.19**
-
-## Transform data in blob store with mapping data flows
-
-In this scenario, you want to transform data in Blob Store visually in ADF mapping data flows on an hourly schedule.
-
-To accomplish the scenario, you need to create a pipeline with the following items:
-
-1. A Data Flow activity with the transformation logic.
-
-2. An input dataset for the data on Azure Storage.
-
-3. An output dataset for the data on Azure Storage.
-
-4. A schedule trigger to execute the pipeline every hour.
-
-| **Operations** | **Types and Units** |
-| | |
-| Create Linked Service | 2 Read/Write entity |
-| Create Datasets | 4 Read/Write entities (2 for dataset creation, 2 for linked service references) |
-| Create Pipeline | 3 Read/Write entities (1 for pipeline creation, 2 for dataset references) |
-| Get Pipeline | 1 Read/Write entity |
-| Run Pipeline | 2 Activity runs (1 for trigger run, 1 for activity runs) |
-| Data Flow Assumptions: execution time = 10 min + 10 min TTL | 10 \* 16 cores of General Compute with TTL of 10 |
-| Monitor Pipeline Assumption: Only 1 run occurred | 2 Monitoring run records retrieved (1 for pipeline run, 1 for activity run) |
-
-**Total Scenario pricing: $1.4631**
--- Data Factory Operations = **$0.0001**
- - Read/Write = 10\*0.00001 = $0.0001 [1 R/W = $0.50/50000 = 0.00001]
- - Monitoring = 2\*0.000005 = $0.00001 [1 Monitoring = $0.25/50000 = 0.000005]
-- Pipeline Orchestration &amp; Execution = **$1.463**
- - Activity Runs = 0.001\*2 = $0.002 [1 run = $1/1000 = 0.001]
- - Data Flow Activities = $1.461 prorated for 20 minutes (10 mins execution time + 10 mins TTL). $0.274/hour on Azure Integration Runtime with 16 cores general compute
-
-## Data integration in Azure Data Factory Managed VNET
-In this scenario, you want to delete original files on Azure Blob Storage and copy data from Azure SQL Database to Azure Blob Storage. You will do this execution twice on different pipelines. The execution time of these two pipelines is overlapping.
-To accomplish the scenario, you need to create two pipelines with the following items:
 - A pipeline activity – Delete Activity.
- - A copy activity with an input dataset for the data to be copied from Azure Blob storage.
- - An output dataset for the data on Azure SQL Database.
- - A schedule triggers to execute the pipeline.
--
-| **Operations** | **Types and Units** |
-| | |
-| Create Linked Service | 4 Read/Write entity |
-| Create Datasets | 8 Read/Write entities (4 for dataset creation, 4 for linked service references) |
-| Create Pipeline | 6 Read/Write entities (2 for pipeline creation, 4 for dataset references) |
-| Get Pipeline | 2 Read/Write entity |
-| Run Pipeline | 6 Activity runs (2 for trigger run, 4 for activity runs) |
-| Execute Delete Activity: each execution time = 5 min. The Delete Activity execution in first pipeline is from 10:00 AM UTC to 10:05 AM UTC. The Delete Activity execution in second pipeline is from 10:02 AM UTC to 10:07 AM UTC.|Total 7 min pipeline activity execution in Managed VNET. Pipeline activity supports up to 50 concurrency in Managed VNET. There is a 60 minutes Time To Live (TTL) for pipeline activity|
-| Copy Data Assumption: each execution time = 10 min. The Copy execution in first pipeline is from 10:06 AM UTC to 10:15 AM UTC. The Copy Activity execution in second pipeline is from 10:08 AM UTC to 10:17 AM UTC. | 10 * 4 Azure Integration Runtime (default DIU setting = 4) For more information on data integration units and optimizing copy performance, see [this article](copy-activity-performance.md) |
-| Monitor Pipeline Assumption: Only 2 runs occurred | 6 Monitoring run records retrieved (2 for pipeline run, 4 for activity run) |
--
-**Total Scenario pricing: $1.45523**
--- Data Factory Operations = $0.00023
- - Read/Write = 20*0.00001 = $0.0002 [1 R/W = $0.50/50000 = 0.00001]
- - Monitoring = 6*0.000005 = $0.00003 [1 Monitoring = $0.25/50000 = 0.000005]
-- Pipeline Orchestration & Execution = $1.455
- - Activity Runs = 0.001*6 = $0.006 [1 run = $1/1000 = 0.001]
- - Data Movement Activities = $0.333 (Prorated for 10 minutes of execution time. $0.25/hour on Azure Integration Runtime)
- - Pipeline Activity = $1.116 (Prorated for 7 minutes of execution time plus 60 minutes TTL. $1/hour on Azure Integration Runtime)
-
-> [!NOTE]
-> These prices are for example purposes only.
-
-**FAQ**
-
-Q: If I would like to run more than 50 pipeline activities, can these activities be executed simultaneously?
-
-A: Max 50 concurrent pipeline activities will be allowed. The 51st pipeline activity will be queued until a "free slot" is opened up.
-Same for external activity. Max 800 concurrent external activities will be allowed.
+- [Copy data from AWS S3 to Azure Blob storage hourly for 30 days](pricing-examples-s3-to-blob.md)
+- [Copy data and transform with Azure Databricks hourly for 30 days](pricing-examples-copy-transform-azure-databricks.md)
+- [Copy data and transform with dynamic parameters hourly for 30 days](pricing-examples-copy-transform-dynamic-parameters.md)
+- [Run SSIS packages on Azure-SSIS integration runtime](pricing-examples-ssis-on-azure-ssis-integration-runtime.md)
+- [Using mapping data flow debug for a normal workday](pricing-examples-mapping-data-flow-debug-workday.md)
+- [Transform data in blob store with mapping data flows](pricing-examples-transform-mapping-data-flows.md)
+- [Data integration in Azure Data Factory Managed VNET](pricing-examples-data-integration-managed-vnet.md)
+- [Get delta data from SAP ECC via SAP CDC in mapping data flows](pricing-examples-get-delta-data-from-sap-ecc.md)
## Next steps- Now that you understand the pricing for Azure Data Factory, you can get started! - [Create a data factory by using the Azure Data Factory UI](quickstart-create-data-factory-portal.md)- - [Introduction to Azure Data Factory](introduction.md)- - [Visual authoring in Azure Data Factory](author-visually.md)
data-factory Pricing Examples Copy Transform Azure Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-examples-copy-transform-azure-databricks.md
+
+ Title: "Pricing example: Copy data and transform with Azure Databricks hourly"
+description: This article shows how to estimate pricing for Azure Data Factory to copy data and transform it with Azure Databricks every hour for 30 days.
++++++ Last updated : 09/22/2022++
+# Pricing example: Copy data and transform with Azure Databricks hourly
++
+In this scenario, you want to copy data from AWS S3 to Azure Blob storage and transform the data with Azure Databricks on an hourly schedule for 30 days.
+
+The prices used in this example are hypothetical and are not intended to imply actual pricing. Read/write and monitoring costs are not shown since they are typically negligible and will not impact overall costs significantly. Activity runs are also rounded to the nearest 1000 in pricing calculator estimates.
+
+Refer to the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/) for more specific scenarios and to estimate your future costs to use the service.
+
+## Configuration
+
+To accomplish the scenario, you need to create a pipeline with the following items:
+
+- One copy activity with an input dataset for the data to be copied from AWS S3, and an output dataset for the data on Azure storage.
+- One Azure Databricks activity for the data transformation.
+- One schedule trigger to execute the pipeline every hour. When you want to run a pipeline, you can either [trigger it immediately or schedule it](concepts-pipeline-execution-triggers.md). In addition to the pipeline itself, each trigger instance counts as a single Activity run.
++
+## Costs estimation
+
+| **Operations** | **Types and Units** |
+| | |
+| Run Pipeline | 3 Activity runs per execution (1 for trigger run, 2 for activity runs) |
+| Copy Data Assumption: execution time per run = 10 min | 10 \* 4 Azure Integration Runtime (default DIU setting = 4) For more information on data integration units and optimizing copy performance, see [this article](copy-activity-performance.md) |
+| Execute Databricks activity Assumption: execution time per run = 10 min | 10 min External Pipeline Activity Execution |
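As a rough cross-check, the following sketch applies the hypothetical per-meter rates quoted in the earlier versions of these examples (about $1 per 1,000 activity runs, $0.25 per DIU-hour for copy on the Azure integration runtime, and $0.00025 per external pipeline activity hour) to the assumptions above; it lands on the same figure as the calculator estimate shown below.

```python
executions = 24 * 30                          # hourly trigger for 30 days = 720 runs

# Billable quantities implied by the assumptions above
activity_runs  = 3 * executions               # 2,160 activity runs (the calculator rounds to 2,000)
diu_hours      = executions * 10 / 60 * 4     # copy: 10 min per run at 4 DIUs = 480 DIU-hours
external_hours = executions * 10 / 60         # Databricks: 10 min per run = 120 hours

# Hypothetical rates: $1 per 1,000 activity runs, $0.25 per DIU-hour,
# $0.00025 per external pipeline activity hour
cost = round(activity_runs, -3) / 1000 * 1.00 + diu_hours * 0.25 + external_hours * 0.00025
print(f"About ${cost:.2f} for 30 days")       # about $122.03
```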
+
+## Pricing calculator example
+
+**Total scenario pricing for 30 days: $122.03**
++
+## Next steps
+
+- [Pricing example: Copy data from AWS S3 to Azure Blob storage hourly for 30 days](pricing-examples-s3-to-blob.md)
+- [Pricing example: Copy data and transform with dynamic parameters hourly for 30 days](pricing-examples-copy-transform-dynamic-parameters.md)
+- [Pricing example: Run SSIS packages on Azure-SSIS integration runtime](pricing-examples-ssis-on-azure-ssis-integration-runtime.md)
+- [Pricing example: Using mapping data flow debug for a normal workday](pricing-examples-mapping-data-flow-debug-workday.md)
+- [Pricing example: Transform data in blob store with mapping data flows](pricing-examples-transform-mapping-data-flows.md)
+- [Pricing example: Data integration in Azure Data Factory Managed VNET](pricing-examples-data-integration-managed-vnet.md)
+- [Pricing example: Get delta data from SAP ECC via SAP CDC in mapping data flows](pricing-examples-get-delta-data-from-sap-ecc.md)
data-factory Pricing Examples Copy Transform Dynamic Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-examples-copy-transform-dynamic-parameters.md
+
+ Title: "Pricing example: Copy data and transform with dynamic parameters hourly"
+description: This article shows how to estimate pricing for Azure Data Factory to copy data and transform it with dynamic parameters every hour for 30 days.
++++++ Last updated : 09/22/2022++
+# Pricing example: Copy data and transform with dynamic parameters hourly
++
+In this scenario, you want to copy data from AWS S3 to Azure Blob storage and transform with Azure Databricks (with dynamic parameters in the script) on an hourly schedule.
+
+The prices used in this example are hypothetical and are not intended to imply actual pricing. Read/write and monitoring costs are not shown since they are typically negligible and will not impact overall costs significantly. Activity runs are also rounded to the nearest 1000 in pricing calculator estimates.
+
+Refer to the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/) for more specific scenarios and to estimate your future costs to use the service.
+
+## Configuration
+
+To accomplish the scenario, you need to create a pipeline with the following items:
+
+- One copy activity with an input dataset for the data to be copied from AWS S3, an output dataset for the data on Azure storage.
+- One Lookup activity for passing parameters dynamically to the transformation script.
+- One Azure Databricks activity for the data transformation.
+- One schedule trigger to execute the pipeline every hour. When you want to run a pipeline, you can either [trigger it immediately or schedule it](concepts-pipeline-execution-triggers.md). In addition to the pipeline itself, each trigger instance counts as a single Activity run.
++
+## Costs estimation
+
+| **Operations** | **Types and Units** |
+| | |
+| Run Pipeline | 4 Activity runs per execution (1 for trigger run, 3 for activity runs) |
+| Copy Data Assumption: execution time per run = 10 min | 10 \* 4 Azure Integration Runtime (default DIU setting = 4) For more information on data integration units and optimizing copy performance, see [this article](copy-activity-performance.md) |
+| Execute Lookup activity Assumption: execution time per run = 1 min | 1 min Pipeline Activity execution |
+| Execute Databricks activity Assumption: execution time per run = 10 min | 10 min External Pipeline Activity execution |
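A minimal sketch of how those per-run assumptions scale to the billable quantities behind the 30-day estimate (the rates and the final figure come from the pricing calculator):

```python
executions = 24 * 30                            # hourly trigger for 30 days = 720 runs

activity_runs    = 4 * executions               # 2,880 activity runs
copy_diu_hours   = executions * 10 / 60 * 4     # 480 DIU-hours for the copy activity
lookup_hours     = executions * 1 / 60          # 12 pipeline activity hours for the Lookup
databricks_hours = executions * 10 / 60         # 120 external pipeline activity hours

print(activity_runs, copy_diu_hours, lookup_hours, databricks_hours)
```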
+
+## Pricing calculator example
+
+**Total scenario pricing for 30 days: $122.09**
++
+## Next steps
+
+- [Pricing example: Copy data from AWS S3 to Azure Blob storage hourly for 30 days](pricing-examples-s3-to-blob.md)
+- [Pricing example: Copy data and transform with Azure Databricks hourly for 30 days](pricing-examples-copy-transform-azure-databricks.md)
+- [Pricing example: Run SSIS packages on Azure-SSIS integration runtime](pricing-examples-ssis-on-azure-ssis-integration-runtime.md)
+- [Pricing example: Using mapping data flow debug for a normal workday](pricing-examples-mapping-data-flow-debug-workday.md)
+- [Pricing example: Transform data in blob store with mapping data flows](pricing-examples-transform-mapping-data-flows.md)
+- [Pricing example: Data integration in Azure Data Factory Managed VNET](pricing-examples-data-integration-managed-vnet.md)
+- [Pricing example: Get delta data from SAP ECC via SAP CDC in mapping data flows](pricing-examples-get-delta-data-from-sap-ecc.md)
data-factory Pricing Examples Data Integration Managed Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-examples-data-integration-managed-vnet.md
+
+ Title: "Pricing example: Data integration in Azure Data Factory Managed VNET"
+description: This article shows how to estimate pricing for Azure Data Factory to perform data integration using Managed VNET.
++++++ Last updated : 09/22/2022++
+# Pricing example: Data integration in Azure Data Factory Managed VNET
++
+In this scenario, you want to delete original files on Azure Blob Storage and copy data from Azure SQL Database to Azure Blob Storage on an hourly schedule, and we'll calculate the price for 30 days. Each run executes the scenario twice, in two different pipelines whose execution times overlap.
+
+The prices used in this example are hypothetical and aren't intended to imply actual pricing. Read/write and monitoring costs aren't shown since they're typically negligible and won't impact overall costs significantly. Activity runs are also rounded to the nearest 1000 in pricing calculator estimates.
+
+Refer to the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/) for more specific scenarios and to estimate your future costs to use the service.
+
+## Configuration
+
+To accomplish the scenario, you need to create two pipelines with the following items:
+ - A pipeline activity – Delete Activity.
+ - A copy activity with an input dataset for the data to be copied from Azure Blob storage.
+ - An output dataset for the data on Azure SQL Database.
+ - A schedule trigger to execute the pipeline. When you want to run a pipeline, you can either [trigger it immediately or schedule it](concepts-pipeline-execution-triggers.md). In addition to the pipeline itself, each trigger instance counts as a single Activity run.
+
+## Costs estimation
+
+| **Operations** | **Types and Units** |
+| | |
+| Run Pipeline | 6 Activity runs per execution (2 for trigger run, 4 for activity runs) |
+| Execute Delete Activity: each execution time = 5 min. The Delete Activity execution in the first pipeline runs from 10:00 AM UTC to 10:05 AM UTC, and the Delete Activity execution in the second pipeline runs from 10:02 AM UTC to 10:07 AM UTC.|Total of 7 min of pipeline activity execution in Managed VNET. Pipeline activity supports up to 50 concurrent executions in Managed VNET. There's a 60-minute Time To Live (TTL) for pipeline activity|
+| Copy Data Assumption: each execution time = 10 min. The Copy Activity execution in the first pipeline runs from 10:06 AM UTC to 10:15 AM UTC, and the Copy Activity execution in the second pipeline runs from 10:08 AM UTC to 10:17 AM UTC. | 10 * 4 Azure Integration Runtime (default DIU setting = 4) For more information on data integration units and optimizing copy performance, see [this article](copy-activity-performance.md) |
+
+## Pricing calculator example
+
+**Total scenario pricing for 30 days: $129.02**
++
+## Next steps
+
+- [Pricing example: Copy data from AWS S3 to Azure Blob storage hourly for 30 days](pricing-examples-s3-to-blob.md)
+- [Pricing example: Copy data and transform with Azure Databricks hourly for 30 days](pricing-examples-copy-transform-azure-databricks.md)
+- [Pricing example: Copy data and transform with dynamic parameters hourly for 30 days](pricing-examples-copy-transform-dynamic-parameters.md)
+- [Pricing example: Run SSIS packages on Azure-SSIS integration runtime](pricing-examples-ssis-on-azure-ssis-integration-runtime.md)
+- [Pricing example: Using mapping data flow debug for a normal workday](pricing-examples-mapping-data-flow-debug-workday.md)
+- [Pricing example: Transform data in blob store with mapping data flows](pricing-examples-transform-mapping-data-flows.md)
+- [Pricing example: Get delta data from SAP ECC via SAP CDC in mapping data flows](pricing-examples-get-delta-data-from-sap-ecc.md)
data-factory Pricing Examples Get Delta Data From Sap Ecc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-examples-get-delta-data-from-sap-ecc.md
+
+ Title: "Pricing example: Get delta data from SAP ECC via SAP CDC in mapping data flows"
+description: This article shows how to price getting delta data from SAP ECC via SAP CDC in mapping data flows.
++++++ Last updated : 09/22/2022++
+# Pricing example: Get delta data from SAP ECC via SAP CDC in mapping data flows
++
+In this scenario, you want to get delta changes from one table in SAP ECC via the SAP CDC connector, do a few necessary transforms in flight, and then write the data to Azure Data Lake Gen2 storage in an ADF mapping data flow daily.
+
+The prices used in this example are hypothetical and aren't intended to imply actual pricing. Read/write and monitoring costs aren't shown since they're typically negligible and won't impact overall costs significantly. Activity runs are also rounded to the nearest 1000 in pricing calculator estimates.
+
+Refer to the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/) for more specific scenarios and to estimate your future costs to use the service.
+
+## Configuration
+
+To accomplish the scenario, you need to create a pipeline with the following items:
+
+- One Mapping Data Flow activity with an input dataset for the data to be loaded from SAP ECC, the transformation logic, and an output dataset for the data on Azure Data Lake Gen2 storage.
+- A Self-Hosted Integration Runtime referenced by the SAP CDC connector.
+- A schedule trigger to execute the pipeline. When you want to run a pipeline, you can either [trigger it immediately or schedule it](concepts-pipeline-execution-triggers.md). In addition to the pipeline itself, each trigger instance counts as a single Activity run.
+
+## Costs estimation
+
+To load data from SAP ECC via the SAP CDC connector in a mapping data flow, you need to install your Self-Hosted Integration Runtime on an on-premises machine or a VM that can connect directly to your SAP ECC system. You're then charged both for the Self-Hosted Integration Runtime at $0.10/hour and for the Mapping Data Flow at its vCore-hour price unit.
+
+Assuming each run takes 15 minutes to complete the job, the cost estimates are as follows.
+
+| **Operations** | **Types and Units** |
+| | |
+| Run Pipeline | 2 Activity runs per execution (1 for trigger run, 1 for activity run) |
+| Data Flow: execution time per run = 15 mins | 15 min * 8 cores of General Compute |
+| Self-Hosted Integration Runtime: execution time per run = 15 mins | 15 min * $0.10/hour (Data Movement Activity on Self-Hosted Integration Runtime Price) |
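Under these assumptions, the 30-day billable quantities work out as in the sketch below. The $0.10/hour self-hosted IR rate comes from the paragraph above; the $0.274 general purpose vCore-hour rate is the hypothetical one used elsewhere in these examples, so the result only approximates the calculator figure.

```python
runs = 30                                     # one run per day for 30 days
minutes_per_run = 15

data_flow_vcore_hours = runs * minutes_per_run / 60 * 8   # 60 vCore-hours on 8 general purpose cores
shir_hours            = runs * minutes_per_run / 60       # 7.5 hours on the self-hosted IR

# Hypothetical rates: $0.274 per general purpose vCore-hour, $0.10 per self-hosted IR hour
approx_cost = data_flow_vcore_hours * 0.274 + shir_hours * 0.10
print(f"About ${approx_cost:.2f} for 30 days")             # about $17.19
```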
+
+## Pricing calculator example
+
+**Total scenario pricing for 30 days: $17.21**
++
+## Next steps
+
+- [Pricing example: Copy data from AWS S3 to Azure Blob storage hourly for 30 days](pricing-examples-s3-to-blob.md)
+- [Pricing example: Copy data and transform with Azure Databricks hourly for 30 days](pricing-examples-copy-transform-azure-databricks.md)
+- [Pricing example: Copy data and transform with dynamic parameters hourly for 30 days](pricing-examples-copy-transform-dynamic-parameters.md)
+- [Pricing example: Run SSIS packages on Azure-SSIS integration runtime](pricing-examples-ssis-on-azure-ssis-integration-runtime.md)
+- [Pricing example: Using mapping data flow debug for a normal workday](pricing-examples-mapping-data-flow-debug-workday.md)
+- [Pricing example: Transform data in blob store with mapping data flows](pricing-examples-transform-mapping-data-flows.md)
+- [Pricing example: Data integration in Azure Data Factory Managed VNET](pricing-examples-data-integration-managed-vnet.md)
data-factory Pricing Examples Mapping Data Flow Debug Workday https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-examples-mapping-data-flow-debug-workday.md
+
+ Title: "Pricing example: Using mapping data flow debug for a normal workday"
+description: This article shows how to estimate pricing for Azure Data Factory to use mapping data flow debug for a normal workday.
++++++ Last updated : 09/22/2022++
+# Pricing example: Using mapping data flow debug for a normal workday
++
+This example shows mapping data flow debug costs for a typical workday for a data engineer.
+
+The prices used in this example are hypothetical and are not intended to imply actual pricing. Read/write and monitoring costs are not shown since they are typically negligible and will not impact overall costs significantly. Activity runs are also rounded to the nearest 1000 in pricing calculator estimates.
+
+Refer to the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/) for more specific scenarios and to estimate your future costs to use the service.
+
+## Azure Data Factory engineer
+
+A data factory engineer is responsible for designing, building, and testing mapping data flows every day. The engineer logs into the ADF UI in the morning and enables the Debug mode for Data Flows. The default TTL for Debug sessions is 60 minutes. The engineer works throughout the day for 8 hours, so the Debug session never expires. Therefore, the engineer's charges for the day will be:
+
+**8 (hours) x 8 (compute-optimized cores) x $0.193 = $12.35**
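The same calculation, parameterized as a small helper (a sketch; the $0.193 compute-optimized vCore-hour rate is the hypothetical one used in this example):

```python
def debug_session_cost(hours: float, cores: int, rate_per_vcore_hour: float) -> float:
    """Data flow debug cost = session hours * cluster cores * per-vCore-hour rate."""
    return hours * cores * rate_per_vcore_hour

# The full 8-hour workday on an 8-core compute-optimized cluster at a hypothetical rate
print(debug_session_cost(8, 8, 0.193))   # about 12.35
```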
+
+## Next steps
+
+- [Pricing example: Copy data from AWS S3 to Azure Blob storage hourly for 30 days](pricing-examples-s3-to-blob.md)
+- [Pricing example: Copy data and transform with Azure Databricks hourly for 30 days](pricing-examples-copy-transform-azure-databricks.md)
+- [Pricing example: Copy data and transform with dynamic parameters hourly for 30 days](pricing-examples-copy-transform-dynamic-parameters.md)
+- [Pricing example: Run SSIS packages on Azure-SSIS integration runtime](pricing-examples-ssis-on-azure-ssis-integration-runtime.md)
+- [Pricing example: Transform data in blob store with mapping data flows](pricing-examples-transform-mapping-data-flows.md)
+- [Pricing example: Data integration in Azure Data Factory Managed VNET](pricing-examples-data-integration-managed-vnet.md)
+- [Pricing example: Get delta data from SAP ECC via SAP CDC in mapping data flows](pricing-examples-get-delta-data-from-sap-ecc.md)
data-factory Pricing Examples S3 To Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-examples-s3-to-blob.md
+
+ Title: "Pricing example: Copy data from AWS S3 to Azure Blob storage hourly"
+description: This article shows how to estimate pricing for Azure Data Factory to copy data from AWS S3 to Azure Blob storage every hour for 30 days.
++++++ Last updated : 09/22/2022++
+# Pricing example: Copy data from AWS S3 to Azure Blob storage hourly
++
+In this scenario, you want to copy data from AWS S3 to Azure Blob storage on an hourly schedule for 8 hours per day, for 30 days.
+
+The prices used in this example are hypothetical and aren't intended to imply actual pricing. Read/write and monitoring costs aren't shown since they're typically negligible and won't impact overall costs significantly. Activity runs are also rounded to the nearest 1000 in pricing calculator estimates.
+
+Refer to the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/) for more specific scenarios and to estimate your future costs to use the service.
+
+## Configuration
+
+To accomplish the scenario, you need to create a pipeline with the following items:
+
+- A copy activity that moves 10 GB of data from AWS S3 to Azure Blob storage, estimated to run for 2-3 hours, with the DIU setting left as Auto.
+- A schedule trigger to execute the pipeline every hour for 8 hours every day. When you want to run a pipeline, you can either [trigger it immediately or schedule it](concepts-pipeline-execution-triggers.md). In addition to the pipeline itself, each trigger instance counts as a single Activity run.
+
+ :::image type="content" source="media/pricing-concepts/scenario1.png" alt-text="Diagram shows a pipeline with a schedule trigger.":::
+
+## Costs estimation
+
+| **Operations** | **Types and Units** |
+| | |
+| Run Pipeline | 2 Activity runs per execution (1 for the trigger to run, 1 for activity to run) |
+| Copy Data Assumption: execution hours **per run** | 0.5 hours \* 4 Azure Integration Runtime (default DIU setting = 4) For more information on data integration units and optimizing copy performance, see [this article](copy-activity-performance.md) |
+| Total execution hours: 8 runs per day for 30 days | 240 runs * 2 DIU-hours/run = 480 DIU-hours |
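A quick sketch of how the per-run assumptions above scale to the 30-day quantities (quantities only; the dollar figure below comes from the pricing calculator):

```python
runs = 8 * 30                   # 8 hourly runs per day for 30 days = 240 runs

activity_runs = 2 * runs        # 480 activity runs (1 trigger run + 1 copy activity run each)
diu_hours     = runs * 0.5 * 4  # 480 DIU-hours (0.5 hours per run at 4 DIUs)

print(runs, activity_runs, diu_hours)
```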
+
+## Pricing calculator example
+
+**Total scenario pricing for 30 days: $122.00**
++
+## Next steps
+
+- [Pricing example: Copy data and transform with Azure Databricks hourly for 30 days](pricing-examples-copy-transform-azure-databricks.md)
+- [Pricing example: Copy data and transform with dynamic parameters hourly for 30 days](pricing-examples-copy-transform-dynamic-parameters.md)
+- [Pricing example: Run SSIS packages on Azure-SSIS integration runtime](pricing-examples-ssis-on-azure-ssis-integration-runtime.md)
+- [Pricing example: Using mapping data flow debug for a normal workday](pricing-examples-mapping-data-flow-debug-workday.md)
+- [Pricing example: Transform data in blob store with mapping data flows](pricing-examples-transform-mapping-data-flows.md)
+- [Pricing example: Data integration in Azure Data Factory Managed VNET](pricing-examples-data-integration-managed-vnet.md)
+- [Pricing example: Get delta data from SAP ECC via SAP CDC in mapping data flows](pricing-examples-get-delta-data-from-sap-ecc.md)
data-factory Pricing Examples Ssis On Azure Ssis Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-examples-ssis-on-azure-ssis-integration-runtime.md
+
+ Title: "Pricing example: Run SSIS packages on Azure-SSIS integration runtime"
+description: This article shows how to estimate pricing for Azure Data Factory to run SSIS packages with the Azure-SSIS integration runtime.
++++++ Last updated : 09/22/2022++
+# Pricing example: Run SSIS packages on Azure-SSIS integration runtime
++
+In this article, you'll see how to estimate the cost of using Azure Data Factory to run SSIS packages with the Azure-SSIS integration runtime.
+
+The prices used in this example are hypothetical and are not intended to imply actual pricing. Read/write and monitoring costs are not shown since they are typically negligible and will not impact overall costs significantly. Activity runs are also rounded to the nearest 1000 in pricing calculator estimates.
+
+Refer to the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/) for more specific scenarios and to estimate your future costs to use the service.
+
+## Pricing model for Azure-SSIS integration runtime
+
+The Azure-SSIS integration runtime (IR) is a specialized cluster of Azure virtual machines (VMs) for SSIS package executions in Azure Data Factory (ADF). When you provision it, it's dedicated to you, so it's charged just like any other dedicated Azure VM for as long as you keep it running, regardless of whether you use it to execute SSIS packages. You'll see the hourly estimate of its running cost on its setup pane in the ADF portal, for example:
++
+### Azure Hybrid Benefit (AHB)
+
+Azure Hybrid Benefit (AHB) can reduce the cost of your Azure-SSIS integration runtime (IR). Using the AHB, you can provide your own SQL license, which reduces the cost of the Azure-SSIS IR from $1.938/hour to $1.158/hour. To learn more about AHB, visit the [Azure Hybrid Benefit (AHB)](https://azure.microsoft.com/pricing/hybrid-benefit/) article.
++
+## Cost Estimation
+
+In the above example, if you keep your Azure-SSIS IR running for 2 hours, using AHB to bring your own SQL license, you'll be charged: **2 (hours) x US$1.158/hour = US$2.316**.
+
+To manage your Azure-SSIS IR running cost, you can scale down your VM size, scale in your cluster size, or bring your own SQL Server license via the Azure Hybrid Benefit (AHB) option for significant savings (see [Azure-SSIS IR pricing](https://azure.microsoft.com/pricing/details/data-factory/ssis/)). You can also start and stop your Azure-SSIS IR on demand, just in time to process your SSIS workloads (see [Reconfigure Azure-SSIS IR](manage-azure-ssis-integration-runtime.md#to-reconfigure-an-azure-ssis-ir) and [Schedule Azure-SSIS IR](how-to-schedule-azure-ssis-integration-runtime.md)).
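For example, the following sketch compares a month of always-on running against a hypothetical start/stop schedule of 8 hours per working day, using the hypothetical $1.158/hour AHB rate above:

```python
rate_per_hour = 1.158                        # hypothetical Azure-SSIS IR rate with AHB

always_on    = 24 * 30 * rate_per_hour       # IR left running for the whole month
office_hours = 8 * 22 * rate_per_hour        # started and stopped around ~22 working days

print(f"${always_on:.2f} always on vs ${office_hours:.2f} with start/stop per month")
```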
+
+## Next steps
+
+- [Pricing example: Copy data from AWS S3 to Azure Blob storage hourly for 30 days](pricing-examples-s3-to-blob.md)
+- [Pricing example: Copy data and transform with Azure Databricks hourly for 30 days](pricing-examples-copy-transform-azure-databricks.md)
+- [Pricing example: Copy data and transform with dynamic parameters hourly for 30 days](pricing-examples-copy-transform-dynamic-parameters.md)
+- [Pricing example: Using mapping data flow debug for a normal workday](pricing-examples-mapping-data-flow-debug-workday.md)
+- [Pricing example: Transform data in blob store with mapping data flows](pricing-examples-transform-mapping-data-flows.md)
+- [Pricing example: Data integration in Azure Data Factory Managed VNET](pricing-examples-data-integration-managed-vnet.md)
+- [Pricing example: Get delta data from SAP ECC via SAP CDC in mapping data flows](pricing-examples-get-delta-data-from-sap-ecc.md)
data-factory Pricing Examples Transform Mapping Data Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-examples-transform-mapping-data-flows.md
+
+ Title: "Pricing example: Transform data in blob store with mapping data flows"
+description: This article shows how to estimate pricing for Azure Data Factory to transform data in a blob store with mapping data flows.
++++++ Last updated : 09/22/2022++
+# Pricing example: Transform data in blob store with mapping data flows
++
+In this scenario, you want to transform data in Blob Store visually in ADF mapping data flows on an hourly schedule for 30 days.
+
+The prices used in this example are hypothetical and are not intended to imply actual pricing. Read/write and monitoring costs are not shown since they are typically negligible and will not impact overall costs significantly. Activity runs are also rounded to the nearest 1000 in pricing calculator estimates.
+
+Refer to the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/) for more specific scenarios and to estimate your future costs to use the service.
+
+## Configuration
+
+To accomplish the scenario, you need to create a pipeline with the following items:
+
+- A Data Flow activity with the transformation logic.
+- An input dataset for the data on Azure Storage.
+- An output dataset for the data on Azure Storage.
+- A schedule trigger to execute the pipeline every hour. When you want to run a pipeline, you can either [trigger it immediately or schedule it](concepts-pipeline-execution-triggers.md). In addition to the pipeline itself, each trigger instance counts as a single Activity run.
+
+## Costs estimation
+
+| **Operations** | **Types and Units** |
+| | |
+| Run Pipeline | 2 Activity runs per execution (1 for trigger run, 1 for activity runs) |
+| Data Flow Assumptions: execution time per run = 10 min + 10 min TTL | 10 \* 16 cores of General Compute with TTL of 10 |
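These assumptions scale to the 30-day quantities in the sketch below; priced at the hypothetical $0.274 per general purpose vCore-hour used elsewhere in these examples, the result lands close to the calculator total shown in the next section.

```python
executions = 24 * 30                              # hourly trigger for 30 days = 720 runs
billable_minutes = 10 + 10                        # execution time plus TTL per run

vcore_hours = executions * billable_minutes / 60 * 16   # 3,840 vCore-hours on 16 cores
approx_cost = vcore_hours * 0.274                 # hypothetical general purpose vCore-hour rate
print(f"{vcore_hours:.0f} vCore-hours, about ${approx_cost:.2f}")   # about $1052
```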
+
+## Pricing calculator example
+
+**Total scenario pricing for 30 days: $1051.28**
+++
+## Next steps
+
+- [Pricing example: Copy data from AWS S3 to Azure Blob storage hourly for 30 days](pricing-examples-s3-to-blob.md)
+- [Pricing example: Copy data and transform with Azure Databricks hourly for 30 days](pricing-examples-copy-transform-azure-databricks.md)
+- [Pricing example: Copy data and transform with dynamic parameters hourly for 30 days](pricing-examples-copy-transform-dynamic-parameters.md)
+- [Pricing example: Run SSIS packages on Azure-SSIS integration runtime](pricing-examples-ssis-on-azure-ssis-integration-runtime.md)
+- [Pricing example: Using mapping data flow debug for a normal workday](pricing-examples-mapping-data-flow-debug-workday.md)
+- [Pricing example: Data integration in Azure Data Factory Managed VNET](pricing-examples-data-integration-managed-vnet.md)
+- [Pricing example: Get delta data from SAP ECC via SAP CDC in mapping data flows](pricing-examples-get-delta-data-from-sap-ecc.md)
data-factory Tumbling Window Trigger Dependency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tumbling-window-trigger-dependency.md
Previously updated : 09/22/2022 Last updated : 09/27/2022 # Create a tumbling window trigger dependency
You can see the status of the dependencies, and windows for each dependent trigg
A tumbling window trigger will wait on dependencies for _seven days_ before timing out. After seven days, the trigger run will fail.
+> [!NOTE]
+> A tumbling window trigger cannot be cancelled while it is in the **Waiting on dependency** state. The dependent activity must finish before the tumbling window trigger can be cancelled. This is by design to ensure dependent activities can complete once started, and helps reduce the likelihood of unexpected results.
+ For a more visual to view the trigger dependency schedule, select the Gantt view. :::image type="content" source="media/tumbling-window-trigger-dependency/tumbling-window-dependency-09.png" alt-text="Monitor dependencies gantt chart":::
defender-for-iot Architecture Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/architecture-connections.md
Title: OT sensor cloud connection methods - Microsoft Defender for IoT description: Learn about the architecture models available for connecting your sensors to Microsoft Defender for IoT. Previously updated : 03/08/2022 Last updated : 09/11/2022 # OT sensor cloud connection methods This article describes the architectures and methods supported for connecting your Microsoft Defender for IoT OT sensors to the cloud.
-All supported cloud connection methods provide:
+The cloud connection methods described in this article are supported only for OT sensor version 22.x and later. All methods provide:
- **Simple deployment**, requiring no extra installations in your private Azure environment, such as for an IoT Hub
With direct connections
For more information, see [Connect directly](connect-sensors.md#connect-directly).
-## Multi-cloud connections
+## Multicloud connections
You can connect your sensors to the Defender for IoT portal in Azure from other public clouds for OT/IoT management process monitoring.
Depending on your environment configuration, you might connect using one of the
- A site-to-site VPN over the internet.
-For more information, see [Connect via multi-cloud vendors](connect-sensors.md#connect-via-multi-cloud-vendors).
+For more information, see [Connect via multicloud vendors](connect-sensors.md#connect-via-multicloud-vendors).
## Working with a mixture of sensor software versions
defender-for-iot Sample Connectivity Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/best-practices/sample-connectivity-models.md
This article provides sample network models for Microsoft Defender for IoT senso
The following diagram shows an example of a ring network topology, in which each switch or node connects to exactly two other switches, forming a single continuous pathway for the traffic. ## Sample: Linear bus and star topology In a star network, every host is connected to a central hub. In its simplest form, one central hub acts as a conduit to transmit messages. In the following example, lower switches aren't monitored, and traffic that remains local to these switches won't be seen. Devices might be identified based on ARP messages, but connection information will be missing. ## Sample: Multi-layer, multi-tenant network
The following diagram is a general abstraction of a multilayer, multitenant netw
Typically, NTA sensors are deployed in layers 0 to 3 of the OSI model. ## Next steps
defender-for-iot Connect Sensors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/connect-sensors.md
Title: Connect OT sensors to Microsoft Defender for IoT in the cloud description: Learn how to connect your Microsoft Defender for IoT OT sensors to the cloud Previously updated : 06/02/2022 Last updated : 09/11/2022 # Connect your OT sensors to the cloud
-This article describes how to connect your sensors to the Defender for IoT portal in Azure.
+This article describes how to connect your OT network sensors to the Defender for IoT portal in Azure, for OT sensor software versions 22.x and later.
For more information about each connection method, see [Sensor connection methods](architecture-connections.md).
+## Prerequisites
+
+To use the connection methods described in this article, you must have an OT network sensor with software version 22.x or later.
+
+For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md).
## Choose a sensor connection method
Use this section to help determine which connection method is right for your org
|- You require private connectivity between your sensor and Azure, <br>- Your site is connected to Azure via ExpressRoute, or <br>- Your site is connected to Azure over a VPN | **[Connect via an Azure proxy](#connect-via-an-azure-proxy)** | |- Your sensor needs a proxy to reach from the OT network to the cloud, or <br>- You want multiple sensors to connect to Azure through a single point | **[Connect via proxy chaining](#connect-via-proxy-chaining)** | |- You want to connect your sensor to Azure directly | **[Connect directly](#connect-directly)** |
-|- You have sensors hosted in multiple public clouds | **[Connect via multi-cloud vendors](#connect-via-multi-cloud-vendors)** |
+|- You have sensors hosted in multiple public clouds | **[Connect via multicloud vendors](#connect-via-multicloud-vendors)** |
## Connect via an Azure proxy
Before you start, make sure that you have:
- A proxy server resource, with firewall permissions to access Microsoft cloud services. The procedure described in this article uses a Squid server hosted in Azure. -- Outbound HTTPS traffic on port 443 to the following hostnames:-
- - **IoT Hub**: `*.azure-devices.net`
- - **Blob storage**: `*.blob.core.windows.net`
- - **EventHub**: `*.servicebus.windows.net`
- - **Microsoft Download Center**: `download.microsoft.com`
+- Outbound HTTPS traffic on port 443 enabled to the required endpoints for Defender for IoT. Download the list of required endpoints from the **Sites and sensors** page: Select an OT sensor with a supported software version, or a site with one or more supported sensors. And then select **More actions** > **Download endpoint details**.
> [!IMPORTANT] > Microsoft Defender for IoT does not offer support for Squid or any other proxy services. It is the customer's responsibility to set up and maintain the proxy service.
This procedure describes how to install and configure a connection between your
sudo systemctl enable squid ```
-1. Connect your proxy to Defender for IoT. Enable outbound HTTP traffic on port 443 from the sensor to the following Azure hostnames:
+1. Connect your proxy to Defender for IoT:
+
+ 1. Download the list of required endpoints from the **Sites and sensors** page: Select an OT sensor with a supported software version, or a site with one or more supported sensors. And then select **More actions** > **Download endpoint details**.
+ 1. Enable outbound HTTPS traffic on port 443 from the sensor to each of the required endpoints for Defender for IoT.
- - **IoT Hub**: `*.azure-devices.net`
- - **Threat Intelligence**: `*.blob.core.windows.net`
- - **Eventhub**: `*.servicebus.windows.net`
- - **Microsoft download site**: `download.microsoft.com`
> [!IMPORTANT] > Some organizations must define firewall rules by IP addresses. If this is true for your organization, it's important to know that the Azure public IP ranges are updated weekly.
This procedure describes how to install and configure a connection between your
This section describes what you need to configure a direct sensor connection to Defender for IoT in Azure. For more information, see [Direct connections](architecture-connections.md#direct-connections).
-1. Ensure that your sensor can access the cloud using HTTP on port 443 to the following Microsoft domains:
+1. Download the list of required endpoints from the **Sites and sensors** page on the Azure portal. Select an OT sensor with a supported software version, or a site with one or more supported sensors. And then select **More actions** > **Download endpoint details**.
- - **IoT Hub**: `*.azure-devices.net`
- - **Threat Intelligence**: `*.blob.core.windows.net`
- - **Eventhub**: `*.servicebus.windows.net`
- - **Microsoft Download Center**: `download.microsoft.com`
+1. Ensure that your sensor can access the cloud using HTTPS on port 443 to each of the endpoints in the downloaded list.
1. Azure public IP addresses are updated weekly. If you must define firewall rules based on IP addresses, make sure to download the new JSON file each week and make the required changes on your site to correctly identify services running in Azure. You'll need the updated IP ranges for **AzureIoTHub**, **Storage**, and **EventHub**. See the [latest IP ranges](https://www.microsoft.com/en-us/download/details.aspx?id=56519).
-## Connect via multi-cloud vendors
+## Connect via multicloud vendors
-This section describes how to connect your sensor to Defender for IoT in Azure from sensors deployed in one or more public clouds. For more information, see [Multi-cloud connections](architecture-connections.md#multi-cloud-connections).
+This section describes how to connect your sensor to Defender for IoT in Azure from sensors deployed in one or more public clouds. For more information, see [Multicloud connections](architecture-connections.md#multicloud-connections).
### Prerequisites
Before you start:
- Make sure that you have a sensor deployed in a public cloud, such as AWS or Google Cloud, and configured to monitor SPAN traffic. -- Choose the multi-cloud connectivity method that's right for your organization:
+- Choose the multicloud connectivity method that's right for your organization:
Use the following flow chart to determine which connectivity method to use:
- :::image type="content" source="media/architecture-connections/multi-cloud-flow-chart.png" alt-text="Flow chart to determine which connectivity method to use.":::
+ :::image type="content" source="media/architecture-connections/multicloud-flow-chart.png" alt-text="Flow chart to determine which connectivity method to use.":::
- **Use public IP addresses over the internet** if you don't need to exchange data using private IP addresses
If you're an existing customer with a production deployment and sensors connecte
- Check the active resources in your account and make sure there are no other services connected to your IoT Hub.
- - If you're running a hybrid environment with multiple sensor versions, make sure any sensors with software version 22.1.x can connect to Azure. Use firewall rules that allow outbound HTTPS traffic on port 443 to the following hostnames:
+ - If you're running a hybrid environment with multiple sensor versions, make sure any sensors with software version 22.1.x can connect to Azure. Use firewall rules that allow outbound HTTPS traffic on port 443 to each of the required endpoints.
- - **IoT Hub**: `*.azure-devices.net`
- - **Threat Intelligence**: `*.blob.core.windows.net`
- - **EventHub**: `*.servicebus.windows.net`
- - **Microsoft Download Center**: `download.microsoft.com`
+ Find the list of required endpoints for Defender for IoT from the **Sites and sensors** page on the Azure portal. Select an OT sensor with a supported software version, or a site with one or more supported sensors. And then select **More actions** > **Download endpoint details**.
While you'll need to migrate your connections before the [legacy version reaches end of support](release-notes.md#versioning-and-support-for-on-premises-software-versions), you can currently deploy a hybrid network of sensors, including legacy software versions with their IoT Hub connections, and sensors with the connection methods described in this article.
defender-for-iot How To Forward Alert Information To Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-forward-alert-information-to-partners.md
Send mail that includes the alert information. You can enter one email address p
1. Select **Save**.
+>[!NOTE]
+>Make sure you also add an SMTP server under **System Settings** > **Integrations** > **SMTP Server** so that the email forwarding rule can function.
+ ### Syslog server actions The following formats are supported:
defender-for-iot How To Manage Device Inventory For Organizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-device-inventory-for-organizations.md
This procedure describes how to view detected devices in the **Device inventory*
|**Modify columns shown** | Select **Edit columns** :::image type="icon" source="media/how-to-manage-device-inventory-on-the-cloud/edit-columns-icon.png" border="false":::. In the **Edit columns** pane:<br><br> - Select the **+ Add Column** button to add new columns to the grid.<br> - Drag and drop fields to change the columns order.<br>- To remove a column, select the **Delete** :::image type="icon" source="media/how-to-manage-device-inventory-on-the-cloud/trashcan-icon.png" border="false"::: icon to the right.<br>- To reset the columns to their default settings, select **Reset** :::image type="icon" source="media/how-to-manage-device-inventory-on-the-cloud/reset-icon.png" border="false":::. <br><br>Select **Save** to save any changes made. | | **Group devices** | From the **Group by** above the gird, select either **Type** or **Class** to group the devices shown. Inside each group, devices retain the same column sorting. To remove the grouping, select **No grouping**. |
+ For more information, see [Device inventory column reference](#device-inventory-column-reference).
+ 1. Select a device row to view more details about that device. Initial details are shown in a pane on the right, where you can also select **View full details** to drill down more. For example: :::image type="content" source="media/how-to-manage-device-inventory-on-the-cloud/device-information-window.png" alt-text="Screenshot of a device details pane and the View full details button in the Azure portal." lightbox="media/how-to-manage-device-inventory-on-the-cloud/device-information-window.png":::
-For more information, see [Device inventory column reference](#device-inventory-column-reference).
### Identify devices that aren't connecting successfully
defender-for-iot How To Manage Individual Sensors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-individual-sensors.md
You'll receive an error message if the activation file couldn't be uploaded. The
- **For locally connected sensors**: The activation file isn't valid. If the file isn't valid, go to [Defender for IoT in the Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started). On the **Sensor Management** page, select the sensor with the invalid file, and download a new activation file. -- **For cloud-connected sensors**: The sensor can't connect to the internet. Check the sensor's network configuration. If your sensor needs to connect through a web proxy to access the internet, verify that your proxy server is configured correctly on the **Sensor Network Configuration** screen. Verify that \*.azure-devices.net:443 is allowed in the firewall and/or proxy. If wildcards are not supported or you want more control, the FQDN for your specific endpoint (either a sensor, or for legacy connections, an IoT hub) should be opened in your firewall and/or proxy. For more information, see [Reference - IoT Hub endpoints](../../iot-hub/iot-hub-devguide-endpoints.md).
+- **For cloud-connected sensors**: The sensor can't connect to the internet. Check the sensor's network configuration. If your sensor needs to connect through a web proxy to access the internet, verify that your proxy server is configured correctly on the **Sensor Network Configuration** screen. Verify that the required endpoints are allowed in the firewall and/or proxy.
+
+   For OT sensor versions 22.x, download the list of required endpoints from the **Sites and sensors** page in the Azure portal. Select an OT sensor with a supported software version, or a site with one or more supported sensors, and then select **More actions** > **Download endpoint details**. For sensors with earlier versions, see [Sensor access to Azure portal](how-to-set-up-your-network.md#sensor-access-to-azure-portal). A proxy-aware connectivity check is sketched after this list.
- **For cloud-connected sensors**: The activation file is valid but Defender for IoT rejected it. If you can't resolve this problem, you can download another activation from the **Sites and Sensors** page in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/Getting_Started). If this doesn't work, contact Microsoft Support.
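
A quick way to test the proxy path described above is to issue an HTTPS request to one of the required endpoints from a machine on the same network segment as the sensor. The following PowerShell sketch is illustrative only: the proxy address is a placeholder, and `download.microsoft.com` is used simply because it's one of the documented endpoints; substitute values from your own downloaded endpoint list.

```powershell
# Sketch: verify HTTPS (not just TCP) reachability to a required endpoint,
# optionally through the same web proxy the sensor is configured to use.
# The proxy URI below is a placeholder; replace it with your own proxy settings.
$endpoint = 'https://download.microsoft.com'
$proxyUri = 'http://proxy.contoso.local:8080'

try {
    Invoke-WebRequest -Uri $endpoint -Method Head -Proxy $proxyUri -UseBasicParsing -TimeoutSec 15 | Out-Null
    "Reachable through proxy: $endpoint"
}
catch {
    "Failed to reach $endpoint through ${proxyUri}: $($_.Exception.Message)"
}
```

If the request fails here but succeeds without the `-Proxy` parameter, the problem is more likely in the proxy configuration than in the upstream firewall rules.
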
defender-for-iot How To Manage Sensors On The Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-sensors-on-the-cloud.md
Title: Manage sensors with Defender for IoT in the Azure portal description: Learn how to onboard, view, and manage sensors with Defender for IoT in the Azure portal. Previously updated : 08/08/2022 Last updated : 09/08/2022
This article describes how to view and manage sensors with [Defender for IoT in
This procedure describes how to use the Azure portal to contact vendors for pre-configured appliances, or how to download software for you to install on your own appliances. 1. In the Azure portal, go to **Defender for IoT** > **Getting started** > **Sensor**.
-
+ 1. Do one of the following: - To buy a pre-configured appliance, select **Contact** under **Buy preconfigured appliance**. This opens an email to [hardware.sales@arrow.com](mailto:hardware.sales@arrow.com) with a template request for Defender for IoT appliances. For more information, see [Pre-configured physical appliances for OT monitoring](ot-pre-configured-appliances.md).
Use the options on the **Sites and sensor** page and a sensor details page to do
|:::image type="icon" source="medi#install-the-sensor-software). | |:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-edit.png" border="false"::: **Edit automatic threat intelligence updates** | Individual, OT sensors only. <br><br>Available from the **...** options menu or a sensor details page. <br><br>Select **Edit** and then toggle the **Automatic Threat Intelligence Updates (Preview)** option on or off as needed. Select **Submit** to save your changes. | |:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-delete.png" border="false"::: **Delete a sensor** | For individual sensors only, from the **...** options menu or a sensor details page. |
+| :::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-diagnostics.png" border="false"::: **Send diagnostic files to support** | Individual, locally managed OT sensors only. <br><br>Available from the **...** options menu. <br><br>For more information, see [Upload a diagnostics log for support (Public preview)](#upload-a-diagnostics-log-for-support-public-preview).|
| **Download SNMP MIB file** | Available from the **Sites and sensors** toolbar **More actions** menu. <br><br>For more information, see [Set up SNMP MIB monitoring](how-to-set-up-snmp-mib-monitoring.md).| | **Recover an on-premises management console password** | Available from the **Sites and sensors** toolbar **More actions** menu. <br><br>For more information, see [Manage the on-premises management console](how-to-manage-the-on-premises-management-console.md). |
-| :::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-diagnostics.png" border="false"::: **Send diagnostic files to support** | Individual, locally managed OT sensors only. <br><br>Available from the **...** options menu. <br><br>For more information, see [Upload a diagnostics log for support (Public preview)](#upload-a-diagnostics-log-for-support-public-preview).|
+| **Download endpoint details** (Public preview) | Available from the **Sites and sensors** toolbar **More actions** menu, for OT sensor versions 22.x only. <br><br>Download the list of endpoints that your OT network sensors must be able to reach as secure endpoints. Make sure that HTTPS traffic is enabled over port 443 to the listed endpoints for your sensor to connect to Azure. Outbound allow rules are defined once for all OT sensors onboarded to the same subscription.<br><br>To enable this option, select a sensor with a supported software version, or a site with one or more sensors with supported versions. |
## Reactivate an OT sensor
defender-for-iot How To Set Up Your Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-set-up-your-network.md
Use the following tables to ensure that required firewalls are open on your work
| Protocol | Transport | In/Out | Port | Purpose | Source | Destination | |--|--|--|--|--|--|--|
-| HTTPS | TCP | Out | 443 | Access to Azure | Sensor | `*.azure-devices.net`<br> `*.blob.core.windows.net`<br> `*.servicebus.windows.net`|
-| HTTPS | TCP | Out | 443 | Remote sensor upgrades from the Azure portal | Sensor| `download.microsoft.com`|
+| HTTPS | TCP | Out | 443 | Access to Azure | Sensor |**For OT sensor versions 22.x**: Download the list from the **Sites and sensors** page in the Azure portal. Select an OT sensor with software versions 22.x or higher, or a site with one or more supported sensor versions. Then, select **More actions** > **Download endpoint details**. For more information, see [Sensor management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal).<br><br>**For OT sensor versions 10.x**: `*.azure-devices.net`<br> `*.blob.core.windows.net`<br> `*.servicebus.windows.net`|
+| HTTPS | TCP | Out | 443 | Remote sensor updates from the Azure portal | Sensor| `download.microsoft.com`|
+ ### Sensor access to the on-premises management console
defender-for-iot Integrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrate-overview.md
Integrate Microsoft Defender for Iot with partner services to view partner data
|Name |Description |Support scope |Supported by |Learn more | ||||||
-|**Defender for IoT data connector** | Displays Defender for IoT data in Microsoft Sentinel, supporting end-to-end SOC investigations for Defender for IoT alerts. | - OT and Enterprise IoT networks <br>- Cloud-connected sensors | Microsoft | [Integrate Microsoft Sentinel and Microsoft Defender for IoT](/azure/sentinel/iot-solution?tabs=use-out-of-the-box-analytics-rules-recommended) |
+|**Defender for IoT data connector in Sentinel** | Displays Defender for IoT data in Microsoft Sentinel, supporting end-to-end SOC investigations for Defender for IoT alerts. | - OT and Enterprise IoT networks <br>- Cloud-connected sensors | Microsoft | [Integrate Microsoft Sentinel and Microsoft Defender for IoT](/azure/sentinel/iot-solution?tabs=use-out-of-the-box-analytics-rules-recommended) |
+|**Sentinel** | Send Defender for IoT alerts to Sentinel. | - OT networks <br>- Locally managed sensors and on-premises management consoles | Microsoft | |
## Palo Alto
defender-for-iot Integrate With Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrate-with-active-directory.md
You can associate Active Directory groups defined here with specific permission
| Domain controller port | Define the port on which your LDAP is configured. | | Primary domain | Set the domain name (for example, `subdomain.domain.com`) and the connection type according to your LDAP configuration. | | Active Directory groups | Enter the group names that are defined in your Active Directory configuration on the LDAP server. You can enter a group name that you'll associate with Admin, Security Analyst and Read-only permission levels. Use these groups when creating new sensor users.|
- | Trusted domains | To add a trusted domain, add the domain name and the connection type of a trusted domain. <br />You can configure trusted domains only for users who were defined under users. |
+ | Trusted endpoints | To add a trusted domain, add the domain name and the connection type of a trusted domain. <br />You can configure trusted endpoints only for users who were defined under users. |
### Active Directory groups for the on-premises management console
defender-for-iot Ot Pre Configured Appliances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-pre-configured-appliances.md
You can [order](mailto:hardware.sales@arrow.com) any of the following preconfigu
||||| |**C5600** | [HPE ProLiant DL360](appliance-catalog/hpe-proliant-dl360.md) | **Max bandwidth**: 3Gbp/s <br>**Max devices**: 12,000 <br> 32 Cores/32G RAM/5.6TB | **Mounting**: 1U <br>**Ports**: 15x RJ45 or 8x SFP (OPT) | |**E1800** | [HPE ProLiant DL20/DL20 Plus](appliance-catalog/hpe-proliant-dl20-plus-enterprise.md) <br> (4SFF) | **Max bandwidth**: 1 Gbp/s<br>**Max devices**: 10,000 <br> 8 Cores/32G RAM/1.8TB | **Mounting**: 1U <br>**Ports**: 8x RJ45 or 6x SFP (OPT) |
-|**E500** | [Dell Edge 5200](appliance-catalog/dell-edge-5200.md) <br> (Rugged MIL-STD-810G) | **Max bandwidth**: 1 Gbp/s<br>**Max devices**: 10,000 <br> 8 Cores/32G RAM/512GB | **Mounting**: Wall Mount<br>**Ports**: 3x RJ45 |
+|**E1800** | [Dell Edge 5200](appliance-catalog/dell-edge-5200.md) <br> (Rugged MIL-STD-810G) | **Max bandwidth**: 1 Gbp/s<br>**Max devices**: 10,000 <br> 8 Cores/32G RAM/512GB | **Mounting**: Wall Mount<br>**Ports**: 3x RJ45 |
|**L500** | [HPE ProLiant DL20/DL20 Plus](appliance-catalog/hpe-proliant-dl20-plus-smb.md) <br> (NHP 2LFF) | **Max bandwidth**: 200Mbp/s<br>**Max devices**: 1,000 <br> 4 Cores/8G RAM/500GB | **Mounting**: 1U<br>**Ports**: 4x RJ45 | |**L100** | [YS-Techsystems YS-FIT2](appliance-catalog/ys-techsystems-ys-fit2.md) <br>(Rugged MIL-STD-810G) | **Max bandwidth**: 10Mbp/s <br>**Max devices**: 100 <br> 4 Cores/8G RAM/128GB | **Mounting**: DIN/VESA<br>**Ports**: 2x RJ45 |
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
For more information, see the [Microsoft Security Development Lifecycle practice
|Service area |Updates | |||
-|**OT networks** |**Sensor software version 22.2.6**: <br> - Bug fixes and stability improvements <br>- Enhancements to the device type classification algorithm<br><br>- **Microsoft Sentinel integration**: <br>- [Investigation enhancements with IOT device entities](#investigation-enhancements-with-iot-device-entities-in-microsoft-sentinel)<br>- [Updates to the Microsoft Defender for IoT solution](#updates-to-the-microsoft-defender-for-iot-solution-in-microsoft-sentinels-content-hub) |
+|**OT networks** |**All supported OT sensor software versions**: <br>- [Device vulnerabilities from the Azure portal](#device-vulnerabilities-from-the-azure-portal-public-preview)<br>- [Security recommendations for OT networks](#security-recommendations-for-ot-networks-public-preview)<br><br> **All OT sensor software versions 22.x**: [Updates for Azure cloud connection firewall rules](#updates-for-azure-cloud-connection-firewall-rules-public-preview) <br><br>**Sensor software version 22.2.6**: <br> - Bug fixes and stability improvements <br>- Enhancements to the device type classification algorithm<br><br>**Microsoft Sentinel integration**: <br>- [Investigation enhancements with IoT device entities](#investigation-enhancements-with-iot-device-entities-in-microsoft-sentinel)<br>- [Updates to the Microsoft Defender for IoT solution](#updates-to-the-microsoft-defender-for-iot-solution-in-microsoft-sentinels-content-hub)|
-### Investigation enhancements with IOT device entities in Microsoft Sentinel
+### Security recommendations for OT networks (Public preview)
+
+Defender for IoT now provides security recommendations to help customers manage their OT/IoT network security posture. Defender for IoT recommendations help users form actionable, prioritized mitigation plans that address the unique challenges of OT/IoT networks. Use recommendations to lower your network's risk and attack surface.
+
+You can see the following security recommendations from the Azure portal for detected devices across your networks:
+
+- **Review PLC operating mode**. Devices with this recommendation are found with PLCs set to unsecure operating mode states. We recommend setting PLC operating modes to the **Secure Run** state if access to the PLC is no longer required, to reduce the threat of malicious PLC programming.
+
+- **Review unauthorized devices**. Devices with this recommendation must be identified and authorized as part of the network baseline. We recommend taking action to identify any indicated devices. Disconnect any devices from your network that remain unknown even after investigation to reduce the threat of rogue or potentially malicious devices.
+
+Access security recommendations from one of the following locations:
+
+- The **Recommendations** page, which displays all current recommendations across all detected OT devices.
+
+- The **Recommendations** tab on a device details page, which displays all current recommendations for the selected device.
+
+From either location, select a recommendation to drill down further and view lists of all detected OT devices that are currently in a *healthy* or *unhealthy* state, according to the selected recommendation. From the **Unhealthy devices** or **Healthy devices** tab, select a device link to jump to the selected device details page.
+For more information, see [View the device inventory](how-to-manage-device-inventory-for-organizations.md#view-the-device-inventory).
+
+### Device vulnerabilities from the Azure portal (Public preview)
+
+Defender for IoT now provides vulnerability data in the Azure portal for detected OT network devices. Vulnerability data is based on the standards-based vulnerability repository documented at the [US government National Vulnerability Database (NVD)](https://www.nist.gov/programs-projects/national-vulnerability-database-nvd).
+
+Access vulnerability data in the Azure portal from the following locations:
+
+- On a device details page, select the **Vulnerabilities** tab to view current vulnerabilities on the selected device. For example, from the **Device inventory** page, select a specific device and then select **Vulnerabilities**.
+
+ For more information, see [View the device inventory](how-to-manage-device-inventory-for-organizations.md#view-the-device-inventory).
+
+- A new **Vulnerabilities** workbook displays vulnerability data across all monitored OT devices. Use the **Vulnerabilities** workbook to view data like CVE by severity or vendor, and full lists of detected vulnerabilities and vulnerable devices and components.
+
+ Select an item in the **Device vulnerabilities**, **Vulnerable devices**, or **Vulnerable components** tables to view related information in the tables on the right.
+
+ For example:
+
+ :::image type="content" source="media/release-notes/vulnerabilities-workbook.png" alt-text="Screenshot of a Vulnerabilities workbook in Defender for IoT.":::
+
+ For more information, see [Use Azure Monitor workbooks in Microsoft Defender for IoT](workbooks.md).
+
+### Updates for Azure cloud connection firewall rules (Public preview)
+
+OT network sensors connect to Azure to provide alert and device data and sensor health messages, access threat intelligence packages, and more. Connected Azure services include IoT Hub, Blob Storage, Event Hubs, and the Microsoft Download Center.
+
+For OT sensors with software versions 22.x and higher, Defender for IoT now supports increased security when adding outbound allow rules for connections to Azure: you can now define your outbound allow rules without using wildcards.
+
+When defining outbound allow rules to connect to Azure, you'll need to enable HTTPS traffic to each of the required endpoints on port 443. Outbound allow rules are defined once for all OT sensors onboarded to the same subscription.
+
+For supported sensor versions, download the full list of required secure endpoints from the following locations in the Azure portal:
+
+- **A successful sensor registration page**: After onboarding a new OT sensor, version 22.x, the successful registration page now provides instructions for next steps, including a link to the endpoints you'll need to add as secure outbound allow rules on your network. Select the **Download endpoint details** link to download the JSON file.
+
+ For example:
+
+ :::image type="content" source="media/release-notes/download-endpoints.png" alt-text="Screenshot of a successful OT sensor registration page with the download endpoints link.":::
+
+- **The Sites and sensors page**: Select an OT sensor with software versions 22.x or higher, or a site with one or more supported sensor versions. Then, select **More actions** > **Download endpoint details** to download the JSON file. For example:
+
+ :::image type="content" source="media/release-notes/download-endpoints-sites-sensors.png" alt-text="Screenshot of the Sites and sensors page with the download endpoint details link.":::
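
Before rolling the allow rules out, it can help to confirm that each listed endpoint is actually reachable over TCP 443 from the network where the sensor sits. The sketch below is a minimal example and makes assumptions about the downloaded file: the file name (`sensor-endpoints.json`) and the `Endpoint` property are placeholders, so open the JSON you downloaded and adjust the property names to match its actual layout. `Test-NetConnection` is available on Windows PowerShell.

```powershell
# Sketch: read the downloaded endpoint list and probe each host over TCP 443.
# 'sensor-endpoints.json' and the 'Endpoint' property are assumptions about the
# downloaded file's layout -- inspect your file and adjust as needed.
$entries = Get-Content -Path .\sensor-endpoints.json -Raw | ConvertFrom-Json

foreach ($entry in $entries) {
    # Strip any scheme or path so Test-NetConnection gets a bare host name.
    $hostName = $entry.Endpoint -replace '^https?://', '' -replace '/.*$', ''
    $result   = Test-NetConnection -ComputerName $hostName -Port 443 -WarningAction SilentlyContinue
    '{0}: {1}' -f $hostName, $(if ($result.TcpTestSucceeded) { 'reachable on 443' } else { 'blocked' })
}
```
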
+
+For more information, see:
+
+- [Tutorial: Get started with Microsoft Defender for IoT for OT security](tutorial-onboarding.md)
+- [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md)
+- [Networking requirements](how-to-set-up-your-network.md#sensor-access-to-azure-portal)
+
+### Investigation enhancements with IoT device entities in Microsoft Sentinel
Defender for IoT's integration with Microsoft Sentinel now supports an IoT device entity page. When investigating incidents and monitoring IoT security in Microsoft Sentinel, you can now identify your most sensitive devices and jump directly to more details on each device entity page.
defender-for-iot Tutorial Getting Started Eiot Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-getting-started-eiot-sensor.md
This procedure describes how to prepare your physical appliance or VM to install
| DNS | TCP/UDP | In/Out | 53 | Address resolution |
-1. Make sure that your physical appliance or VM can access the cloud using HTTP on port 443 to the following Microsoft domains:
+1. Make sure that your physical appliance or VM can access the cloud using HTTPS on port 443 to the following Microsoft endpoints:
- **EventHub**: `*.servicebus.windows.net` - **Storage**: `*.blob.core.windows.net`
This procedure describes how to prepare your physical appliance or VM to install
- **IoT Hub**: `*.azure-devices.net` > [!TIP]
- > You can also download and add the [Azure public IP ranges](https://www.microsoft.com/download/details.aspx?id=56519) so your firewall will allow the Azure domains that are specified above, along with their region.
+ > You can also download and add the [Azure public IP ranges](https://www.microsoft.com/download/details.aspx?id=56519) so your firewall will allow the Azure endpoints that are specified above, along with their region.
> > The Azure public IP ranges are updated weekly. New ranges appearing in the file will not be used in Azure for at least one week. To use this option, download the new json file every week and perform the necessary changes at your site to correctly identify services running in Azure.
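
If you take the IP-ranges approach from the tip above, a short script can narrow the downloaded file to the services and region you care about. This is a sketch only: it assumes the standard layout of the Azure IP Ranges and Service Tags file (a top-level `values` array with `name` and `properties.addressPrefixes` on each entry) and a local copy saved as `ServiceTags_Public.json`; check the file you actually download before relying on these property names.

```powershell
# Sketch: filter the downloaded Azure IP Ranges and Service Tags file to the
# services the Enterprise IoT sensor needs, for one region.
# The file name, schema assumptions, and region value are placeholders.
$tags   = Get-Content -Path .\ServiceTags_Public.json -Raw | ConvertFrom-Json
$region = 'westeurope'

$tags.values |
    Where-Object { $_.name -match '^(EventHub|Storage|AzureIoTHub)\.' -and $_.name -match $region } |
    ForEach-Object { '{0}: {1} address prefixes' -f $_.name, $_.properties.addressPrefixes.Count }
```
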
defender-for-iot Tutorial Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-onboarding.md
Last updated 07/11/2022
# Tutorial: Get started with Microsoft Defender for IoT for OT security
-This tutorial describes how to set up your network for OT system security monitoring, using a virtual, cloud-connected sensor, on a virtual machine (VM), using a trial subscription of Microsoft Defender for IoT.
+This tutorial describes how to set up your network for OT system security monitoring, using a virtual, cloud-connected sensor, on a virtual machine (VM), using a trial subscription of Microsoft Defender for IoT.
> [!NOTE] > If you're looking to set up security monitoring for enterprise IoT systems, see [Tutorial: Get started with Enterprise IoT](tutorial-getting-started-eiot-sensor.md) instead.
This procedure describes how to configure a SPAN port using a workaround with VM
1. Connect to the sensor, and verify that mirroring works.
-## Verify cloud connections
-
-This tutorial describes how to create a cloud-connected sensor, connecting directly to the Defender for IoT on the cloud.
-
-Before continuing, make sure that your sensor can access the cloud using HTTP on port 443 to the following Microsoft domains:
--- **IoT Hub**: `*.azure-devices.net`-- **Blob Storage**: `*.blob.core.windows.net`-- **Eventhub**: `*.servicebus.windows.net`-- **Microsoft Download Center**: `download.microsoft.com`-
-> [!TIP]
-> Defender for IoT supports other cloud-connection methods, including proxies or multi-cloud vendors. For more information, see [OT sensor cloud connection methods](architecture-connections.md), [Connect your OT sensors to the cloud](connect-sensors.md), [Cloud-connected vs local sensors](architecture.md#cloud-connected-vs-local-sensors).
->
- ## Onboard and activate the virtual sensor Before you can start using your Defender for IoT sensor, you'll need to onboard your new virtual sensor to your Azure subscription, and download the virtual sensor's activation file to activate the sensor.
Before you can start using your Defender for IoT sensor, you'll need to onboard
[!INCLUDE [root-of-trust](includes/root-of-trust.md)] - 1. Save the downloaded activation file in a location that will be accessible to the user signing into the console for the first time.
+ You can also download the file manually by selecting the relevant link in the **Activate your sensor** box. You'll use this file to activate your sensor, as described [below](#activate-your-sensor).
+
+1. Make sure that your new sensor will be able to successfully connect to Azure. In the **Add outbound allow rules** box, select the **Download endpoint details** link to download a JSON list of the endpoints you must configure as secure endpoints from your sensor. For example:
+
+   :::image type="content" source="media/release-notes/download-endpoints.png" alt-text="Screenshot of the Add outbound allow rules box.":::
+
+   To ensure that your sensor can connect to Azure, configure the listed endpoints as allowed outbound HTTPS traffic over port 443. You'll need to configure these outbound allow rules once for all OT sensors onboarded to the same subscription.
+
+ > [!TIP]
+ > You can also access the list of required endpoints from the **Sites and sensors** page. For more information, see [Sensor management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal).
+ 1. At the bottom left of the page, select **Finish**. You can now see your new sensor listed on the Defender for IoT **Sites and sensors** page. For more information, see [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md).
defender-for-iot Update Ot Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/update-ot-software.md
When downloading your update files from the Azure portal, youΓÇÖll see the optio
Make sure to select the file that matches your upgrade scenario.
-Updates from legacy versions may require a series of software updates. For example, if you still have a sensor version 3.1.1 installed, you'll need to first upgrade to version 10.5.5, and then to a 22.x version.
+Updates from legacy versions may require a series of software updates: if you still have a sensor version 3.1.1 installed, you'll first need to upgrade to version 10.5.5, and then to a 22.x version. For example:
:::image type="content" source="media/update-ot-software/legacy.png" alt-text="Screenshot of the multiple download options displayed.":::
Updates from legacy versions may require a series of software updates. For examp
For more information, see [OT sensor cloud connection methods](architecture-connections.md) and [Connect your OT sensors to the cloud](connect-sensors.md). -- Make sure that your firewall rules are configured as needed for the new version you're updating to. For example, the new version may require a new or modified firewall rule to support sensor access to the Azure portal. From the **Sites and sensors** page, select **More actions > Download sensor endpoint details** for the full list of domains required to access the Azure portal.
+- Make sure that your firewall rules are configured as needed for the new version you're updating to. For example, the new version may require a new or modified firewall rule to support sensor access to the Azure portal. From the **Sites and sensors** page, select **More actions > Download sensor endpoint details** for the full list of endpoints required to access the Azure portal.
For more information, see [Networking requirements](how-to-set-up-your-network.md#networking-requirements) and [Sensor management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal).
This procedure describes how to manually download the new sensor software versio
1. On your sensor console, select **System Settings** > **Sensor management** > **Software Update**.
-1. On the **Software Update** pane on the right, select **Upload file**, and then navigate to and select your downloaded `legacy-sensor-secured-patcher-<Version number>.tar` file.
+1. On the **Software Update** pane on the right, select **Upload file**, and then navigate to and select your downloaded `legacy-sensor-secured-patcher-<Version number>.tar` file. For example:
:::image type="content" source="media/how-to-manage-individual-sensors/upgrade-pane-v2.png" alt-text="Screenshot of the Software Update pane on the sensor." lightbox="media/how-to-manage-individual-sensors/upgrade-pane-v2.png"::: The update process starts, and may take about 30 minutes. During your upgrade, the system is rebooted twice.
- Sign in when prompted, and then return to the **System Settings** > **Sensor management** > **Software Update** pane to confirm that the new version is listed.
+ Sign in when prompted, and then return to the **System Settings** > **Sensor management** > **Software Update** pane to confirm that the new version is listed. For example:
:::image type="content" source="media/how-to-manage-individual-sensors/defender-for-iot-version.png" alt-text="Screenshot of the upgrade version that appears after you sign in." lightbox="media/how-to-manage-individual-sensors/defender-for-iot-version.png":::
The sensor update process won't succeed if you don't update the on-premises mana
**To update several sensors**:
-1. On the Azure portal, go to **Defender for IoT** > **Updates**. Under **Sensors**, select **Download** and save the file.
+1. On the Azure portal, go to **Defender for IoT** > **Updates**. Under **Sensors**, select **Download** and save the file. For example:
:::image type="content" source="media/how-to-manage-individual-sensors/updates-page.png" alt-text="Screenshot of the Updates page of Defender for IoT." lightbox="media/how-to-manage-individual-sensors/updates-page.png":::
The sensor update process won't succeed if you don't update the on-premises mana
Also make sure that sensors you *don't* want to update are *not* selected.
- Save your changes when you're finished selecting sensors to update.
-
+ Save your changes when you're finished selecting sensors to update. For example:
:::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/automatic-updates.png" alt-text="Screenshot of on-premises management console with Automatic Version Updates selected." lightbox="media/how-to-manage-sensors-from-the-on-premises-management-console/automatic-updates.png":::
This procedure is relevant only if you're updating sensors from software version
1. Select the site where you want to update your sensor, and then browse to the sensor you want to update.
-1. Expand the row for your sensor, select the options **...** menu on the right of the row, and then select **Prepare to update to 22.x**.
+1. Expand the row for your sensor, select the options **...** menu on the right of the row, and then select **Prepare to update to 22.x**. For example:
:::image type="content" source="media/how-to-manage-sensors-on-the-cloud/prepare-to-update.png" alt-text="Screenshot of the Prepare to update option." lightbox="media/how-to-manage-sensors-on-the-cloud/prepare-to-update.png":::
For more information, see:
- [Manage sensors from the management console](how-to-manage-sensors-from-the-on-premises-management-console.md) - [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md) - [Manage the on-premises management console](how-to-manage-the-on-premises-management-console.md)-- [Troubleshoot the sensor and on-premises management console](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md)
+- [Troubleshoot the sensor and on-premises management console](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md)
defender-for-iot Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/workbooks.md
Title: Use Azure Monitor workbooks in Microsoft Defender for IoT description: Learn how to view and create Azure Monitor workbooks for Defender for IoT data. Previously updated : 06/02/2022 Last updated : 09/04/2022 # Use Azure Monitor workbooks in Microsoft Defender for IoT
To view out-of-the-box workbooks created by Microsoft, or other workbooks alread
1. In the Azure portal, go to **Defender for IoT** and select **Workbooks** on the left.
- :::image type="content" source="media/release-notes/workbooks.png" alt-text="Screenshot of the new Workbooks page." lightbox="media/release-notes/workbooks.png":::
+   :::image type="content" source="media/workbooks/workbooks.png" alt-text="Screenshot of the Workbooks page." lightbox="media/workbooks/workbooks.png":::
1. Modify your filtering options if needed, and select a workbook to open it.
Defender for IoT provides the following workbooks out-of-the-box:
- **Sensor health**. Displays data about your sensor health, such as the sensor console software versions installed on your sensors. - **Alerts**. Displays data about alerts occurring on your sensors, including alerts by sensor, alert types, recent alerts generated, and more. - **Devices**. Displays data about your device inventory, including devices by vendor, subtype, and new devices identified.-
+- **Vulnerabilities**. Displays data about the vulnerabilities detected in OT devices across your network. Select an item in the **Device vulnerabilities**, **Vulnerable devices**, or **Vulnerable components** tables to view related information in the tables on the right.
## Create custom workbooks
digital-twins Concepts Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-models.md
While designing models to reflect the entities in your environment, it can be us
[!INCLUDE [Azure Digital Twins: validate models info](../../includes/digital-twins-validate.md)]
-### Use modeling tools
+### Upload and delete models in bulk
-There are several sample projects available that you can use to simplify dealing with models and ontologies. They're located in this repository: [Tools for Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-tools).
+Here are two sample projects that can simplify dealing with multiple models at once:
+* [Model uploader](https://github.com/Azure/opendigitaltwins-tools/tree/main/ADTTools#uploadmodels): Once you're finished creating, extending, or selecting your models, you need to upload them to your Azure Digital Twins instance to make them available for use in your solution. If you have many models to upload, or if they have many interdependencies that would make ordering individual uploads complicated, you can use this model uploader sample to upload many models at once.
+* [Model deleter](https://github.com/Azure/opendigitaltwins-tools/tree/main/ADTTools#deletemodels): This sample can be used to delete all models in an Azure Digital Twins instance at once. It contains recursive logic to handle model dependencies through the deletion process.
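
If you'd rather script part of this yourself, one common preparation step is collecting a folder of individual DTDL model files into a single JSON array, which is the shape that bulk-upload tooling (and the Azure Digital Twins model APIs) accept. The following PowerShell sketch assumes a local `.\models` folder of `*.json` DTDL files; the folder and output file names are placeholders.

```powershell
# Sketch: combine individual DTDL model files into one JSON array for bulk upload.
# The folder and output file names are placeholders.
$models = Get-ChildItem -Path .\models -Filter *.json |
    ForEach-Object { Get-Content -Path $_.FullName -Raw | ConvertFrom-Json }

# Wrap in @() so a single model still serializes as an array, and raise -Depth
# because DTDL interfaces nest more deeply than ConvertTo-Json's default of 2.
ConvertTo-Json -InputObject @($models) -Depth 32 | Set-Content -Path .\all-models.json
```
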
-Here are some of the tools included in the sample repository:
+### Visualize models
-| Link to tool | Description |
-| | |
-| [Model uploader](https://github.com/Azure/opendigitaltwins-tools/tree/master/ADTTools#uploadmodels) | Once you're finished creating, extending, or selecting your models, you need to upload them to your Azure Digital Twins instance to make them available for use in your solution. However, if you have many models to uploadΓÇöor if they have many interdependencies that would make ordering individual uploads complicatedΓÇöyou can use this model uploader sample to upload many models at once. |
-| [Model visualizer](https://github.com/Azure/opendigitaltwins-tools/tree/master/AdtModelVisualizer) | Once you have uploaded models into your Azure Digital Twins instance, you can view the models in your Azure Digital Twins instance, including any inheritance and model relationships, using the model visualizer sample. This sample is currently in a draft state. We encourage the digital twins development community to extend and contribute to the sample. |
+Once you have uploaded models into your Azure Digital Twins instance, you can use [Azure Digital Twins Explorer](http://explorer.digitaltwins.azure.net/) to view them. The explorer contains a list of all models in the instance, as well as a **model graph** that illustrates how they relate to each other, including any inheritance and model relationships.
+
+For more information about the model experience in Azure Digital Twins Explorer, see [Explore models and the Model Graph](how-to-use-azure-digital-twins-explorer.md#explore-models-and-the-model-graph).
## Next steps
digital-twins Concepts Ontologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-ontologies.md
No matter which strategy you choose for integrating an ontology into Azure Digit
Reading this series of articles will guide you in how to use your models in your Azure Digital Twins instance. >[!TIP]
-> You can visualize the models in your ontology using the [Azure Digital Twins Explorer](concepts-azure-digital-twins-explorer.md) or [Azure Digital Twins Model Visualizer](https://github.com/Azure/opendigitaltwins-tools/tree/master/AdtModelVisualizer).
+> You can visualize the models in your ontology using the [model graph](how-to-use-azure-digital-twins-explorer.md#explore-models-and-the-model-graph) in Azure Digital Twins Explorer.
## Next steps
dms Faq Mysql Single To Flex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/faq-mysql-single-to-flex.md
Title: FAQ about using Azure Database Migration Service for Azure Database MySQL
description: Frequently asked questions about using Azure Database Migration Service to perform database migrations from Azure Database MySQL Single Server to Flexible Server. -+ -+ Previously updated : 09/08/2022 Last updated : 09/17/2022 # Frequently Asked Questions (FAQs)
dms Known Issues Azure Sql Db Managed Instance Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-sql-db-managed-instance-online.md
Last updated 02/20/2020
Known issues and limitations that are associated with online migrations from SQL Server to Azure SQL Managed Instance are described below. > [!IMPORTANT]
-> With online migrations of SQL Server to Azure SQL Database, migration of SQL_variant data types is not supported.
+> With online migrations of SQL Server to Azure SQL Managed Instance, migration of SQL_variant data types is not supported.
## Backup requirements
dms Tutorial Mysql Azure Single To Flex Offline Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mysql-azure-single-to-flex-offline-portal.md
Title: "Tutorial: Migrate Azure Database for MySQL - Single Server to Flexible Server offline using DMS via the Azure portal" description: "Learn to perform an offline migration from Azure Database for MySQL - Single Server to Flexible Server by using Azure Database Migration Service."--+++ - Last updated 09/17/2022
dms Tutorial Mysql Azure Single To Flex Online Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mysql-azure-single-to-flex-online-portal.md
Title: "Tutorial: Migrate Azure Database for MySQL - Single Server to Flexible Server online using DMS via the Azure portal" description: "Learn to perform an online migration from Azure Database for MySQL - Single Server to Flexible Server by using Azure Database Migration Service."--+++ - Previously updated : 09/16/2022 Last updated : 09/17/2022
dns Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/cli-samples.md
Title: Azure CLI samples for DNS - Azure DNS description: With this sample, use Azure CLI to create DNS zones and records in Azure DNS. -+ Previously updated : 09/20/2019- Last updated : 09/27/2022+
dns Delegate Subdomain Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/delegate-subdomain-ps.md
Title: Delegate a subdomain - Azure PowerShell - Azure DNS description: With this learning path, get started delegating an Azure DNS subdomain using Azure PowerShell. -+ Previously updated : 05/03/2021- Last updated : 09/27/2022+
dns Delegate Subdomain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/delegate-subdomain.md
Title: Delegate a subdomain - Azure DNS description: With this learning path, get started delegating an Azure DNS subdomain. -+ Previously updated : 05/03/2021- Last updated : 09/27/2022+ # Delegate an Azure DNS subdomain
dns Dns Alerts Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-alerts-metrics.md
Azure DNS provides the following metrics to Azure Monitor for your DNS zones:
For more information, see [metrics definition](../azure-monitor/essentials/metrics-supported.md#microsoftnetworkdnszones).
->[!NOTE]
+> [!NOTE]
> At this time, these metrics are only available for Public DNS zones hosted in Azure DNS. If you have Private Zones hosted in Azure DNS, these metrics won't provide data for those zones. In addition, the metrics and alerting feature is only supported in Azure Public cloud. Support for sovereign clouds will follow at a later time. The most granular element that you can see metrics for is a DNS zone. You currently can't see metrics for individual resource records within a zone.
To view this metric, select **Metrics** explorer experience from the **Monitor**
## Alerts in Azure DNS
-Azure Monitor has alerting that you can configure for each available metric value. See [Azure Monitor alerts](../azure-monitor/alerts/alerts-metric.md) for more information.
+Azure Monitor has alerting that you can configure for each available metric value. For more information, see [Azure Monitor alerts](../azure-monitor/alerts/alerts-metric.md).
1. To configure alerting for Azure DNS zones, select **Alerts** from *Monitor* page in the Azure portal. Then select **+ New alert rule**.
dns Dns Alias Appservice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-alias-appservice.md
Title: Host load-balanced Azure web apps at the zone apex description: Use an Azure DNS alias record to host load-balanced web apps at the zone apex -+ Previously updated : 04/27/2021- Last updated : 09/27/2022+ # Host load-balanced Azure web apps at the zone apex
dns Dns Alias https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-alias.md
Title: Alias records overview - Azure DNS description: In this article, learn about support for alias records in Microsoft Azure DNS. -+ Previously updated : 04/23/2021- Last updated : 09/27/2022+ # Azure DNS alias records overview
An alias record set is supported for the following record types in an Azure DNS
- CNAME > [!NOTE]
-> If you intend to use an alias record for the A or AAAA record types to point to an [Azure Traffic Manager profile](../traffic-manager/quickstart-create-traffic-manager-profile.md) you must make sure that the Traffic Manager profile has only [external endpoints](../traffic-manager/traffic-manager-endpoint-types.md#external-endpoints). You must provide the IPv4 or IPv6 address for external endpoints in Traffic Manager. You can't use fully-qualified domain names (FQDNs) in endpoints. Ideally, use static IP addresses.
+> If you intend to use an alias record for the A or AAAA record types to point to an [Azure Traffic Manager profile](../traffic-manager/quickstart-create-traffic-manager-profile.md) you must make sure that the Traffic Manager profile has only [external endpoints](../traffic-manager/traffic-manager-endpoint-types.md#external-endpoints). You must provide the IPv4 or IPv6 address for external endpoints in Traffic Manager. You can't use fully qualified domain names (FQDNs) in endpoints. Ideally, use static IP addresses.
## Capabilities
dns Dns Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-custom-domain.md
Title: Integrate Azure DNS with your Azure resources - Azure DNS description: In this article, learn how to use Azure DNS along to provide DNS for your Azure resources. -+ Previously updated : 12/08/2021- Last updated : 09/27/2022+ # Use Azure DNS to provide custom domain settings for an Azure service
dns Dns Delegate Domain Azure Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-delegate-domain-azure-dns.md
Title: 'Tutorial: Host your domain in Azure DNS' description: In this tutorial, you learn how to configure Azure DNS to host your DNS zones using Azure portal. -+ Previously updated : 06/10/2022- Last updated : 09/27/2022+ #Customer intent: As an experienced network administrator, I want to configure Azure DNS, so I can host DNS zones.
dns Dns Domain Delegation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-domain-delegation.md
Title: Azure DNS delegation overview description: Understand how to change domain delegation and use Azure DNS name servers to provide domain hosting. -+ Previously updated : 04/19/2021- Last updated : 09/27/2022+
dns Dns For Azure Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-for-azure-services.md
Title: Use Azure DNS with other Azure services
description: In this learning path, get started on how to use Azure DNS to resolve names for other Azure services documentationcenter: na-+ tags: azure dns
na Previously updated : 05/03/2021- Last updated : 09/27/2022+ # How Azure DNS works with other Azure services
dns Dns Get Started Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-get-started-bicep.md
Title: 'Quickstart: Create an Azure DNS zone and record - Bicep'
description: Learn how to create a DNS zone and record in Azure DNS. This is a step-by-step quickstart to create and manage your first DNS zone and record using Bicep. -- Previously updated : 03/21/2022++ Last updated : 09/27/2022
dns Dns Get Started Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-get-started-template.md
Title: 'Quickstart: Create an Azure DNS zone and record - Azure Resource Manager template (ARM template)'
-description: Learn how to create a DNS zone and record in Azure DNS. This is a step-by-step quickstart to create and manage your first DNS zone and record using Azure Resource Manager template (ARM template).
+description: Learn how to create a DNS zone and record in Azure DNS. This article is a step-by-step quickstart to create and manage your first DNS zone and record using Azure Resource Manager template (ARM template).
Previously updated : 6/2/2021 Last updated : 09/27/2022
The host name `www.2lwynbseszpam.azurequickstart.org` resolves to `1.2.3.4` and
## Clean up resources
-When you no longer need the resources that you created with the DNS zone, delete the resource group. This removes the DNS zone and all the related resources.
+When you no longer need the resources that you created with the DNS zone, delete the resource group. This action removes the DNS zone and all the related resources.
To delete the resource group, call the `Remove-AzResourceGroup` cmdlet:
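
A minimal call looks like the following; the resource group name is a placeholder for whatever group you created in the quickstart.

```powershell
# Deletes the resource group and everything in it, including the DNS zone.
# "exampleRG" is a placeholder; use the resource group you created earlier.
Remove-AzResourceGroup -Name "exampleRG"
```
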
dns Dns Getstarted Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-getstarted-cli.md
Title: 'Quickstart: Create an Azure DNS zone and record - Azure CLI'
description: Quickstart - Learn how to create a DNS zone and record in Azure DNS. This is a step-by-step guide to create and manage your first DNS zone and record using the Azure CLI. -+ Previously updated : 10/20/2020- Last updated : 09/27/2022+ #Customer intent: As an administrator or developer, I want to learn how to configure Azure DNS using the Azure CLI so I can use Azure DNS for my name resolution.
az network dns zone create -g MyResourceGroup -n contoso.xyz
To create a DNS record, use the `az network dns record-set [record type] add-record` command. For help on A records, see `azure network dns record-set A add-record -h`.
-The following example creates a record with the relative name "www" in the DNS Zone "contoso.xyz" in the resource group "MyResourceGroup". The fully-qualified name of the record set is "www.contoso.xyz". The record type is "A", with IP address "10.10.10.10", and a default TTL of 3600 seconds (1 hour).
+The following example creates a record with the relative name "www" in the DNS Zone "contoso.xyz" in the resource group "MyResourceGroup". The fully qualified name of the record set is "www.contoso.xyz". The record type is "A", with IP address "10.10.10.10", and a default TTL of 3600 seconds (1 hour).
```azurecli az network dns record-set a add-record -g MyResourceGroup -z contoso.xyz -n www -a 10.10.10.10
dns Dns Getstarted Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-getstarted-portal.md
Title: 'Quickstart: Create a DNS zone and record - Azure portal'
description: Use this step-by-step quickstart guide to learn how to create an Azure DNS zone and record using the Azure portal. -- Previously updated : 04/23/2021++ Last updated : 09/27/2022
dns Dns Getstarted Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-getstarted-powershell.md
Title: 'Quickstart: Create an Azure DNS zone and record - Azure PowerShell'
description: Learn how to create a DNS zone and record in Azure DNS. This is a step-by-step quickstart to create and manage your first DNS zone and record using Azure PowerShell. -- Previously updated : 07/21/2022++ Last updated : 09/27/2022
dns Dns Import Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-import-export.md
Title: Import and export a domain zone file - Azure CLI
description: Learn how to import and export a DNS (Domain Name System) zone file to Azure DNS by using Azure CLI -+ Previously updated : 04/29/2021- Last updated : 09/27/2022+
dns Dns Operations Dnszones Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-operations-dnszones-cli.md
Title: Manage DNS zones in Azure DNS - Azure CLI | Microsoft Docs
description: You can manage DNS zones using Azure CLI. This article shows how to update, delete, and create DNS zones on Azure DNS. documentationcenter: na-+ ms.devlang: azurecli na Previously updated : 04/28/2021- Last updated : 09/27/2022+
dns Dns Operations Dnszones Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-operations-dnszones-portal.md
Title: Manage DNS zones in Azure DNS - Azure portal | Microsoft Docs
description: You can manage DNS zones using the Azure portal. This article describes how to update, delete, and create DNS zones on Azure DNS documentationcenter: na-+ na Previously updated : 04/28/2021- Last updated : 09/27/2022+ # How to manage DNS Zones in the Azure portal
dns Dns Operations Dnszones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-operations-dnszones.md
Title: Manage DNS zones in Azure DNS - PowerShell | Microsoft Docs
description: You can manage DNS zones using Azure PowerShell. This article describes how to update, delete, and create DNS zones on Azure DNS documentationcenter: na-+ na Previously updated : 04/27/2021- Last updated : 09/27/2022+
$zone.Tags.Add("status","approved")
Set-AzDnsZone -Zone $zone ```
-When using `Set-AzDnsZone` with a $zone object, [Etag checks](dns-zones-records.md#etags) are used to ensure concurrent changes aren't overwritten. You can use the optional `-Overwrite` switch to suppress these checks.
+When you use `Set-AzDnsZone` with a $zone object, [Etag checks](dns-zones-records.md#etags) are used to ensure concurrent changes aren't overwritten. You can use the optional `-Overwrite` switch to suppress these checks.
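
As a minimal sketch of the `-Overwrite` behavior described above (the zone and resource group names are placeholders):

```powershell
# Fetch the zone, change a tag, and push the update without Etag concurrency checks.
# "contoso.com" and "MyResourceGroup" are placeholder names.
$zone = Get-AzDnsZone -Name "contoso.com" -ResourceGroupName "MyResourceGroup"
if ($null -eq $zone.Tags) { $zone.Tags = @{} }   # Tags can be empty on a new zone
$zone.Tags["owner"] = "dns-team"
Set-AzDnsZone -Zone $zone -Overwrite             # skips the Etag check
```
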
## Delete a DNS Zone
dns Dns Operations Recordsets Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-operations-recordsets-cli.md
Title: Manage DNS records in Azure DNS using the Azure CLI description: Managing DNS record sets and records on Azure DNS when hosting your domain on Azure DNS.-+ ms.assetid: 5356a3a5-8dec-44ac-9709-0c2b707f6cb5 ms.devlang: azurecli Previously updated : 04/28/2021- Last updated : 09/27/2022+ # Manage DNS records and recordsets in Azure DNS using the Azure CLI
To remove a DNS record from an existing record set, use `az network dns record-s
This command deletes a DNS record from a record set. If the last record in a record set is deleted, the record set itself is also deleted. To keep the empty record set instead, use the `--keep-empty-record-set` option.
-When using the `az network dns record-set <record-type> add-record` command, you need to specify the record getting deleted and the zone to delete from. These parameters are described in [Create a DNS record](#create-a-dns-record) and [Create records of other types](#create-records-of-other-types) above.
+When you use the `az network dns record-set <record-type> remove-record` command, you need to specify the record getting deleted and the zone to delete from. These parameters are described in [Create a DNS record](#create-a-dns-record) and [Create records of other types](#create-records-of-other-types) above.
The following example deletes the A record with value '1.2.3.4' from the record set named *www* in the zone *contoso.com*, in the resource group *MyResourceGroup*.
dns Dns Operations Recordsets Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-operations-recordsets-portal.md
Title: Manage DNS record sets and records with Azure DNS description: Azure DNS provides the capability to manage DNS record sets and records when hosting your domain. -+ Previously updated : 04/28/2021- Last updated : 09/27/2022+ # Manage DNS records and record sets by using the Azure portal
dns Dns Operations Recordsets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-operations-recordsets.md
Title: Manage DNS records in Azure DNS using Azure PowerShell | Microsoft Docs
description: Managing DNS record sets and records on Azure DNS when hosting your domain on Azure DNS. All PowerShell commands for operations on record sets and records. documentationcenter: na-+ na Previously updated : 04/28/2021- Last updated : 09/27/2022+ # Manage DNS records and recordsets in Azure DNS using Azure PowerShell
The steps for modifying an existing record set are similar to the steps you take
* Changing the record set metadata and time to live (TTL) 3. Commit your changes by using the `Set-AzDnsRecordSet` cmdlet. This *replaces* the existing record set in Azure DNS with the record set specified.
-When using `Set-AzDnsRecordSet`, [Etag checks](dns-zones-records.md#etags) are used to ensure concurrent changes aren't overwritten. You can use the optional `-Overwrite` switch to suppress these checks.
+When you use the `Set-AzDnsRecordSet` command, [Etag checks](dns-zones-records.md#etags) are used to ensure concurrent changes aren't overwritten. You can use the optional `-Overwrite` switch to suppress these checks.
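
A compact sketch of that get-modify-set sequence, using placeholder names:

```powershell
# Get the record set, add another A record to it, and commit the change.
# "www", "contoso.com", and "MyResourceGroup" are placeholder names.
$rs = Get-AzDnsRecordSet -Name "www" -RecordType A -ZoneName "contoso.com" -ResourceGroupName "MyResourceGroup"
Add-AzDnsRecordConfig -RecordSet $rs -Ipv4Address "10.10.10.11"
Set-AzDnsRecordSet -RecordSet $rs               # honors Etag checks
# Set-AzDnsRecordSet -RecordSet $rs -Overwrite  # suppresses Etag checks
```
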
### To update a record in an existing record set
dns Dns Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-overview.md
Title: What is Azure DNS? description: Overview of DNS hosting service on Microsoft Azure. Host your domain on Microsoft Azure.-+ Previously updated : 4/22/2021- Last updated : 09/27/2022+ #Customer intent: As an administrator, I want to evaluate Azure DNS so I can determine if I want to use it instead of my current DNS service.
dns Dns Private Resolver Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-portal.md
description: In this quickstart, you create and test a private DNS resolver in A
Previously updated : 06/02/2022 Last updated : 09/27/2022
Next, add a virtual network to the resource group that you created, and configur
5. Select the **Outbound Endpoints** tab, select **Add an endpoint**, and then enter a name next to **Endpoint name** (ex: myoutboundendpoint). 6. Next to **Subnet**, select the outbound endpoint subnet you created (ex: snet-outbound, 10.1.1.0/28) and then select **Save**. 7. Select the **Ruleset** tab, select **Add a ruleset**, and enter the following:
- - Ruleset name: Enter a name for your ruleset (ex: myruleset).
+ - Ruleset name: Enter a name for your ruleset (ex: **myruleset**).
- Endpoints: Select the outbound endpoint that you created (ex: myoutboundendpoint). 8. Under **Rules**, select **Add** and enter your conditional DNS forwarding rules. For example: - Rule name: Enter a rule name (ex: contosocom).
dns Dns Private Resolver Get Started Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-powershell.md
description: In this quickstart, you learn how to create and manage your first p
Previously updated : 09/20/2022 Last updated : 09/27/2022
dns Dns Private Resolver Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-overview.md
Previously updated : 09/22/2022 Last updated : 09/27/2022 #Customer intent: As an administrator, I want to evaluate Azure DNS Private Resolver so I can determine if I want to use it instead of my current DNS resolver service.
dns Dns Protect Private Zones Recordsets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-protect-private-zones-recordsets.md
Previously updated : 05/07/2021 Last updated : 09/27/2022 ms.devlang: azurecli
dns Dns Protect Zones Recordsets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-protect-zones-recordsets.md
Previously updated : 05/05/2021 Last updated : 09/27/2022 ms.devlang: azurecli
dns Dns Reverse Dns For Azure Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-reverse-dns-for-azure-services.md
Title: Reverse DNS for Azure services - Azure DNS
description: With this learning path, get started configuring reverse DNS lookups for services hosted in Azure. documentationcenter: na-+ na Previously updated : 04/29/2021- Last updated : 09/27/2022+
dns Dns Reverse Dns Hosting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-reverse-dns-hosting.md
Title: Host reverse DNS lookup zones in Azure DNS description: Learn how to use Azure DNS to host the reverse DNS lookup zones for your IP ranges-+ Previously updated : 04/29/2021- Last updated : 09/27/2022+ ms.devlang: azurecli
dns Dns Reverse Dns Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-reverse-dns-overview.md
Title: Overview of reverse DNS in Azure - Azure DNS
description: In this learning path, get started learning how reverse DNS works and how it can be used in Azure documentationcenter: na-+ na Previously updated : 04/26/2021- Last updated : 09/27/2022+ # Overview of reverse DNS and support in Azure
dns Dns Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-sdk.md
description: In this learning path, get started creating DNS zones and record sets in Azure DNS by using the .NET SDK. documentationcenter: na-+ ms.assetid: eed99b87-f4d4-4fbf-a926-263f7e30b884
ms.devlang: csharp
na Previously updated : 05/05/2021- Last updated : 09/27/2022+
dns Dns Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-troubleshoot.md
Title: Troubleshooting guide - Azure DNS description: In this learning path, get started troubleshooting common issues with Azure DNS -+ Previously updated : 11/10/2021- Last updated : 09/27/2022+ # Azure DNS troubleshooting guide
dns Dns Web Sites Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-web-sites-custom-domain.md
Title: 'Tutorial: Create custom Azure DNS records for a web app' description: In this tutorial, you learn how to create custom domain DNS records for web apps using Azure DNS. -+ Previously updated : 06/10/2022- Last updated : 09/27/2022+ #Customer intent: As an experienced network administrator, I want to create DNS records in Azure DNS, so I can host a web app in a custom domain.
dns Dns Zones Records https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-zones-records.md
Title: DNS Zones and Records overview - Azure DNS description: Overview of support for hosting DNS zones and records in Microsoft Azure DNS.-+ ms.assetid: be4580d7-aa1b-4b6b-89a3-0991c0cda897 Previously updated : 04/20/2021- Last updated : 09/27/2022+ # Overview of DNS zones and records
TXT records are used to map domain names to arbitrary text strings. They're used
The DNS standards permit a single TXT record to contain multiple strings, each of which may be up to 255 characters in length. Where multiple strings are used, they are concatenated by clients and treated as a single string.
-When calling the Azure DNS REST API, you need to specify each TXT string separately. When using the Azure portal, PowerShell or CLI interfaces you should specify a single string per record, which is automatically divided into 255-character segments if necessary.
+When calling the Azure DNS REST API, you need to specify each TXT string separately. When you use the Azure portal, PowerShell, or CLI interfaces, you should specify a single string per record. This string is automatically divided into 255-character segments if necessary.
The multiple strings in a DNS record shouldn't be confused with the multiple TXT records in a TXT record set. A TXT record set can contain multiple records, *each of which* can contain multiple strings. Azure DNS supports a total string length of up to 1024 characters in each TXT record set (across all records combined).
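As a hedged Azure CLI sketch of this behavior, the following command adds a TXT record containing two strings to an existing record set; the resource group, zone, and record set names are hypothetical placeholders.

```azurecli
# Add a TXT record with two strings to an existing record set.
# Clients concatenate the strings, and strings longer than 255 characters
# are split into segments automatically.
az network dns record-set txt add-record \
  --resource-group myResourceGroup \
  --zone-name contoso.com \
  --record-set-name demo \
  --value "first string" "second string"
```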
dns Private Dns Autoregistration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-autoregistration.md
Title: What is auto registration feature in Azure DNS private zones? description: Overview of auto registration feature in Azure DNS private zones. -+ Previously updated : 04/26/2021- Last updated : 09/27/2022+ # What is the auto registration feature in Azure DNS private zones?
dns Private Dns Getstarted Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-getstarted-cli.md
Title: Quickstart - Create an Azure private DNS zone using the Azure CLI description: In this quickstart, you create and test a private DNS zone and record in Azure DNS. This is a step-by-step guide to create and manage your first private DNS zone and record using Azure CLI. -+ Previously updated : 05/23/2022- Last updated : 09/27/2022+ #Customer intent: As an experienced network administrator, I want to create an Azure private DNS zone, so I can resolve host names on my private virtual networks.
dns Private Dns Getstarted Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-getstarted-portal.md
description: In this quickstart, you create and test a private DNS zone and reco
Previously updated : 05/18/2022 Last updated : 09/27/2022
dns Private Dns Getstarted Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-getstarted-powershell.md
Title: Quickstart - Create an Azure private DNS zone using Azure PowerShell description: In this quickstart, you learn how to create and manage your first private DNS zone and record using Azure PowerShell. -- Previously updated : 05/23/2022++ Last updated : 09/27/2022
dns Private Dns Import Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-import-export.md
description: Learn how to import and export a DNS zone file to Azure private DN
Previously updated : 03/16/2021 Last updated : 09/27/2022
dns Private Dns Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-migration-guide.md
Title: Migrating legacy Azure DNS private zones to the new resource model
-description: This guide provides step by step instruction on how to migrate legacy private DNS zones to the latest resource model
+description: This guide provides step-by-step instructions on how to migrate legacy private DNS zones to the latest resource model

Previously updated : 09/08/2022 Last updated : 09/27/2022
dns Private Dns Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-overview.md
Previously updated : 09/20/2022 Last updated : 09/27/2022 #Customer intent: As an administrator, I want to evaluate Azure Private DNS so I can determine if I want to use it instead of my current DNS service.
dns Private Dns Privatednszone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-privatednszone.md
Previously updated : 09/08/2022 Last updated : 09/27/2022
dns Private Dns Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-scenarios.md
Title: Scenarios for Private Zones - Azure DNS description: In this article, learn about common scenarios for using Azure DNS Private Zones. -+ Previously updated : 04/27/2021- Last updated : 09/27/2022+ # Azure DNS private zones scenarios
dns Private Dns Virtual Network Links https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-virtual-network-links.md
Title: What is a virtual network link subresource of Azure DNS private zones description: Overview of virtual network link sub resource an Azure DNS private zone -+ Previously updated : 04/26/2021- Last updated : 09/27/2022+ # What is a virtual network link?
dns Dns Cli Create Dns Zone Record https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/scripts/dns-cli-create-dns-zone-record.md
Title: Create a DNS zone and record for a domain name - Azure CLI - Azure DNS description: This Azure CLI script example shows how to create a DNS zone and record for a domain name -+ Previously updated : 09/20/2019- Last updated : 09/27/2022+
dns Find Unhealthy Dns Records https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/scripts/find-unhealthy-dns-records.md
Title: Find unhealthy DNS records in Azure DNS - PowerShell script sample description: In this article, learn how to use an Azure PowerShell script to find unhealthy DNS records.-- Previously updated : 11/10/2021++ Last updated : 09/27/2022
dns Tutorial Alias Pip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/tutorial-alias-pip.md
Title: 'Tutorial: Create an Azure DNS alias record to refer to an Azure public IP address' description: In this tutorial, you learn how to configure an Azure DNS alias record to reference an Azure public IP address. -+ Previously updated : 06/20/2022- Last updated : 09/27/2022+ #Customer intent: As an experienced network administrator, I want to configure Azure an DNS alias record to refer to an Azure public IP address.
dns Tutorial Alias Rr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/tutorial-alias-rr.md
Title: 'Tutorial: Create an alias record to refer to a resource record in a zone' description: In this tutorial, you learn how to configure an alias record to reference a resource record within the zone.--++ Previously updated : 06/10/2022 Last updated : 09/27/2022 #Customer intent: As an experienced network administrator, I want to configure Azure an DNS alias record to refer to a resource record within the zone.
dns Tutorial Alias Tm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/tutorial-alias-tm.md
Title: 'Tutorial: Create an alias record to support apex domain name with Traffi
description: In this tutorial, you learn how to create and configure an Azure DNS alias record to support using your apex domain name with Traffic Manager. -+ Previously updated : 06/20/2022- Last updated : 09/27/2022+ #Customer intent: As an experienced network administrator, I want to configure Azure DNS alias records to use my apex domain name with Traffic Manager.
dns Tutorial Public Dns Zones Child https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/tutorial-public-dns-zones-child.md
ms.assetid: be4580d7-aa1b-4b6b-89a3-0991c0cda897
Previously updated : 06/10/2022 Last updated : 09/27/2022
hdinsight Hdinsight Hadoop Add Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-add-storage.md
description: Learn how to add additional Azure Storage accounts to an existing H
Previously updated : 04/05/2022 Last updated : 09/29/2022 # Add additional storage accounts to HDInsight
After removing these keys and saving the configuration, you need to restart Oozi
### Storage firewall
-If you choose to secure your storage account with the **Firewalls and virtual networks** restrictions on **Selected networks**, be sure to enable the exception **Allow trusted Microsoft services...** so that HDInsight can access your storage account`.`
+If you choose to secure your storage account with the **Firewalls and virtual networks** restrictions on **Selected networks**, be sure to enable the exception **Allow trusted Microsoft services** so that HDInsight can access your storage account.
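As a hedged illustration (the storage account and resource group names are hypothetical placeholders), the trusted-services exception can be enabled with the Azure CLI while keeping the default action set to deny:

```azurecli
# Allow trusted Microsoft services (such as HDInsight) through the storage firewall
# while continuing to deny traffic from non-selected networks.
az storage account update \
  --name mystorageaccount \
  --resource-group myResourceGroup \
  --bypass AzureServices \
  --default-action Deny
```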
### Unable to access storage after changing key
iot-central Concepts Device Implementation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-device-implementation.md
The [Azure IoT device SDKs](#device-sdks) include support for the IoT Plug and P
### Device model
-A device model is defined by using the [DTDL](https://github.com/Azure/opendigitaltwins-dtdl) modeling language. This language lets you define:
+A device model is defined by using the [DTDL V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) modeling language. This language lets you define:
- The telemetry the device sends. The definition includes the name and data type of the telemetry. For example, a device sends temperature telemetry as a double. - The properties the device reports to IoT Central. A property definition includes its name and data type. For example, a device reports the state of a valve as a Boolean.
iot-central Concepts Faq Apaas Paas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-faq-apaas-paas.md
After the migration, devices aren't automatically deleted from the IoT Central a
So that you can seamlessly migrate devices from your IoT Central applications to PaaS solution, follow these guidelines: -- The device must be an IoT Plug and Play device that uses a [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) model. IoT Central requires all devices to have a DTDL model. This simplifies the interoperability between an IoT PaaS solution and IoT Central.
+- The device must be an IoT Plug and Play device that uses a [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) model. IoT Central requires all devices to have a DTDL model. This simplifies the interoperability between an IoT PaaS solution and IoT Central.
- The device must follow the [IoT Central data formats for telemetry, property, and commands](concepts-telemetry-properties-commands.md).
iot-central Concepts Telemetry Properties Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-telemetry-properties-commands.md
Each example shows a snippet from the device model that defines the type and exa
> [!NOTE] > IoT Central accepts any valid JSON but it can only be used for visualizations if it matches a definition in the device model. You can export data that doesn't match a definition, see [Export IoT data to cloud destinations using Blob Storage](howto-export-to-blob-storage.md).
-The JSON file that defines the device model uses the [Digital Twin Definition Language (DTDL) v2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md).
+The JSON file that defines the device model uses the [Digital Twin Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md).
For sample device code that shows some of these payloads in use, see the [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md) tutorial.
iot-central Howto Create Custom Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-custom-rules.md
You can configure an IoT Central application to continuously export telemetry to
Your Event Hubs namespace looks like the following screenshot:
-```:::image type="content" source="media/howto-create-custom-rules/event-hubs-namespace.png" alt-text="Screenshot of Event Hubs namespace." border="false":::
## Define the function
This solution uses an Azure Functions app to send an email notification when the
The portal creates a default function called **HttpTrigger1**:
-```:::image type="content" source="media/howto-create-custom-rules/default-function.png" alt-text="Screenshot of Edit HTTP trigger function.":::
1. Replace the C# code with the following code:
To test the function in the portal, first choose **Logs** at the bottom of the c
The function log messages appear in the **Logs** panel:
-```:::image type="content" source="media/howto-create-custom-rules/function-app-logs.png" alt-text="Function log output":::
After a few minutes, the **To** email address receives an email with the following content:
iot-central Howto Manage Device Templates With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-device-templates-with-rest-api.md
To learn how to manage device templates by using the IoT Central UI, see [How to
A device template contains a device model, cloud property definitions, and view definitions. The REST API lets you manage the device model and cloud property definitions. Use the UI to create and manage views.
-The device model section of a device template specifies the capabilities of a device you want to connect to your application. Capabilities include telemetry, properties, and commands. The model is defined using [DTDL](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md).
+The device model section of a device template specifies the capabilities of a device you want to connect to your application. Capabilities include telemetry, properties, and commands. The model is defined using [DTDL V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md).
## Device templates REST API
iot-central Howto Use Location Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-use-location-data.md
The following screenshot shows a device template with examples of a device prope
:::image type="content" source="media/howto-use-location-data/location-device-template.png" alt-text="Screenshot showing location property definition in device template" lightbox="media/howto-use-location-data/location-device-template.png":::
-For reference, the [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) definitions for these capabilities look like the following snippet:
+For reference, the [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) definitions for these capabilities look like the following snippet:
```json {
iot-develop Concepts Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-architecture.md
The following diagram shows the key elements of an IoT Plug and Play solution:
## Model repository
-The [model repository](./concepts-model-repository.md) is a store for model and interface definitions. You define models and interfaces using the [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl).
+The [model repository](./concepts-model-repository.md) is a store for model and interface definitions. You define models and interfaces using the [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md).
The web UI lets you manage the models and interfaces.
iot-develop Concepts Convention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-convention.md
IoT Plug and Play devices should follow a set of conventions when they exchange
Devices can include [modules](../iot-hub/iot-hub-devguide-module-twins.md), or be implemented in an [IoT Edge module](../iot-edge/about-iot-edge.md) hosted by the IoT Edge runtime.
-You describe the telemetry, properties, and commands that an IoT Plug and Play device implements with a [Digital Twins Definition Language v2 (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl) _model_. There are two types of model referred to in this article:
+You describe the telemetry, properties, and commands that an IoT Plug and Play device implements with a [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) _model_. There are two types of model referred to in this article:
- **No component** - A model with no components. The model declares telemetry, properties, and commands as top-level properties in the contents section of the main interface. In the Azure IoT explorer tool, this model appears as a single _default component_. - **Multiple components** - A model composed of two or more interfaces. A main interface, which appears as the _default component_, with telemetry, properties, and commands. One or more interfaces declared as components with additional telemetry, properties, and commands.
On a device or module, multiple component interfaces use command names with the
Now that you've learned about IoT Plug and Play conventions, here are some additional resources: -- [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl)
+- [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md)
- [C device SDK](/azure/iot-hub/iot-c-sdk-ref/) - [IoT REST API](/rest/api/iothub/device) - [IoT Plug and Play modeling guide](concepts-modeling-guide.md)
iot-develop Concepts Developer Guide Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-developer-guide-device.md
This guide describes the basic steps required to create a device, module, or IoT
To build an IoT Plug and Play device, module, or IoT Edge module, follow these steps: 1. Ensure your device is using either the MQTT or MQTT over WebSockets protocol to connect to Azure IoT Hub.
-1. Create a [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl) model to describe your device. To learn more, see [Understand components in IoT Plug and Play models](concepts-modeling-guide.md).
+1. Create a [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) model to describe your device. To learn more, see [Understand components in IoT Plug and Play models](concepts-modeling-guide.md).
1. Update your device or module to announce the `model-id` as part of the device connection. 1. Implement telemetry, properties, and commands using the [IoT Plug and Play conventions](concepts-convention.md)
Once your device or module implementation is ready, use the [Azure IoT explorer]
Now that you've learned about IoT Plug and Play device development, here are some additional resources: -- [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl)
+- [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md)
- [C device SDK](/azure/iot-hub/iot-c-sdk-ref/) - [IoT REST API](/rest/api/iothub/device) - [Understand components in IoT Plug and Play models](concepts-modeling-guide.md)
iot-develop Concepts Developer Guide Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-developer-guide-service.md
The service SDKs let you access device information from a solution, such as a de
Now that you've learned about device modeling, here are some additional resources: -- [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl)
+- [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md)
- [C device SDK](/azure/iot-hub/iot-c-sdk-ref/) - [IoT REST API](/rest/api/iothub/device) - [IoT Plug and Play modeling guide](concepts-modeling-guide.md)
iot-develop Concepts Digital Twin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-digital-twin.md
# Understand IoT Plug and Play digital twins
-An IoT Plug and Play device implements a model described by the [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl) schema. A model describes the set of components, properties, commands, and telemetry messages that a particular device can have.
+An IoT Plug and Play device implements a model described by the [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) schema. A model describes the set of components, properties, commands, and telemetry messages that a particular device can have.
-IoT Plug and Play uses DTDL version 2. For more information about this version, see the [Digital Twins Definition Language (DTDL) - version 2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) specification on GitHub.
+IoT Plug and Play uses DTDL version 2. For more information about this version, see the [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) specification on GitHub.
> [!NOTE] > DTDL isn't exclusive to IoT Plug and Play. Other IoT services, such as [Azure Digital Twins](../digital-twins/overview.md), use it to represent entire environments such as buildings and energy networks.
iot-develop Concepts Model Parser https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-model-parser.md
# Understand the digital twins model parser
-The Digital Twins Definition Language (DTDL) is described in the [DTDL Specification](https://github.com/Azure/opendigitaltwins-dtdl). Users can use the _Digital Twins Model Parser_ NuGet package to validate and query a model defined in multiple files.
+The Digital Twins Definition Language (DTDL) is described in the [DTDL Specification V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md). Users can use the _Digital Twins Model Parser_ NuGet package to validate and query a model defined in multiple files.
## Install the DTDL model parser
iot-develop Concepts Model Repository https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-model-repository.md
# Device models repository
-The device models repository (DMR) enables device builders to manage and share IoT Plug and Play device models. The device models are JSON LD documents defined using the [Digital Twins Modeling Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md).
+The device models repository (DMR) enables device builders to manage and share IoT Plug and Play device models. The device models are JSON LD documents defined using the [Digital Twins Modeling Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md).
The DMR defines a pattern to store DTDL interfaces in a folder structure based on the device twin model identifier (DTMI). You can locate an interface in the DMR by converting the DTMI to a relative path. For example, the `dtmi:com:example:Thermostat;1` DTMI translates to `/dtmi/com/example/thermostat-1.json` and can be obtained from the public base URL `devicemodels.azure.com` at the URL [https://devicemodels.azure.com/dtmi/com/example/thermostat-1.json](https://devicemodels.azure.com/dtmi/com/example/thermostat-1.json).
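A minimal shell sketch of that conversion, using the thermostat DTMI from the example above (lowercase the DTMI, replace `:` with `/`, replace `;` with `-`, and append `.json`):

```bash
# Convert a DTMI into its device models repository path and fetch the interface.
dtmi="dtmi:com:example:Thermostat;1"
path="$(echo "$dtmi" | tr '[:upper:]' '[:lower:]' | tr ':' '/' | sed 's/;/-/').json"
curl "https://devicemodels.azure.com/${path}"
```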
iot-develop Concepts Modeling Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-modeling-guide.md
At the core of IoT Plug and Play, is a device _model_ that describes a device's
To learn more about how IoT Plug and Play uses device models, see [IoT Plug and Play device developer guide](concepts-developer-guide-device.md) and [IoT Plug and Play service developer guide](concepts-developer-guide-service.md).
-To define a model, you use the Digital Twins Definition Language (DTDL). DTDL uses a JSON variant called [JSON-LD](https://json-ld.org/). The following snippet shows the model for a thermostat device that:
+To define a model, you use the Digital Twins Definition Language (DTDL) V2. DTDL uses a JSON variant called [JSON-LD](https://json-ld.org/). The following snippet shows the model for a thermostat device that:
- Has a unique model ID: `dtmi:com:example:Thermostat;1`. - Sends temperature telemetry.
iot-develop Howto Convert To Pnp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/howto-convert-to-pnp.md
In summary, the sample implements the following capabilities:
## Design a model
-Every IoT Plug and Play device has a model that describes the features and capabilities of the device. The model uses the [Digital Twin Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) to describe the device capabilities.
+Every IoT Plug and Play device has a model that describes the features and capabilities of the device. The model uses the [Digital Twin Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) to describe the device capabilities.
For a simple model that maps the existing capabilities of your device, use the *Telemetry*, *Property*, and *Command* DTDL elements.
iot-develop Howto Manage Digital Twin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/howto-manage-digital-twin.md
At the time of writing, the digital twin API version is `2020-09-30`.
## Update a digital twin
-An IoT Plug and Play device implements a model described by [Digital Twins Definition Language v2 (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl). Solution developers can use the **Update Digital Twin API** to update the state of component and the properties of the digital twin.
+An IoT Plug and Play device implements a model described by [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl). Solution developers can use the **Update Digital Twin API** to update the state of components and the properties of the digital twin.
The IoT Plug and Play device used as an example in this article implements the [Temperature Controller model](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/TemperatureController.json) with [Thermostat](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/Thermostat.json) components.
iot-develop Overview Iot Plug And Play https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/overview-iot-plug-and-play.md
IoT Plug and Play enables solution builders to integrate IoT devices with their
You can group these elements in interfaces to reuse across models to make collaboration easier and to speed up development.
-To make IoT Plug and Play work with [Azure Digital Twins](../digital-twins/overview.md), you define models and interfaces using the [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl). IoT Plug and Play and the DTDL are open to the community, and Microsoft welcomes collaboration with customers, partners, and industry. Both are based on open W3C standards such as JSON-LD and RDF, which enables easier adoption across services and tooling.
+To make IoT Plug and Play work with [Azure Digital Twins](../digital-twins/overview.md), you define models and interfaces using the [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md). IoT Plug and Play and the DTDL are open to the community, and Microsoft welcomes collaboration with customers, partners, and industry. Both are based on open W3C standards such as JSON-LD and RDF, which enables easier adoption across services and tooling.
There's no extra cost for using IoT Plug and Play and DTDL. Standard rates for [Azure IoT Hub](../iot-hub/about-iot-hub.md) and other Azure services remain the same.
iot-develop Tutorial Migrate Device To Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/tutorial-migrate-device-to-module.md
This tutorial shows you how to connect a generic IoT Plug and Play [module](../iot-hub/iot-hub-devguide-module-twins.md).
-A device is an IoT Plug and Play device if it publishes its model ID when it connects to an IoT hub and implements the properties and methods described in the Digital Twins Definition Language (DTDL) model identified by the model ID. To learn more about how devices use a DTDL and model ID, see [IoT Plug and Play developer guide](./concepts-developer-guide-device.md). Modules use model IDs and DTDL models in the same way.
+A device is an IoT Plug and Play device if it publishes its model ID when it connects to an IoT hub and implements the properties and methods described in the Digital Twins Definition Language (DTDL) V2 model identified by the model ID. To learn more about how devices use a DTDL and model ID, see [IoT Plug and Play developer guide](./concepts-developer-guide-device.md). Modules use model IDs and DTDL models in the same way.
To demonstrate how to implement an IoT Plug and Play module, this tutorial shows you how to:
iot-edge How To Retrieve Iot Edge Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-retrieve-iot-edge-logs.md
The [Logger class in IoT Edge](https://github.com/Azure/iotedge/blob/master/edge
Use the **GetModuleLogs** direct method to retrieve the logs of an IoT Edge module. >[!TIP]
+>Use the `since` and `until` filter options to limit the range of logs retrieved. Calling this direct method without bounds retrieves all of the logs, which can be large, time consuming, or costly.
+>
>The IoT Edge troubleshooting page in the Azure portal provides a simplified experience for viewing module logs. For more information, see [Monitor and troubleshoot IoT Edge devices from the Azure portal](troubleshoot-in-portal.md). This method accepts a JSON payload with the following schema:
Use the **UploadModuleLogs** direct method to send the requested logs to a speci
::: moniker range=">=iotedge-2020-11" > [!NOTE]
+> Use the `since` and `until` filter options to limit the range of logs retrieved. Calling this direct method without bounds retrieves all of the logs, which can be large, time consuming, or costly.
+>
> If you wish to upload logs from a device behind a gateway device, you will need to have the [API proxy and blob storage modules](how-to-configure-api-proxy-module.md) configured on the top layer device. These modules route the logs from your lower layer device through your gateway device to your storage in the cloud. ::: moniker-end
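As a hedged sketch of bounding log retrieval with the `since` and `until` filters, the following Azure CLI call (which requires the `azure-iot` extension) invokes the **GetModuleLogs** direct method on the `$edgeAgent` module; the hub name, device ID, time bounds, and payload values are hypothetical placeholders assumed from the documented payload shape.

```azurecli
# Retrieve only the edgeAgent logs emitted between two hours ago and one hour ago,
# capped at 100 lines, instead of pulling the full log history.
az iot hub invoke-module-method \
  --hub-name my-iot-hub \
  --device-id my-edge-device \
  --module-id '$edgeAgent' \
  --method-name 'GetModuleLogs' \
  --method-payload '{
    "schemaVersion": "1.0",
    "items": [
      { "id": "edgeAgent", "filter": { "since": "2h", "until": "1h", "tail": 100 } }
    ],
    "encoding": "none",
    "contentType": "text"
  }'
```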
iot-hub-device-update Device Update Plug And Play https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-plug-and-play.md
IoT Hub device twin example:
"deviceProperties": { "manufacturer": "contoso", "model": "virtual-vacuum-v1",
- "interfaceId": "dtmi:azure:iot:deviceUpdate;1",
+ "interfaceId": "dtmi:azure:iot:deviceUpdateModel;1",
"aduVer": "DU;agent/0.8.0-rc1-public-preview", "doVer": "DU;lib/v0.6.0+20211001.174458.c8c4051,DU;agent/v0.6.0+20211001.174418.c8c4051" },
iot-hub Monitor Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/monitor-iot-hub.md
The **Overview** page in the Azure portal for each IoT hub includes charts that
:::image type="content" source="media/monitor-iot-hub/overview-portal.png" alt-text="Default metric charts on IoT hub Overview page.":::
-Be aware that the message count value can be delayed by 1 minute, and that, for reasons having to do with the IoT Hub service infrastructure, the value can sometimes bounce between higher and lower values on refresh. This counter should only be incorrect for values accrued over the last minute.
+The message count value might be delayed by up to 1 minute. Due to the IoT Hub service infrastructure, the value can sometimes bounce between higher and lower values on refresh. This counter should be incorrect only for values accrued over the last minute.
-The information presented on the Overview pane is useful, but represents only a small amount of the monitoring data that is available for an IoT hub. Some monitoring data is collected automatically and is available for analysis as soon as you create your IoT hub. You can enable additional types of data collection with some configuration.
+The information presented on the **Overview pane** is useful, but represents only a small amount of monitoring data that's available for an IoT hub. Some monitoring data is collected automatically and available for analysis as soon as you create your IoT hub. You can enable other types of data collection with some configuration.
## What is Azure Monitor?
-Azure IoT Hub creates monitoring data using [Azure Monitor](../azure-monitor/overview.md), which is a full stack monitoring service in Azure that provides a complete set of features to monitor your Azure resources in addition to resources in other clouds and on-premises.
+Azure IoT Hub creates monitoring data by using [Azure Monitor](../azure-monitor/overview.md), a full stack monitoring service. Azure Monitor can monitor your Azure resources and other cloud or on-premises resources.
Start with the article [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md), which describes the following concepts: - What is Azure Monitor?-- Costs associated with monitoring - Monitoring data collected in Azure - Configuring data collection-- Standard tools in Azure for analyzing and alerting on monitoring data
+- Metrics and logs
+- Standard tools in Azure for analysis and insights
+- Alerts fired based on monitoring data
-The following sections build on this article by describing the specific data gathered for Azure IoT Hub and providing examples for configuring data collection and analyzing this data with Azure tools.
+For more information on the metrics and logs created by Azure IoT Hub, see [Monitoring Azure IoT Hub data reference](monitor-iot-hub-reference.md).
-## Monitoring data
+> [!IMPORTANT]
+> The events emitted by the IoT Hub service using Azure Monitor resource logs aren't guaranteed to be reliable or ordered. Some events might be lost or delivered out of order. Resource logs aren't intended to be real-time, so it may take several minutes for events to be logged to your choice of destination.
-Azure IoT Hub collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data).
+The rest of this article builds on the **Monitoring Azure resources with Azure Monitor** article by describing the specific data gathered for Azure IoT Hub. You'll see examples for configuring your data collection and how to analyze this data with Azure tools.
-See [Monitoring Azure IoT Hub data reference](monitor-iot-hub-reference.md) for detailed information on the metrics and logs created by Azure IoT Hub.
+## Collection and routing
-> [!IMPORTANT]
-> The events emitted by the IoT Hub service using Azure Monitor resource logs are not guaranteed to be reliable or ordered. Some events might be lost or delivered out of order. Resource logs also aren't meant to be real-time, and it may take several minutes for events to be logged to your choice of destination.
+Platform metrics, the Activity log, and resource logs have unique collection, storage, and routing specifications.
-## Collection and routing
+* Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
-Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
+* Resource logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations.
-Resource logs are not collected and stored until you create a diagnostic setting and route them to one or more locations.
+* Metrics and logs can be routed to several locations including:
+ - The Azure Monitor Logs store via an associated Log Analytics workspace. There they can be analyzed using Log Analytics.
+ - Azure Storage for archiving and offline analysis
+ - An Event Hubs endpoint where they can be read by external applications, for example, third-party security information and event management (SIEM) tools.
-Metrics and logs can be routed to several locations including:
-- The Azure Monitor Logs store via an associated Log Analytics workspace. There they can be analyzed using Log Analytics.-- Azure Storage for archiving and offline analysis -- An Event Hubs endpoint where they can be read by external applications, for example, third-party SIEM tools.
+In the Azure portal from your IoT hub under **Monitoring**, you can select **Diagnostic settings** followed by **Add diagnostic setting** to create diagnostic settings scoped to the logs and platform metrics emitted by your IoT hub.
-In Azure portal, you can select **Diagnostic settings** under **Monitoring** on the left-pane of your IoT hub followed by **Add diagnostic setting** to create diagnostic settings scoped to the logs and platform metrics emitted by your IoT hub.
The following screenshot shows a diagnostic setting for routing the resource log type *Connection Operations* and all platform metrics to a Log Analytics workspace.
-See [Create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for Azure IoT Hub are listed under [Resource logs in the Monitoring Azure IoT Hub data reference](monitor-iot-hub-reference.md#resource-logs). Be aware that events are emitted only for errors in some categories.
+For more information on creating a diagnostic setting using the Azure portal, CLI, or PowerShell, see [Create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md). When you create a diagnostic setting, you specify which categories of logs to collect. The categories for Azure IoT Hub are listed under [Resource logs in the Monitoring Azure IoT Hub data reference](monitor-iot-hub-reference.md#resource-logs). Events are emitted only for errors in some categories.
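For example, here's a hedged Azure CLI sketch that routes the *Connections* log category and all platform metrics to a Log Analytics workspace; the subscription, resource group, hub, and workspace names are hypothetical placeholders.

```azurecli
# Create a diagnostic setting that sends Connections logs and all platform
# metrics from the IoT hub to a Log Analytics workspace.
az monitor diagnostic-settings create \
  --name iot-hub-diagnostics \
  --resource "/subscriptions/<sub-id>/resourceGroups/myResourceGroup/providers/Microsoft.Devices/IotHubs/my-iot-hub" \
  --workspace "/subscriptions/<sub-id>/resourceGroups/myResourceGroup/providers/Microsoft.OperationalInsights/workspaces/my-workspace" \
  --logs '[{"category": "Connections", "enabled": true}]' \
  --metrics '[{"category": "AllMetrics", "enabled": true}]'
```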
-When routing IoT Hub platform metrics to other locations, be aware that:
+When routing IoT Hub platform metrics to other locations:
-- The following platform metrics are not exportable via diagnostic settings: *Connected devices (preview)* and *Total devices (preview)*.
+- These platform metrics aren't exportable via diagnostic settings: *Connected devices* and *Total devices*.
-- Multi-dimensional metrics, for example some [routing metrics](monitor-iot-hub-reference.md#routing-metrics), are currently exported as flattened single dimensional metrics aggregated across dimension values. For more detail, see [Exporting platform metrics to other locations](../azure-monitor/essentials/metrics-supported.md#exporting-platform-metrics-to-other-locations).
+- Multi-dimensional metrics, for example some [routing metrics](monitor-iot-hub-reference.md#routing-metrics), are currently exported as flattened single dimensional metrics aggregated across dimension values. For more information, see [Exporting platform metrics to other locations](../azure-monitor/essentials/metrics-supported.md#exporting-platform-metrics-to-other-locations).
## Analyzing metrics
-You can analyze metrics for Azure IoT Hub with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool.
+You can analyze metrics for Azure IoT Hub with metrics from other Azure services using metrics explorer. For more information on this tool, see [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md).
-In Azure portal, you can select **Metrics** under **Monitoring** on the left-pane of your IoT hub to open metrics explorer scoped, by default, to the platform metrics emitted by your IoT hub:
+To open metrics explorer, go to the Azure portal and open your IoT hub, then select **Metrics** under **Monitoring**. This explorer is scoped, by default, to the platform metrics emitted by your IoT hub.
For a list of the platform metrics collected for Azure IoT Hub, see [Metrics in the Monitoring Azure IoT Hub data reference](monitor-iot-hub-reference.md#metrics). For a list of the platform metrics collected for all Azure services, see [Supported metrics with Azure Monitor](../azure-monitor/essentials/metrics-supported.md).
Data in Azure Monitor Logs is stored in tables where each table has its own set
To route data to Azure Monitor Logs, you must create a diagnostic setting to send resource logs or platform metrics to a Log Analytics workspace. To learn more, see [Collection and routing](#collection-and-routing).
-In Azure portal, you can select **Logs** under **Monitoring** on the left-pane of your IoT hub to perform Log Analytics queries scoped, by default, to the logs and metrics collected in Azure Monitor Logs for your IoT hub.
+To run Log Analytics queries, go to the Azure portal and open your IoT hub, then select **Logs** under **Monitoring**. These Log Analytics queries are scoped, by default, to the logs and metrics collected in Azure Monitor Logs for your IoT hub.
For a list of the tables used by Azure Monitor Logs and queryable by Log Analytics, see [Azure Monitor Logs tables in the Monitoring Azure IoT Hub data reference](monitor-iot-hub-reference.md#azure-monitor-logs-tables).
-All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../azure-monitor/essentials/resource-logs-schema.md#top-level-common-schema). You can find the schema and categories of resource logs collected for Azure IoT Hub in [Resource logs in the Monitoring Azure IoT Hub data reference](monitor-iot-hub-reference.md#resource-logs). Be aware that events are emitted only for errors in some categories.
+All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../azure-monitor/essentials/resource-logs-schema.md#top-level-common-schema). You can find the schema and categories of resource logs collected for Azure IoT Hub in [Resource logs in the Monitoring Azure IoT Hub data reference](monitor-iot-hub-reference.md#resource-logs). Events are emitted only for errors in some categories.
-The [Activity log](../azure-monitor/essentials/activity-log.md) is a platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
+The [Activity log](../azure-monitor/essentials/activity-log.md) is a platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do more complex queries using Log Analytics.
-When routing IoT Hub platform metrics to Azure Monitor Logs, be aware that:
+When routing IoT Hub platform metrics to Azure Monitor Logs:
-- The following platform metrics are not exportable via diagnostic settings: *Connected devices (preview)* and *Total devices (preview)*.
+- The following platform metrics aren't exportable via diagnostic settings: *Connected devices* and *Total devices*.
- Multi-dimensional metrics, for example some [routing metrics](monitor-iot-hub-reference.md#routing-metrics), are currently exported as flattened single dimensional metrics aggregated across dimension values. For more detail, see [Exporting platform metrics to other locations](../azure-monitor/essentials/metrics-supported.md#exporting-platform-metrics-to-other-locations).
-For some common queries with IoT Hub, see [Sample Kusto queries](#sample-kusto-queries). For detailed information on using Log Analytics queries, see [Overview of log queries in Azure Monitor](../azure-monitor/logs/log-query-overview.md).
+For common queries with IoT Hub, see [Sample Kusto queries](#sample-kusto-queries). For more information on using Log Analytics queries, see [Overview of log queries in Azure Monitor](../azure-monitor/logs/log-query-overview.md).
### SDK version in IoT Hub logs
-Some operations in IoT Hub resource logs return an `sdkVersion` property in their `properties` object. For these operations, when a device or backend app is using one of the Azure IoT SDKs, this property contains information about the SDK being used, the SDK version, and the platform on which the SDK is running. The following example shows the `sdkVersion` property emitted for a [`deviceConnect`](monitor-iot-hub-reference.md#connections) operation when using the Node.js device SDK: `"azure-iot-device/1.17.1 (node v10.16.0; Windows_NT 10.0.18363; x64)"`. Here's an example of the value emitted for the .NET (C#) SDK: `".NET/1.21.2 (.NET Framework 4.8.4200.0; Microsoft Windows 10.0.17763 WindowsProduct:0x00000004; X86)"`.
+Some operations in IoT Hub resource logs return an `sdkVersion` property in their `properties` object. For these operations, when a device or backend app is using one of the Azure IoT SDKs, this property contains information about the SDK being used, the SDK version, and the platform on which the SDK is running.
+
+The following examples show the `sdkVersion` property emitted for a [`deviceConnect`](monitor-iot-hub-reference.md#connections) operation using:
+
+* The Node.js device SDK: `"azure-iot-device/1.17.1 (node v10.16.0; Windows_NT 10.0.18363; x64)"`
+* The .NET (C#) SDK: `".NET/1.21.2 (.NET Framework 4.8.4200.0; Microsoft Windows 10.0.17763 WindowsProduct:0x00000004; X86)"`.
The following table shows the SDK name used for different Azure IoT SDKs:
AzureDiagnostics
### Sample Kusto queries
-> [!IMPORTANT]
-> When you select **Logs** from the IoT hub menu, Log Analytics is opened with the query scope set to the current IoT hub. This means that log queries will only include data from that resource. If you want to run a query that includes data from other IoT hubs or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](../azure-monitor/logs/scope.md) for details.
+Use the following [Kusto](/azure/data-explorer/kusto/query/) queries to help you monitor your IoT hub.
-Following are queries that you can use to help you monitor your IoT hub.
+> [!IMPORTANT]
+> Selecting **Logs** from the **IoT Hub** menu opens **Log Analytics** and includes data solely from your IoT hub resource. For queries that include data from other IoT hubs or Azure services, select **Logs** from the [**Azure Monitor** menu](https://portal.azure.com/#view/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/~/logs). For more information, see [Log query scope and time range in Azure Monitor Log Analytics](../azure-monitor/logs/scope.md).
-- Connectivity Errors: Identify device connection errors.
+- **Connectivity Errors**: Identify device connection errors.
```kusto AzureDiagnostics
Following are queries that you can use to help you monitor your IoT hub.
| where Category == "Connections" and Level == "Error" ``` -- Throttling Errors: Identify devices that made the most requests resulting in throttling errors.
+- **Throttling Errors**: Identify devices that made the most requests resulting in throttling errors.
```kusto AzureDiagnostics
Following are queries that you can use to help you monitor your IoT hub.
| order by count_ desc ``` -- Dead Endpoints: Identify dead or unhealthy endpoints by the number times the issue was reported, as well as the reason why.
+- **Dead Endpoints**: Identify dead or unhealthy endpoints by the number of times the issue was reported, along with the reason why.
```kusto AzureDiagnostics
Following are queries that you can use to help you monitor your IoT hub.
| order by count_ desc ``` -- Error summary: Count of errors across all operations by type.
+- **Error summary**: Count of errors across all operations by type.
```kusto AzureDiagnostics
Following are queries that you can use to help you monitor your IoT hub.
| summarize count() by ResultType, ResultDescription, Category, _ResourceId ``` -- Recently connected devices: List of devices that IoT Hub saw connect in the specified time period.
+- **Recently connected devices**: List of devices that IoT Hub saw connect in the specified time period.
```kusto AzureDiagnostics
Following are queries that you can use to help you monitor your IoT hub.
| summarize max(TimeGenerated) by DeviceId, _ResourceId ``` -- Connection events for a specific device: All connection events logged for a specific device (*test-device*).
+- **Connection events for a specific device**: All connection events logged for a specific device (*test-device*).
```kusto AzureDiagnostics
Following are queries that you can use to help you monitor your IoT hub.
| where DeviceId == "test-device" ``` -- SDK version of devices: List of devices and their SDK versions for device connections or device to cloud twin operations.
+- **SDK version of devices**: List of devices and their SDK versions for device connections or device to cloud twin operations.
```kusto AzureDiagnostics
Following are queries that you can use to help you monitor your IoT hub.
### Read logs from Azure Event Hubs
-After you set up event logging through diagnostics settings, you can create applications that read out the logs so that you can take action based on the information in them. This sample code retrieves logs from an event hub:
+After you set up event logging through diagnostics settings, you can create applications that read out the logs so that you can take action based on the information in them. The following sample code retrieves logs from an event hub.
```csharp class Program
class Program
Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](../azure-monitor/alerts/alerts-metric-overview.md), [logs](../azure-monitor/alerts/alerts-unified-log.md), and the [activity log](../azure-monitor/alerts/activity-log-alerts.md). Different types of alerts have benefits and drawbacks.
-When creating an alert rule based on platform metrics, be aware that for IoT Hub platform metrics that are collected in units of count, some aggregations may not be available or usable. To learn more, see [Supported aggregations in the Monitoring Azure IoT Hub data reference](monitor-iot-hub-reference.md#supported-aggregations).
+When you create an alert rule based on platform metrics, be aware that for IoT Hub platform metrics collected in units of count, some aggregations may not be available or usable. For more information, see [Supported aggregations](monitor-iot-hub-reference.md#supported-aggregations) in **Monitoring Azure IoT Hub data reference**.
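As a hedged sketch (the subscription, resource group, and hub names are hypothetical placeholders), a metric alert on the *Connected devices* platform metric might look like the following; *Average* is used here because count-based IoT Hub metrics don't support every aggregation.

```azurecli
# Alert when the average number of connected devices drops below 5.
az monitor metrics alert create \
  --name connected-devices-low \
  --resource-group myResourceGroup \
  --scopes "/subscriptions/<sub-id>/resourceGroups/myResourceGroup/providers/Microsoft.Devices/IotHubs/my-iot-hub" \
  --condition "avg connectedDeviceCount < 5" \
  --window-size 5m \
  --evaluation-frequency 1m \
  --description "Connected device count dropped below the expected threshold"
```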
## Monitor per-device disconnects with Event Grid
-Azure Monitor provides a metric, *Connected devices*, that you can use to monitor the number of devices connected to your IoT Hub and trigger an alert when number of connected devices drops below a threshold value. Azure Monitor also emits events in the [connections category](monitor-iot-hub-reference.md#connections) that you can use to monitor device connects, disconnects, and connection errors. While these may be sufficient for some scenarios, [Azure Event Grid](../event-grid/index.yml) provides a low-latency, per-device monitoring solution that you can use to track device connections for critical devices and infrastructure.
+Azure Monitor provides a metric, *Connected devices*, that you can use to monitor the number of devices connected to your IoT Hub and to trigger an alert when the number of connected devices drops below a threshold value. Azure Monitor also emits events in the [connections category](monitor-iot-hub-reference.md#connections) that you can use to monitor device connects, disconnects, and connection errors. While these events may be sufficient for some scenarios, [Azure Event Grid](../event-grid/index.yml) provides a low-latency, per-device monitoring solution that you can use to track device connections for critical devices and infrastructure.
-With Event Grid, you can subscribe to the IoT Hub [**DeviceConnected** and **DeviceDisconnected** events](iot-hub-event-grid.md#event-types) to trigger alerts and monitor device connection state. Event Grid provides much lower event latency than Azure Monitor, and you can monitor on a per-device basis, rather than for the total number of connected devices. These factors make Event Grid the preferred method for monitoring connections for critical devices and infrastructure. We highly recommend using Event Grid to monitor device connections in production environments.
+With Event Grid, you can subscribe to the IoT Hub [**DeviceConnected** and **DeviceDisconnected** events](iot-hub-event-grid.md#event-types) to trigger alerts and monitor device connection state. Event Grid provides much lower event latency than Azure Monitor, and you can monitor on a per-device basis rather than for the total number of connected devices. These factors make Event Grid the preferred method for monitoring connections for critical devices and infrastructure. We highly recommend using Event Grid to monitor device connections in production environments.
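A hedged sketch of such a subscription (the hub resource ID and webhook endpoint are hypothetical placeholders):

```azurecli
# Subscribe a webhook to per-device connect and disconnect events from the IoT hub.
az eventgrid event-subscription create \
  --name device-connection-state \
  --source-resource-id "/subscriptions/<sub-id>/resourceGroups/myResourceGroup/providers/Microsoft.Devices/IotHubs/my-iot-hub" \
  --endpoint "https://contoso.example.com/api/connection-events" \
  --included-event-types Microsoft.Devices.DeviceConnected Microsoft.Devices.DeviceDisconnected
```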
-For more detailed information about monitoring device connectivity with Event Grid and Azure Monitor, see [Monitor, diagnose, and troubleshoot device connectivity to Azure IoT Hub](iot-hub-troubleshoot-connectivity.md).
+For more information about monitoring device connectivity with Event Grid and Azure Monitor, see [Monitor, diagnose, and troubleshoot device connectivity to Azure IoT Hub](iot-hub-troubleshoot-connectivity.md).
## Next steps -- See [Monitoring Azure IoT Hub data reference](monitor-iot-hub-reference.md) for a reference of the metrics, logs, and other important values created by [service name].
+- [Monitoring Azure IoT Hub data reference](monitor-iot-hub-reference.md): a reference of the metrics, logs, and other important values created by IoT Hub.
-- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
+- [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md): monitoring Azure resources.
-- See [Monitor, diagnose, and troubleshoot device connectivity to Azure IoT Hub](iot-hub-troubleshoot-connectivity.md) for details on monitoring device connectivity.
+- [Monitor, diagnose, and troubleshoot device connectivity to Azure IoT Hub](iot-hub-troubleshoot-connectivity.md): monitoring device connectivity.
key-vault Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/backup-restore.md
az keyvault backup start --hsm-name mhsmdemo2 --storage-account-name mhsmdemobac
Full restore allows you to completely restore the contents of the HSM with a previous backup, including all keys, versions, attributes, tags, and role assignments. Everything currently stored in the HSM will be wiped out, and it will return to the same state it was in when the source backup was created. > [!IMPORTANT]
-> Full restore is a very destructive and disruptive operation. Therefore it is mandatory to have completed a full backup within last 30 minutes before a `restore` operation can be performed.
+> Full restore is a very destructive and disruptive operation. Therefore it is mandatory to have completed a full backup at least 30 minutes before a `restore` operation can be performed.
Restore is a data plane operation. The caller starting the restore operation must have permission to perform dataAction **Microsoft.KeyVault/managedHsm/restore/start/action**. The source HSM where the backup was created and the destination HSM where the restore will be performed **must** have the same Security Domain. See more [about Managed HSM Security Domain](security-domain.md).
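A hedged sketch of the corresponding restore command is shown below; the storage account, container, backup folder, and SAS token values are illustrative assumptions that mirror the backup example above rather than values from the article.

```azurecli
# Sketch: restore a managed HSM from an earlier full backup. The storage
# account, container, backup folder, and SAS token are placeholder values.
az keyvault restore start \
  --hsm-name mhsmdemo2 \
  --storage-account-name mhsmdemobackup \
  --blob-container-name mhsmdemobackupcontainer \
  --storage-container-SAS-token "<sas-token>" \
  --backup-folder mhsm-backup-foldername
```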
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/quick-create-powershell.md
In this quickstart, you will create and activate an Azure Key Vault Managed HSM
If you do not have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+The service is available in limited regions. To learn more about availability, see [Azure Dedicated HSM purchase options](https://azure.microsoft.com/pricing/details/azure-dedicated-hsm).
+ [!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)] If you choose to install and use PowerShell locally, this tutorial requires Azure PowerShell module version 1.0.0 or later. Type `$PSVersionTable.PSVersion` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you are running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
Login-AzAccount
## Create a resource group
-A resource group is a logical container into which Azure resources are deployed and managed. Use the Azure PowerShell [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) cmdlet to create a resource group named *myResourceGroup* in the *westus3* location.
+A resource group is a logical container into which Azure resources are deployed and managed. Use the Azure PowerShell [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) cmdlet to create a resource group named *myResourceGroup* in the *eastus* location.
```azurepowershell-interactive
-New-AzResourceGroup -Name "myResourceGroup" -Location "westus3"
+New-AzResourceGroup -Name "myResourceGroup" -Location "eastus"
``` ## Get your principal ID
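For the principal ID step, a minimal Azure PowerShell sketch might look like the following; the user principal name is a placeholder for your own account.

```azurepowershell-interactive
# Sketch: look up the object (principal) ID of the administrator account.
# The UPN below is a placeholder; substitute the account you signed in with.
$principalId = (Get-AzADUser -UserPrincipalName "admin@contoso.com").Id
$principalId
```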
Use the Azure PowerShell [New-AzKeyVaultManagedHsm](/powershell/module/az.keyvau
- Your principal ID: Pass the Azure Active Directory principal ID that you obtained in the last section to the "Administrator" parameter. ```azurepowershell-interactive
-New-AzKeyVaultManagedHsm -Name "<your-unique-managed-hsm-name>" -ResourceGroupName "myResourceGroup" -Location "westus3" -Administrator "<your-principal-ID>"
+New-AzKeyVaultManagedHsm -Name "<your-unique-managed-hsm-name>" -ResourceGroupName "myResourceGroup" -Location "eastus" -Administrator "<your-principal-ID>"
``` > [!NOTE] > The create command can take a few minutes. Once it returns successfully you are ready to activate your HSM.
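After the HSM is created, activation requires downloading its security domain. A hedged PowerShell sketch is shown below; the certificate file names, output path, and quorum are illustrative assumptions (a managed HSM needs at least three RSA public keys and a quorum of at least two).

```azurepowershell-interactive
# Sketch: download the security domain to activate the HSM. Certificate file
# names, output path, and quorum are illustrative placeholders.
Export-AzKeyVaultSecurityDomain -Name "<your-unique-managed-hsm-name>" `
    -Certificates "cert1.cer", "cert2.cer", "cert3.cer" `
    -OutputPath "MHSMsd.ps.json" `
    -Quorum 2
```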
lab-services Classroom Labs Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/classroom-labs-scenarios.md
Last updated 01/04/2022
[!INCLUDE [preview note](./includes/lab-services-new-update-note.md)]
-Azure Labs Services allows educators (teachers, professors, trainers, or teaching assistants, etc.) to quickly and easily create an online lab to provision pre-configured learning environments for the trainees. Each trainee would be able use identical and isolated environments for the training. Policies can be applied to ensure that the training environments are available to each trainee only when they need them and contain enough resources - such as virtual machines - required for the training.
+Azure Lab Services allows educators (teachers, professors, trainers, or teaching assistants, etc.) to quickly and easily create an online lab to provision pre-configured learning environments for the trainees. Each trainee would be able to use identical and isolated environments for the training. Policies can be applied to ensure that the training environments are available to each trainee only when they need them and contain enough resources, such as virtual machines, required for the training.
![Lab](./media/classroom-labs-scenarios/classroom.png)
load-testing How To Test Secured Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-test-secured-endpoints.md
+Title: Load test secured endpoints
+description: Learn how to load test secured endpoints with Azure Load Testing. Use shared secrets, credentials, or client certificates for load testing applications that require authentication.
+Last updated: 09/28/2022
+# Load test secured endpoints with Azure Load Testing Preview
+
+In this article, you learn how to use Azure Load Testing Preview to load test applications that require authentication. Azure Load Testing enables you to [authenticate with endpoints by using shared sec