Updates from: 04/21/2022 01:08:04
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Conditional Access User Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/conditional-access-user-flow.md
Previously updated : 03/03/2022 Last updated : 04/10/2022
To add a Conditional Access policy:
| Include | License | Notes |
|---|---|---|
- |**All users** | P1, P2 | If you choose to include **All Users**, this policy will affect all of your users. To be sure not to lock yourself out, exclude your administrative account by choosing **Exclude**, selecting **Directory roles**, and then selecting **Global Administrator** in the list. You can also select **Users and Groups** and then select your account in the **Select excluded users** list. |
+ |**All users** | P1, P2 | This policy will affect all of your users. To be sure not to lock yourself out, exclude your administrative account by choosing **Exclude**, selecting **Directory roles**, and then selecting **Global Administrator** in the list. You can also select **Users and Groups** and then select your account in the **Select excluded users** list. |
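The include/exclude choice in the table above can also be expressed as the `users` condition of a Conditional Access policy body for the Microsoft Graph API. The sketch below is illustrative only: the display name and state are hypothetical, and while the role template ID shown is the well-known Global Administrator ID, verify it and the field names against the Graph `conditionalAccessPolicy` reference before use.

```python
import json

# Well-known directory role template ID for Global Administrator; verify it
# in your own tenant before relying on it.
GLOBAL_ADMIN_ROLE_TEMPLATE_ID = "62e90394-69f5-4237-9190-012177145e10"

# Hypothetical policy body mirroring the portal steps: include all users,
# exclude the Global Administrator role so you can't lock yourself out.
policy_body = {
    "displayName": "B2C sign-in risk policy (hypothetical name)",
    "state": "enabledForReportingButNotEnforced",  # report-only while testing
    "conditions": {
        "users": {
            "includeUsers": ["All"],
            "excludeRoles": [GLOBAL_ADMIN_ROLE_TEMPLATE_ID],
        },
    },
}

print(json.dumps(policy_body, indent=2))
```

Starting in report-only mode (`enabledForReportingButNotEnforced`) lets you confirm the exclusion works before the policy is enforced.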
1. Select **Cloud apps or actions**, and then **Select apps**. Browse for your [relying party application](tutorial-register-applications.md).
1. Select **Conditions**, and then select from the following conditions. For example, select **Sign-in risk** and **High**, **Medium**, and **Low** risk levels.
To add a Conditional Access policy:
| **User risk** | P2 | User risk represents the probability that a given identity or account is compromised. |
| **Sign-in risk** | P2 | Sign-in risk represents the probability that a given authentication request isn't authorized by the identity owner. |
| **Device platforms** | Not supported | Characterized by the operating system that runs on a device. For more information, see [Device platforms](../active-directory/conditional-access/concept-conditional-access-conditions.md#device-platforms). |
- | **Locations** |P1,P2 |Named locations may include the public IPv4 network information, country or region, or unknown areas that don't map to specific countries or regions. For more information, see [Locations](../active-directory/conditional-access/concept-conditional-access-conditions.md#locations). |
+ | **Locations** |P1, P2 |Named locations may include the public IPv4 network information, country or region, or unknown areas that don't map to specific countries or regions. For more information, see [Locations](../active-directory/conditional-access/concept-conditional-access-conditions.md#locations). |
3. Under **Access controls**, select **Grant**. Then select whether to block or grant access:
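The condition and access-control choices above map onto fragments of the same Graph policy body. This is a sketch under the published `conditionalAccessPolicy` schema; the application ID is a placeholder, not a real value.

```python
# Hypothetical condition and grant-control fragments of a Conditional Access
# policy body; "<relying-party-app-id>" is a placeholder for your app's ID.
conditions = {
    "signInRiskLevels": ["high", "medium", "low"],
    "applications": {"includeApplications": ["<relying-party-app-id>"]},
}

# Under Access controls you either block access outright...
grant_block = {"operator": "OR", "builtInControls": ["block"]}
# ...or grant it subject to a control such as multifactor authentication.
grant_mfa = {"operator": "OR", "builtInControls": ["mfa"]}
```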
To review the result of a Conditional Access event:
## Next steps
-[Customize the user interface in an Azure AD B2C user flow](customize-ui-with-html.md)
+[Customize the user interface in an Azure AD B2C user flow](customize-ui-with-html.md)
active-directory-b2c Tenant Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tenant-management.md
Title: Manage your Azure Active Directory B2C
-description: Learn how to manage your Azure Active Directory B2C tenant. Learn which Azure AD features are supported in Azure AD B2C, how to use administrator roles to manage resources, and how to add work accounts and guest users to your Azure AD B2C tenant.
+description: Learn how to manage your Azure Active Directory B2C tenant. Learn which Azure AD features are supported in Azure AD B2C, how to use administrator roles to manage resources, and how to add work accounts and guest users to your Azure AD B2C tenant, and how to manage emergency access accounts in Azure AD B2C.
Previously updated : 10/25/2021 Last updated : 04/20/2022
# Manage your Azure Active Directory B2C tenant
-In Azure Active Directory B2C (Azure AD B2C), a tenant represents your directory of consumer users. Each Azure AD B2C tenant is distinct and separate from any other Azure AD B2C tenant. An Azure AD B2C tenant is different than an Azure Active Directory tenant, which you may already have. In this article, you learn how to manage your Azure AD B2C tenant.
+In Azure Active Directory B2C (Azure AD B2C), a tenant represents your directory of consumer users. Each Azure AD B2C tenant is distinct and separate from any other Azure AD B2C tenant. An Azure AD B2C tenant is different than an Azure Active Directory (Azure AD) tenant, which you may already have. In this article, you learn how to manage your Azure AD B2C tenant.
## Prerequisites
- If you haven't already created your own [Azure AD B2C Tenant](tutorial-create-tenant.md), create one now. You can use an existing Azure AD B2C tenant.
To create a new administrative account, follow these steps:
1. Copy the autogenerated password provided in the **Password** box. You'll need to give this password to the user to sign in for the first time.
1. Select **Create**.
-The user is created and added to your Azure AD B2C tenant. It's preferable to have at least one work account native to your Azure AD B2C tenant assigned the Global Administrator role. This account can be considered a *break-glass account*.
+The user is created and added to your Azure AD B2C tenant. It's preferable to have at least one work account native to your Azure AD B2C tenant assigned the Global Administrator role. This account can be considered a *break-glass account* or an *[emergency access account](#manage-emergency-access-accounts-in-azure-ad-b2c)*.
+
+## Manage emergency access accounts in Azure AD B2C
+
+It's important to prevent being accidentally locked out of your Azure Active Directory B2C (Azure AD B2C) organization, which can happen when you can't sign in and no other administrator can activate your account. You can mitigate the impact of accidental lack of administrative access by creating two or more *emergency access accounts* in your organization.
+
+When configuring these accounts, the following requirements need to be met:
+
+- The emergency access accounts shouldn't be associated with any individual user in the organization. Make sure that your accounts aren't connected with any employee-supplied mobile phones, hardware tokens that travel with individual employees, or other employee-specific credentials. This precaution covers instances where an individual employee is unreachable when the credential is needed. It's important to ensure that any registered devices are kept in a known, secure location that has multiple means of communicating with Azure AD B2C.
+
+- Use strong authentication for your emergency access accounts and make sure they don't use the same authentication methods as your other administrative accounts. For example, if your normal administrator account uses the Microsoft Authenticator app for strong authentication, use a FIDO2 security key for your emergency accounts.
+
+- The device or credential must not expire or be in scope of automated cleanup due to lack of use.
+### Create emergency access account
+
+Create two or more emergency access accounts. These accounts should be cloud-only accounts that use the *.onmicrosoft.com domain and that aren't federated or synchronized from an on-premises environment.
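A cloud-only account like the one described here can be sketched as a Microsoft Graph request body (`POST /users`). This is a hypothetical example: the tenant name, user name, and password are placeholders, and field names should be checked against the Graph `user` resource reference before use.

```python
# Hypothetical request body for POST /users in Microsoft Graph; the tenant
# name and password are placeholders. The *.onmicrosoft.com UPN keeps the
# account cloud-only (not federated or synchronized from on-premises).
emergency_user = {
    "accountEnabled": True,
    "displayName": "Emergency Account",
    "userPrincipalName": "emergency1@contoso.onmicrosoft.com",  # placeholder tenant
    "mailNickname": "emergency1",
    "passwordProfile": {
        "password": "<long-random-password>",  # placeholder; store securely offline
        "forceChangePasswordNextSignIn": False,
    },
}

# Sanity check: a cloud-only UPN ends in .onmicrosoft.com.
assert emergency_user["userPrincipalName"].endswith(".onmicrosoft.com")
```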
+
+Use the following steps to create an emergency access account:
+
+1. Sign in to the [Azure portal](https://portal.azure.com) as an existing Global Administrator. If you use your Azure AD account, make sure you're using the directory that contains your Azure AD B2C tenant:
+
+ 1. Select the **Directories + subscriptions** icon in the portal toolbar.
+
+ 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+
+1. Under **Azure services**, select **Azure AD B2C**. Or in the Azure portal, search for and select **Azure AD B2C**.
+
+1. In the left menu, under **Manage**, select **Users**.
+
+1. Select **+ New user**.
+
+1. Select **Create user**.
+
+1. Under **Identity**:
+
+ 1. For **User name**, enter a unique user name such as *emergency account*.
+
+   1. For **Name**, enter a name such as *Emergency Account*.
+
+1. Under **Password**, enter your unique password.
+
+1. Under **Groups and roles**:
+
+ 1. Select **User**.
+
+   1. In the pane that appears, search for and select **Global administrator**, and then select the **Select** button.
+
+1. Under **Settings**, select the appropriate **Usage location**.
+
+1. Select **Create**.
+
+1. [Store account credentials safely](../active-directory/roles/security-emergency-access.md#store-account-credentials-safely).
+
+1. [Monitor sign in and audit logs](../active-directory/roles/security-emergency-access.md#monitor-sign-in-and-audit-logs).
+
+1. [Validate accounts regularly](../active-directory/roles/security-emergency-access.md#validate-accounts-regularly).
+
+Once you create your emergency accounts, you need to do the following:
+
+- Make sure you [exclude at least one account from phone-based multi-factor authentication](../active-directory/roles/security-emergency-access.md#exclude-at-least-one-account-from-phone-based-multi-factor-authentication).
+
+- If you use [Conditional Access](conditional-access-user-flow.md), at least one emergency access account needs to be excluded from all Conditional Access policies.
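A quick way to audit the exclusion requirement above is to check each Conditional Access policy for at least one excluded emergency account. The sketch below uses hypothetical sample data standing in for the policy objects you'd page back from `GET /identity/conditionalAccess/policies`; the account IDs and policy names are placeholders.

```python
# Placeholder object IDs for the emergency access accounts.
EMERGENCY_ACCOUNT_IDS = {"id-emergency-1", "id-emergency-2"}

# Hypothetical sample policies, shaped like Graph conditionalAccessPolicy
# objects (only the fields this check needs).
sample_policies = [
    {"displayName": "Require MFA",
     "conditions": {"users": {"excludeUsers": ["id-emergency-1"]}}},
    {"displayName": "Block legacy auth",
     "conditions": {"users": {"excludeUsers": []}}},
]

def policies_missing_exclusion(policies, emergency_ids):
    """Return names of policies that exclude none of the emergency accounts."""
    missing = []
    for p in policies:
        excluded = set(p["conditions"]["users"].get("excludeUsers", []))
        if not (excluded & emergency_ids):
            missing.append(p["displayName"])
    return missing

print(policies_missing_exclusion(sample_policies, EMERGENCY_ACCOUNT_IDS))
# → ['Block legacy auth']
```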
## Invite an administrator (guest account)
To invite a user, follow these steps:
1. Select **Create**.
-An invitation email is sent to the user. The user needs to accept the invitation to be able to sign in.
+An invitation email is sent to the user. The user needs to accept the invitation to be able to sign in.
### Resend the invitation email
The user is deleted and no longer appears on the **Users - All users** page. The
## Protect administrative accounts
-It's recommended that you protect all administrator accounts with multifactor authentication (MFA) for more security. MFA is an identity verification process during sign-in that prompts the user for a more form of identification, such as a verification code on their mobile device or a request in their Microsoft Authenticator app.
+It's recommended that you protect all administrator accounts with multifactor authentication (MFA) for more security. MFA is an identity verification process during sign-in that prompts the user for another form of identification, such as a verification code on their mobile device or a request in their Microsoft Authenticator app.
-![Authentication methods in use at the sign-in screenshot](./media/tenant-management/sing-in-with-multi-factor-authentication.png)
+![Screenshot of authentication methods in use at sign-in.](./media/tenant-management/sing-in-with-multi-factor-authentication.png)
-You can enable [Azure AD security defaults](../active-directory/fundamentals/concept-fundamentals-security-defaults.md) to force all administrative accounts to use MFA.
+If you're not using [Conditional Access](conditional-access-user-flow.md), you can enable [Azure AD security defaults](../active-directory/fundamentals/concept-fundamentals-security-defaults.md) to force all administrative accounts to use MFA.
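Security defaults are toggled through a single Graph property. The sketch below only constructs the request; nothing is sent, and the endpoint/property names follow the published `identitySecurityDefaultsEnforcementPolicy` resource, which you should verify against the current Graph reference.

```python
import json

# Graph endpoint for the security defaults policy (no request is sent here).
url = "https://graph.microsoft.com/v1.0/policies/identitySecurityDefaultsEnforcementPolicy"
patch_body = {"isEnabled": True}  # set False to turn security defaults off

print("PATCH", url)
print(json.dumps(patch_body))
```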
## Get your tenant name
active-directory Use Scim To Build Users And Groups Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/use-scim-to-build-users-and-groups-endpoints.md
The steps here deploy the SCIM endpoint to a service by using [Visual Studio 201
![Screenshot that shows the Application settings window.](media/use-scim-to-build-users-and-groups-endpoints/app-service-settings.png)
- When you test your endpoint with an enterprise application in the [Azure portal](use-scim-to-provision-users-and-groups.md#integrate-your-scim-endpoint-with-the-aad-scim-client), you have two options. You can keep the environment in `Development` and provide the testing token from the `/scim/token` endpoint, or you can change the environment to `Production` and leave the token field empty.
+ When you test your endpoint with an enterprise application in the [Azure portal](use-scim-to-provision-users-and-groups.md#integrate-your-scim-endpoint-with-the-azure-ad-scim-client), you have two options. You can keep the environment in `Development` and provide the testing token from the `/scim/token` endpoint, or you can change the environment to `Production` and leave the token field empty.
That's it! Your SCIM endpoint is now published, and you can use the Azure App Service URL to test the SCIM endpoint.
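When testing in the `Development` environment, the token from `/scim/token` is sent as a bearer token. This is a hypothetical sketch of the headers you'd attach; the token value and host are placeholders, not real credentials.

```python
# Placeholder testing token returned by the sample's /scim/token endpoint.
test_token = "<token-from-/scim/token>"

# Headers for requests against the SCIM endpoint while testing.
headers = {
    "Authorization": f"Bearer {test_token}",
    "Accept": "application/scim+json",
}

scim_users_url = "https://<your-app>.azurewebsites.net/scim/Users"  # placeholder host
```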
active-directory How To Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-configure.md
To configure provisioning, follow these steps.
![Edit configuration](media/how-to-configure/con-1.png)
- 7. Enter a **Notification email**. This email will be notified when provisioning isn't healthy. It is recommended that you keep **Prevent accidental deletion** enabled and set the **Accidental deletion threshold** to a number that you wish to be notified about. For more information see [accidental deletes](#accidental-deletions) below.
+ 7. Enter a **Notification email**. This email will be notified when provisioning isn't healthy. It is recommended that you keep **Prevent accidental deletion** enabled and set the **Accidental deletion threshold** to a number that you wish to be notified about. For more information, see [accidental deletes](#accidental-deletions) below.
8. Move the selector to Enable, and select Save.
## Scope provisioning to specific users and groups
You can scope the agent to synchronize specific users and groups by using on-premises Active Directory groups or organizational units. You can't configure groups and organizational units within a configuration.
+ >[!NOTE]
+ > You cannot use nested groups with group scoping. Nested objects beyond the first level will not be included when scoping using security groups. Only use group scope filtering for pilot scenarios, because there are limitations to syncing large groups.
+
1. In the Azure portal, select **Azure Active Directory**.
2. Select **Azure AD Connect**.
You can scope the agent to synchronize specific users and groups by using on-pre
7. Once you have changed the scope, you should [restart provisioning](#restart-provisioning) to initiate an immediate synchronization of the changes.
## Attribute mapping
-Azure AD Connect cloud sync allows you to easily map attributes between your on-premises user/group objects and the objects in Azure AD. You can customize the default attribute-mappings according to your business needs. So, you can change or delete existing attribute-mappings, or create new attribute-mappings. For more information see [attribute mapping](how-to-attribute-mapping.md).
+Azure AD Connect cloud sync allows you to easily map attributes between your on-premises user/group objects and the objects in Azure AD. You can customize the default attribute-mappings according to your business needs. So, you can change or delete existing attribute-mappings, or create new attribute-mappings. For more information, see [attribute mapping](how-to-attribute-mapping.md).
## On-demand provisioning
-Azure AD Connect cloud sync allows you to test configuration changes, by applying these changes to a single user or group. You can use this to validate and verify that the changes made to the configuration were applied properly and are being correctly synchronized to Azure AD. For more information see [on-demand provisioning](how-to-on-demand-provision.md).
+Azure AD Connect cloud sync allows you to test configuration changes, by applying these changes to a single user or group. You can use this to validate and verify that the changes made to the configuration were applied properly and are being correctly synchronized to Azure AD. For more information, see [on-demand provisioning](how-to-on-demand-provision.md).
## Accidental deletions
The accidental delete feature is designed to protect you from accidental configuration changes and changes to your on-premises directory that would affect many users and groups. This feature allows you to:
- configure the ability to prevent accidental deletes automatically.
- Set the # of objects (threshold) beyond which the configuration will take effect
-- setup a notification email address so they can get an email notification once the sync job in question is put in quarantine for this scenario
+- set up a notification email address so they can get an email notification once the sync job in question is put in quarantine for this scenario
-For more information see [Accidental deletes](how-to-accidental-deletes.md)
+For more information, see [Accidental deletes](how-to-accidental-deletes.md)
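The threshold behavior described above can be modeled in a few lines. This is an illustrative sketch of the safeguard's logic, not the service's actual implementation; the threshold value and notification address are placeholders.

```python
# Illustrative model of the accidental-deletion safeguard: a run whose
# pending deletes exceed the configured threshold is quarantined, and the
# notification address is emailed instead of the deletes being applied.
ACCIDENTAL_DELETION_THRESHOLD = 50        # value you set in the portal
NOTIFICATION_EMAIL = "admin@example.com"  # placeholder address

def evaluate_sync_run(pending_deletes):
    """Quarantine the run if it would delete more objects than the threshold."""
    if len(pending_deletes) > ACCIDENTAL_DELETION_THRESHOLD:
        return {"status": "quarantined", "notify": NOTIFICATION_EMAIL}
    return {"status": "proceed", "notify": None}

print(evaluate_sync_run(range(100))["status"])  # exceeds the threshold of 50
```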
## Quarantines
-Cloud sync monitors the health of your configuration and places unhealthy objects in a quarantine state. If most or all of the calls made against the target system consistently fail because of an error, for example, invalid admin credentials, the sync job is marked as in quarantine. For more information see the troubleshooting section on [quarantines](how-to-troubleshoot.md#provisioning-quarantined-problems).
+Cloud sync monitors the health of your configuration and places unhealthy objects in a quarantine state. If most or all of the calls made against the target system consistently fail because of an error, for example, invalid admin credentials, the sync job is marked as in quarantine. For more information, see the troubleshooting section on [quarantines](how-to-troubleshoot.md#provisioning-quarantined-problems).
## Restart provisioning
If you don't want to wait for the next scheduled run, trigger the provisioning run by using the **Restart provisioning** button.
active-directory Application Consent Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/application-consent-experience.md
Previously updated : 04/06/2021 Last updated : 04/18/2022
The following diagram and table provide information about the building blocks of
| 1 | User identifier | This identifier represents the user that the client application is requesting to access protected resources on behalf of. |
| 2 | Title | The title changes based on whether the users are going through the user or admin consent flow. In user consent flow, the title will be "Permissions requested" while in the admin consent flow the title will have an additional line "Accept for your organization". |
| 3 | App logo | This image should help users have a visual cue of whether this app is the app they intended to access. This image is provided by application developers and the ownership of this image isn't validated. |
-| 4 | App name | This value should inform users which application is requesting access to their data. Note this name is provided by the developers and the ownership of this app name isn't validated. |
-| 5 | Publisher domain | This value should provide users with a domain they may be able to evaluate for trustworthiness. This domain is provided by the developers and the ownership of this publisher domain is validated. |
-| 6 | Publisher verified | The blue "verified" badge means that the app publisher has verified their identity using a Microsoft Partner Network account and has completed the verification process.|
-| 7 | Publisher information | Displays whether the application is published by Microsoft or your organization. |
+| 4 | App name | This value should inform users which application is requesting access to their data. Note this name is provided by the developers and the ownership of this app name isn't validated.|
+| 5 | Publisher name and verification | The blue "verified" badge means that the app publisher has verified their identity using a Microsoft Partner Network account and has completed the verification process. If the app is publisher verified, the publisher name is displayed. If the app is not publisher verified, "Unverified" is displayed instead of a publisher name. For more information, read about [Publisher Verification](publisher-verification-overview.md). Selecting the publisher name displays more app info as available, such as the publisher name, publisher domain, date created, certification details, and reply URLs. |
+| 6 | Microsoft 365 Certification | The Microsoft 365 Certification logo means that an app has been vetted against controls derived from leading industry standard frameworks, and that strong security and compliance practices are in place to protect customer data. For more information, read about [Microsoft 365 Certification](/microsoft-365-app-certification/docs/enterprise-app-certification-guide).|
+| 7 | Publisher information | Displays whether the application is published by Microsoft. |
| 8 | Permissions | This list contains the permissions being requested by the client application. Users should always evaluate the types of permissions being requested to understand what data the client application will be authorized to access on their behalf if they accept. As an application developer it is best to request access to the permissions with the least privilege. |
| 9 | Permission description | This value is provided by the service exposing the permissions. To see the permission descriptions, you must toggle the chevron next to the permission. |
-| 10| App terms | These terms contain links to the terms of service and privacy statement of the application. The publisher is responsible for outlining their rules in their terms of service. Additionally, the publisher is responsible for disclosing the way they use and share user data in their privacy statement. If the publisher doesn't provide links to these values for multi-tenant applications, there will be a bolded warning on the consent prompt. |
-| 11 | https://myapps.microsoft.com | This is the link where users can review and remove any non-Microsoft applications that currently have access to their data. |
-| 12 | Report it here | This link is used to report a suspicious app if you don't trust the app, if you believe the app is impersonating another app, if you believe the app will misuse your data, or for some other reason. |
+| 10 | https://myapps.microsoft.com | This is the link where users can review and remove any non-Microsoft applications that currently have access to their data. |
+| 11 | Report it here | This link is used to report a suspicious app if you don't trust the app, if you believe the app is impersonating another app, if you believe the app will misuse your data, or for some other reason. |
## App requires a permission within the user's scope of authority
active-directory Groups Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-features.md
na Previously updated : 10/07/2021 Last updated : 04/18/2022
In Privileged Identity Management (PIM), you can now assign eligibility for membership or ownership of privileged access groups. Starting with this preview, you can assign Azure Active Directory (Azure AD) built-in roles to cloud groups and use PIM to manage group member and owner eligibility and activation. For more information about role-assignable groups in Azure AD, see [Use Azure AD groups to manage role assignments](../roles/groups-concept.md).
->[!Important]
-> To assign a privileged access group to a role for administrative access to Exchange, Security & Compliance Center, or SharePoint, use the Azure AD portal **Roles and Administrators** experience and not in the Privileged Access Groups experience to make the user or group eligible for activation into the group.
+> [!Important]
+> To provide a group of users with just-in-time access to roles with permissions in SharePoint, Exchange, or Security & Compliance Center, be sure to make permanent assignments of users to the group, and then assign the group to a role as eligible for activation. If instead you assign a role permanently to a group and assign users to be eligible for group membership, it might take significant time for all the role's permissions to be activated and ready to use.
> [!NOTE]
> For privileged access groups that are used to elevate into Azure AD roles, we recommend that you require an approval process for eligible member assignments. Assignments that can be activated without approval might create a security risk from administrators who have a lower level of permissions. For example, the Helpdesk Administrator has permissions to reset an eligible user's password.
## Require different policies for each role assignable group
-Some organizations use tools like Azure AD business-to-business (B2B) collaboration to invite their partners as guests to their Azure AD organization. Instead of a single just-in-time policy for all assignments to a privileged role, you can create two different privileged access groups with their own policies. You can enforce less strict requirements for your trusted employees, and stricter requirements like approval workflow for your partners when they request activation into their assigned group.
+Some organizations use tools like Azure AD business-to-business (B2B) collaboration to invite their partners as guests to their Azure AD organization. Instead of a single just-in-time policy for all assignments to a privileged role, you can create two different privileged access groups with their own policies. You can enforce less strict requirements for your trusted employees, and stricter requirements like approval workflow for your partners when they request activation into their assigned role.
## Activate multiple role assignments in a single request
-With the privileged access groups preview, you can give workload-specific administrators quick access to multiple roles with a single just-in-time request. For example, your Tier 0 Office Admins might need just-in-time access to the Exchange Admin, Office Apps Admin, Teams Admin, and Search Admin roles to thoroughly investigate incidents daily. Before today it would require four consecutive requests, which are a process that takes some time. Instead, you can create a role assignable group called "Tier 0 Office Admins", assign it to each of the four roles previously mentioned (or any Azure AD built-in roles) and enable it for Privileged Access in the group's Activity section. Once enabled for privileged access, you can configure the just-in-time settings for members of the group and assign your admins and owners as eligible. When the admins elevate into the group, they'll become members of all four Azure AD roles.
+With the privileged access groups preview, you can give workload-specific administrators quick access to multiple roles with a single just-in-time request. For example, your Tier 0 Office Admins might need just-in-time access to the Exchange Admin, Office Apps Admin, Teams Admin, and Search Admin roles to thoroughly investigate incidents daily. You can create a role-assignable group called "Tier 0 Office Admins", make it eligible for assignment to the four roles previously mentioned (or any Azure AD built-in roles), and enable it for Privileged Access in the group's Activity section. Once enabled for privileged access, you can assign your admins and owners to the group. When the admins elevate the group into the roles, your staff will have permissions from all four Azure AD roles.
## Extend and renew group assignments
active-directory Pim Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-apis.md
Previously updated : 10/07/2021 Last updated : 04/18/2022
# Understand the Privileged Identity Management APIs
-You can perform Privileged Identity Management (PIM) tasks using the Microsoft Graph APIs for Azure Active Directory (Azure AD) roles and the Azure Resource Manager API for Azure resource roles (sometimes called Azure RBAC roles). This article describes important concepts for using the APIs for Privileged Identity Management.
+You can perform Privileged Identity Management (PIM) tasks using the Microsoft Graph APIs for Azure Active Directory (Azure AD) roles and the Azure Resource Manager API for Azure roles. This article describes important concepts for using the APIs for Privileged Identity Management.
For requests and other details about PIM APIs, check out:
- [PIM for Azure AD roles API reference](/graph/api/resources/unifiedroleeligibilityschedulerequest?view=graph-rest-beta&preserve-view=true)
- [PIM for Azure resource roles API reference](/rest/api/authorization/roleeligibilityschedulerequests)
-> [!IMPORTANT]
-> PIM APIs [!INCLUDE [PREVIEW BOILERPLATE](../../../includes/active-directory-develop-preview.md)]
## PIM API history
There have been several iterations of the PIM API over the past few years. You'll find some overlaps in functionality, but they don't represent a linear progression of versions.
-### Iteration 1 – only supports Azure AD roles, deprecating
+### Iteration 1 – Deprecated
-Under the /beta/privilegedRoles endpoint, Microsoft had a classic version of the PIM API which is no longer supported in most tenants. We are in the process of deprecating remaining access to this API on 05/31.
+Under the /beta/privilegedRoles endpoint, Microsoft had a classic version of the PIM API which only supported Azure AD roles and is no longer supported. Access to this API was deprecated in June 2021.
-### Iteration 2 – supports Azure AD roles and Azure resource roles
+### Iteration 2 – Supports Azure AD roles and Azure resource roles
Under the /beta/privilegedAccess endpoint, Microsoft supported both /aadRoles and /azureResources. This endpoint is still available in your tenant but Microsoft recommends against starting any new development with this API. This beta API will never be released to general availability and will be eventually deprecated.
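For new development, the `unifiedRoleEligibilityScheduleRequest` resource linked above is the recommended surface. The sketch below shows an illustrative request body for making a principal eligible for a role; all IDs are placeholders, and field names should be checked against the current Graph reference.

```python
# Illustrative body for POST /roleManagement/directory/roleEligibilityScheduleRequests
# (the unifiedRoleEligibilityScheduleRequest resource). All IDs are placeholders.
eligibility_request = {
    "action": "adminAssign",
    "justification": "Illustrative just-in-time eligibility request",
    "roleDefinitionId": "<role-definition-id>",  # placeholder
    "principalId": "<user-or-group-object-id>",  # placeholder
    "directoryScopeId": "/",                     # tenant-wide scope
    "scheduleInfo": {
        "startDateTime": "2022-04-18T00:00:00Z",
        "expiration": {"type": "afterDuration", "duration": "P365D"},
    },
}
```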
active-directory Pim How To Configure Security Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-configure-security-alerts.md
Previously updated : 06/30/2021 Last updated : 04/18/2022
Severity: **Low**
Severity: **Medium**

| | Description |
| --- | --- |
-| **Why do I get this alert?** | Accounts in a privileged role have not changed their password in the past 90 days. These accounts might be service or shared accounts that aren't being maintained and are vulnerable to attackers. |
+| **Why do I get this alert?** | This alert is no longer triggered based on the last password change date for an account. Instead, this alert is for accounts in a privileged role that haven't signed in during the past *n* days, where *n* is configurable between 1 and 365 days. These accounts might be service or shared accounts that aren't being maintained and are vulnerable to attackers. |
| **How to fix?** | Review the accounts in the list. If they no longer need access, remove them from their privileged roles. |
| **Prevention** | Ensure that accounts that are shared are rotating strong passwords when there is a change in the users that know the password. </br>Regularly review accounts with privileged roles using [access reviews](./pim-create-azure-ad-roles-and-resource-roles-review.md) and remove role assignments that are no longer needed. |
| **In-portal mitigation action** | Removes the account from their privileged role. |
active-directory Concept Provisioning Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-provisioning-logs.md
Use the following table to better understand how to resolve errors that you find
|ImportSkipped | When each user is evaluated, the system tries to import the user from the source system. This error commonly occurs when the user who's being imported is missing the matching property defined in your attribute mappings. Without a value present on the user object for the matching attribute, the system can't evaluate scoping, matching, or export changes. Note that the presence of this error does not indicate that the user is in scope, because you haven't yet evaluated scoping for the user.|
|EntrySynchronizationSkipped | The provisioning service has successfully queried the source system and identified the user. No further action was taken on the user and they were skipped. The user might have been out of scope, or the user might have already existed in the target system with no further changes required.|
|SystemForCrossDomainIdentityManagementMultipleEntriesInResponse| A GET request to retrieve a user or group received multiple users or groups in the response. The system expects to receive only one user or group in the response. [For example](../app-provisioning/use-scim-to-provision-users-and-groups.md#get-group), if you do a GET request to retrieve a group and provide a filter to exclude members, and your System for Cross-Domain Identity Management (SCIM) endpoint returns the members, you'll get this error.|
-|SystemForCrossDomainIdentityManagementServiceIncompatible|The Azure AD provisioning service is unable to parse the response from the third party application. Please work with the application developer to ensure that the SCIM server is compatible with the [Azure AD SCIM client](../app-provisioning/use-scim-to-provision-users-and-groups.md#understand-the-aad-scim-implementation).|
+|SystemForCrossDomainIdentityManagementServiceIncompatible|The Azure AD provisioning service is unable to parse the response from the third party application. Please work with the application developer to ensure that the SCIM server is compatible with the [Azure AD SCIM client](../app-provisioning/use-scim-to-provision-users-and-groups.md#understand-the-azure-ad-scim-implementation).|
|SchemaPropertyCanOnlyAcceptValue|The property in the target system can only accept one value, but the property in the source system has multiple. Please ensure that you map a single-valued attribute to the property that is throwing an error, update the value in the source to be single valued, or remove the attribute from the mappings.|

## Next steps
advisor Advisor Reference Cost Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-cost-recommendations.md
Learn more about [Managed Disk Snapshot - ManagedDiskSnapshot (Use Standard Stor
We've analyzed the usage patterns of your virtual machine over the past 7 days and identified virtual machines with low usage. While certain scenarios can result in low utilization by design, you can often save money by managing the size and number of virtual machines.
-Learn more about [Virtual machine - LowUsageVmV2 (Right-size or shutdown underutilized virtual machines)](./advisor-cost-recommendations.md#optimize-virtual-machine-spend-by-resizing-or-shutting-down-underutilized-instances).
+Learn more about [Virtual machine - LowUsageVmV2 (Right-size or shutdown underutilized virtual machines)](/azure/advisor/advisor-cost-recommendations#optimize-virtual-machine-spend-by-resizing-or-shutting-down-underutilized-instances).
### You have disks which have not been attached to a VM for more than 30 days. Please evaluate if you still need the disk.
advisor Advisor Reference Operational Excellence Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-operational-excellence-recommendations.md
Learn more about [Batch account - OldPool (Recreate your pool to get the latest
Your pool is using a deprecated internal component. Please delete and recreate your pool for improved stability and performance.
-Learn more about [Batch account - RecreatePool (Delete and recreate your pool to remove a deprecated internal component)](../batch/best-practices.md#pool-lifetime-and-billing).
+Learn more about [Batch account - RecreatePool (Delete and recreate your pool to remove a deprecated internal component)](/azure/batch/best-practices#pool-lifetime-and-billing).
### Upgrade to the latest API version to ensure your Batch account remains operational.
Learn more about [Batch account - RemoveA8_A11Pools (Delete and recreate your po
Your pool is using an image with an imminent expiration date. Please recreate the pool with a new image to avoid potential interruptions. A list of newer images is available via the ListSupportedImages API.
-Learn more about [Batch account - EolImage (Recreate your pool with a new image)](../batch/batch-pool-vm-sizes.md#supported-vm-images).
+Learn more about [Batch account - EolImage (Recreate your pool with a new image)](/azure/batch/batch-pool-vm-sizes#supported-vm-images).
## Cognitive Service
Learn more about [Kubernetes service - UpdateServicePrincipal (Update cluster's
Monitoring addon workspace is deleted. Correct issues to setup monitoring addon.
-Learn more about [Kubernetes service - MonitoringAddonWorkspaceIsDeleted (Monitoring addon workspace is deleted)](../azure-monitor/containers/container-insights-optout.md#azure-cli).
+Learn more about [Kubernetes service - MonitoringAddonWorkspaceIsDeleted (Monitoring addon workspace is deleted)](/azure/azure-monitor/containers/container-insights-optout#azure-cli).
### Deprecated Kubernetes API in 1.16 is found
Learn more about [Cosmos DB account - CosmosDBMigrateToContinuousBackup (Improve
We have detected that one or more of your alert rules have invalid queries specified in their condition section. Log alert rules are created in Azure Monitor and are used to run analytics queries at specified intervals. The results of the query determine if an alert needs to be triggered. Analytics queries may become invalid over time due to changes in referenced resources, tables, or commands. We recommend that you correct the query in the alert rule to prevent it from getting auto-disabled and ensure monitoring coverage of your resources in Azure.
-Learn more about [Alert Rule - ScheduledQueryRulesLogAlert (Repair your log alert rule)](../azure-monitor/alerts/alerts-troubleshoot-log.md#query-used-in-a-log-alert-isnt-valid).
+Learn more about [Alert Rule - ScheduledQueryRulesLogAlert (Repair your log alert rule)](/azure/azure-monitor/alerts/alerts-troubleshoot-log#query-used-in-a-log-alert-is-not-valid).
### Log alert rule was disabled

The alert rule was disabled by Azure Monitor as it was causing service issues. To enable the alert rule, contact support.
-Learn more about [Alert Rule - ScheduledQueryRulesRp (Log alert rule was disabled)](../azure-monitor/alerts/alerts-troubleshoot-log.md#query-used-in-a-log-alert-isnt-valid).
+Learn more about [Alert Rule - ScheduledQueryRulesRp (Log alert rule was disabled)](/azure/azure-monitor/alerts/alerts-troubleshoot-log#query-used-in-a-log-alert-is-not-valid).
## Key Vault
Learn more about [SQL virtual machine - UpgradeToFullMode (SQL IaaS Agent should
A region can support a maximum of 250 storage accounts per subscription. You have either already reached or are about to reach that limit. If you reach that limit, you will be unable to create any more storage accounts in that subscription/region combination. Please evaluate the recommended action below to avoid hitting the limit.
-Learn more about [Storage Account - StorageAccountScaleTarget (Prevent hitting subscription limit for maximum storage accounts)](../storage/blobs/storage-performance-checklist.md).
+Learn more about [Storage Account - StorageAccountScaleTarget (Prevent hitting subscription limit for maximum storage accounts)](/azure/storage/blobs/storage-performance-checklist#what-to-do-when-approaching-a-scalability-target).
### Update to newer releases of the Storage Java v12 SDK for better reliability.
advisor Advisor Reference Performance Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-performance-recommendations.md
Learn more about [AVS Private cloud - vSANCapacity (vSAN capacity utilization ha
Cache instances perform best when not running under high network bandwidth which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce network bandwidth or scale to a different size or sku with more capacity.
-Learn more about [Redis Cache Server - RedisCacheNetworkBandwidth (Improve your Cache and application performance when running with high network bandwidth)](../azure-cache-for-redis/cache-troubleshoot-server.md#server-side-bandwidth-limitation).
+Learn more about [Redis Cache Server - RedisCacheNetworkBandwidth (Improve your Cache and application performance when running with high network bandwidth)](/azure/azure-cache-for-redis/cache-troubleshoot-server#server-side-bandwidth-limitation).
### Improve your Cache and application performance when running with many connected clients
Learn more about [Redis Cache Server - RedisCacheConnectedClients (Improve your
Cache instances perform best when not running under high server load which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce the server load or scale to a different size or sku with more capacity.
-Learn more about [Redis Cache Server - RedisCacheServerLoad (Improve your Cache and application performance when running with high server load)](../azure-cache-for-redis/cache-troubleshoot-client.md#high-client-cpu-usage).
+Learn more about [Redis Cache Server - RedisCacheServerLoad (Improve your Cache and application performance when running with high server load)](/azure/azure-cache-for-redis/cache-troubleshoot-client#high-client-cpu-usage).
### Improve your Cache and application performance when running with high memory pressure

Cache instances perform best when not running under high memory pressure which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce used memory or scale to a different size or sku with more capacity.
-Learn more about [Redis Cache Server - RedisCacheUsedMemory (Improve your Cache and application performance when running with high memory pressure)](../azure-cache-for-redis/cache-troubleshoot-client.md#memory-pressure-on-redis-client).
+Learn more about [Redis Cache Server - RedisCacheUsedMemory (Improve your Cache and application performance when running with high memory pressure)](/azure/azure-cache-for-redis/cache-troubleshoot-client#memory-pressure-on-redis-client).
## Cognitive Service
Learn more about [Data explorer resource - ReduceCacheForAzureDataExplorerTables
Time to Live (TTL) affects how recent of a response a client will get when it makes a request to Azure Traffic Manager. Reducing the TTL value means that the client will be routed to a functioning endpoint faster in the case of a failover. Configure your TTL to 20 seconds to route traffic to a health endpoint as quickly as possible.
-Learn more about [Traffic Manager profile - FastFailOverTTL (Configure DNS Time to Live to 20 seconds)](../traffic-manager/traffic-manager-monitoring.md#endpoint-failover-and-recovery).
+Learn more about [Traffic Manager profile - FastFailOverTTL (Configure DNS Time to Live to 20 seconds)](/azure/traffic-manager/traffic-manager-monitoring#endpoint-failover-and-recovery).
### Configure DNS Time to Live to 60 seconds
Learn more about [SQL data warehouse - CreateTableStatisticsSqlDW (Create statis
We have detected distribution data skew greater than 15%. This can cause costly performance bottlenecks.
-Learn more about [SQL data warehouse - DataSkewSqlDW (Remove data skew to increase query performance)](../synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-distribute.md#how-to-tell-if-your-distribution-column-is-a-good-choice).
+Learn more about [SQL data warehouse - DataSkewSqlDW (Remove data skew to increase query performance)](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-distribute#how-to-tell-if-your-distribution-column-is-a-good-choice).
### Update statistics on table columns
Learn more about [SQL data warehouse - SqlDwIncreaseCacheCapacity (Scale up to o
We have detected that you had high tempdb utilization which can impact the performance of your workload.
-Learn more about [SQL data warehouse - SqlDwReduceTempdbContention (Scale up or update resource class to reduce tempdb contention with SQL Data Warehouse)](../synapse-analytics/sql-data-warehouse/sql-data-warehouse-manage-monitor.md#monitor-tempdb).
+Learn more about [SQL data warehouse - SqlDwReduceTempdbContention (Scale up or update resource class to reduce tempdb contention with SQL Data Warehouse)](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-manage-monitor#monitor-tempdb).
### Convert tables to replicated tables with SQL Data Warehouse
Learn more about [SQL data warehouse - SqlDwReplicateTable (Convert tables to re
We have detected that you can increase load throughput by splitting your compressed files that are staged in your storage account. A good rule of thumb is to split compressed files into 60 or more to maximize the parallelism of your load.
-Learn more about [SQL data warehouse - FileSplittingGuidance (Split staged files in the storage account to increase load performance)](../synapse-analytics/sql/data-loading-best-practices.md#prepare-data-in-azure-storage).
+Learn more about [SQL data warehouse - FileSplittingGuidance (Split staged files in the storage account to increase load performance)](/azure/synapse-analytics/sql/data-loading-best-practices#preparing-data-in-azure-storage).
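As a rough illustration of that rule of thumb, staged files can be split with standard tools before compression (filenames are placeholders, and GNU `split` is assumed; sixty files lines up with the 60 distributions in a dedicated SQL pool, which is why the guidance uses that number):

```shell
# Generate a stand-in for a large extract, split it into 60 line-based
# chunks (part_00 .. part_59), then compress each chunk for staging.
seq 1 60000 > large_input.csv
split -n l/60 -d -a 2 large_input.csv part_
gzip part_*
```

Each compressed chunk can then be loaded in parallel from the storage account.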
### Increase batch size when loading to maximize load throughput, data compression, and query performance

We have detected that you can increase load performance and throughput by increasing the batch size when loading into your database. You should consider using the COPY statement. If you are unable to use the COPY statement, consider increasing the batch size when using loading utilities such as the SQLBulkCopy API or BCP - a good rule of thumb is a batch size between 100K to 1M rows.
-Learn more about [SQL data warehouse - LoadBatchSizeGuidance (Increase batch size when loading to maximize load throughput, data compression, and query performance)](../synapse-analytics/sql/data-loading-best-practices.md#increase-batch-size-when-using-sqlbulkcopy-api-or-bcp).
+Learn more about [SQL data warehouse - LoadBatchSizeGuidance (Increase batch size when loading to maximize load throughput, data compression, and query performance)](/azure/synapse-analytics/sql/data-loading-best-practices#increase-batch-size-when-using-sqlbulkcopy-api-or-bcp).
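As a sketch of the COPY path mentioned above (table name, storage URL, and credential are illustrative placeholders, not values from this recommendation):

```sql
-- Load staged CSV files into a staging table in a single bulk operation.
COPY INTO dbo.FactStaging
FROM 'https://<account>.blob.core.windows.net/<container>/<folder>/'
WITH (
    FILE_TYPE = 'CSV',
    CREDENTIAL = (IDENTITY = 'Managed Identity')
);
```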
### Co-locate the storage account within the same region to minimize latency when loading

We have detected that you are loading from a region that is different from your SQL pool. You should consider loading from a storage account that is within the same region as your SQL pool to minimize latency when loading data.
-Learn more about [SQL data warehouse - ColocateStorageAccount (Co-locate the storage account within the same region to minimize latency when loading)](../synapse-analytics/sql/data-loading-best-practices.md#prepare-data-in-azure-storage).
+Learn more about [SQL data warehouse - ColocateStorageAccount (Co-locate the storage account within the same region to minimize latency when loading)](/azure/synapse-analytics/sql/data-loading-best-practices#preparing-data-in-azure-storage).
## Storage
Learn more about [App service - AppServiceMoveToPremiumV2 (Move your App Service
Your app has opened too many TCP/IP socket connections. Exceeding ephemeral TCP/IP port connection limits can cause unexpected connectivity issues for your apps.
-Learn more about [App service - AppServiceOutboundConnections (Check outbound connections from your App Service resource)](../app-service/app-service-best-practices.md#socketresources).
+Learn more about [App service - AppServiceOutboundConnections (Check outbound connections from your App Service resource)](/azure/app-service/app-service-best-practices#socketresources).
## Next steps
advisor Advisor Reference Reliability Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-reliability-recommendations.md
Learn more about [Api Management - TlsRenegotiationBlocked (SSL/TLS renegotiatio
Fragmentation and memory pressure can cause availability incidents during a failover or management operations. Increasing reservation of memory for fragmentation helps in reducing the cache failures when running under high memory pressure. Memory for fragmentation can be increased via the maxfragmentationmemory-reserved setting available in the advanced settings blade.
-Learn more about [Redis Cache Server - RedisCacheMemoryFragmentation (Availability may be impacted from high memory fragmentation. Increase fragmentation memory reservation to avoid potential impact.)](../azure-cache-for-redis/cache-configure.md#memory-policies).
+Learn more about [Redis Cache Server - RedisCacheMemoryFragmentation (Availability may be impacted from high memory fragmentation. Increase fragmentation memory reservation to avoid potential impact.)](/azure/azure-cache-for-redis/cache-configure#memory-policies).
## Compute
Learn more about [Virtual machine (classic) - EnableBackup (Enable Backups on yo
We have identified that you are using standard disks with your premium-capable Virtual Machines and we recommend you consider upgrading the standard disks to premium disks. For any Single Instance Virtual Machine using premium storage for all Operating System Disks and Data Disks, we guarantee you will have Virtual Machine Connectivity of at least 99.9%. Consider these factors when making your upgrade decision. The first is that upgrading requires a VM reboot and this process takes 3-5 minutes to complete. The second is if the VMs in the list are mission-critical production VMs, evaluate the improved availability against the cost of premium disks.
-Learn more about [Virtual machine - MigrateStandardStorageAccountToPremium (Upgrade the standard disks attached to your premium-capable VM to premium disks)](../virtual-machines/disks-types.md#premium-ssds).
+Learn more about [Virtual machine - MigrateStandardStorageAccountToPremium (Upgrade the standard disks attached to your premium-capable VM to premium disks)](/azure/virtual-machines/disks-types#premium-ssd).
### Enable virtual machine replication to protect your applications from regional outage
Learn more about [Virtual machine - UpgradeVMToManagedDisksWithoutAdditionalCost
Using IP Address based filtering has been identified as a vulnerable way to control outbound connectivity for firewalls. It is advised to use Service Tags as an alternative for controlling connectivity. We highly recommend the use of Service Tags, to allow connectivity to Azure Site Recovery services for the machines.
-Learn more about [Virtual machine - ASRUpdateOutboundConnectivityProtocolToServiceTags (Update your outbound connectivity protocol to Service Tags for Azure Site Recovery)](../site-recovery/azure-to-azure-about-networking.md#outbound-connectivity-using-service-tags).
+Learn more about [Virtual machine - ASRUpdateOutboundConnectivityProtocolToServiceTags (Update your outbound connectivity protocol to Service Tags for Azure Site Recovery)](/azure/site-recovery/azure-to-azure-about-networking#outbound-connectivity-using-service-tags).
### Use Managed Disks to improve data reliability
Learn more about [Cosmos DB account - CosmosDBSingleRegionProdAccounts (Add a se
We observed your account is throwing a TooManyRequests error with the 16500 error code. Enabling Server Side Retry (SSR) can help mitigate this issue for you.
-Learn more about [Cosmos DB account - CosmosDBMongoServerSideRetries (Enable Server Side Retry (SSR) on your Azure Cosmos DB's API for MongoDB account)](../cosmos-db/cassandr).
+Learn more about [Cosmos DB account - CosmosDBMongoServerSideRetries (Enable Server Side Retry (SSR) on your Azure Cosmos DB's API for MongoDB account)](/azure/cosmos-db/cassandra/prevent-rate-limiting-errors).
### Migrate your Azure Cosmos DB API for MongoDB account to v4.0 to save on query/storage costs and utilize new features
Learn more about [Application gateway - AppGateway (Upgrade your SKU or add more
The VPN gateway Basic SKU is designed for development or testing scenarios. Please move to a production SKU if you are using the VPN gateway for production purposes. The production SKUs offer a higher number of tunnels, BGP support, active-active, and custom IPsec/IKE policy in addition to higher stability and availability.
-Learn more about [Virtual network gateway - BasicVPNGateway (Move to production gateway SKUs from Basic gateways)](../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md#gwsku).
+Learn more about [Virtual network gateway - BasicVPNGateway (Move to production gateway SKUs from Basic gateways)](/azure/vpn-gateway/vpn-gateway-about-vpn-gateway-settings#gwsku).
### Add at least one more endpoint to the profile, preferably in another Azure region
Learn more about [ExpressRoute circuit - ExpressRouteGatewayE2EMonitoring (Imple
Try to avoid overriding the hostname when configuring Application Gateway. Having a different domain on the frontend of Application Gateway than the one which is used to access the backend can potentially lead to cookies or redirect URLs being broken. Note that this might not be the case in all situations and that certain categories of backends (like REST APIs) in general are less sensitive to this. Please make sure the backend is able to deal with this or update the Application Gateway configuration so the hostname does not need to be overwritten towards the backend. When used with App Service, attach a custom domain name to the Web App and avoid use of the *.azurewebsites.net host name towards the backend.
-Learn more about [Application gateway - AppGatewayHostOverride (Avoid hostname override to ensure site integrity)](../application-gateway/troubleshoot-app-service-redirection-app-service-url.md).
+Learn more about [Application gateway - AppGatewayHostOverride (Avoid hostname override to ensure site integrity)](/azure/application-gateway/troubleshoot-app-service-redirection-app-service-url#alternate-solution-use-a-custom-domain-name).
### Use ExpressRoute Global Reach to improve your design for disaster recovery
Learn more about [Search service - StandardServiceStorageQuota90percent (You are
After enabling Soft Delete, deleted data transitions to a soft deleted state instead of being permanently deleted. When data is overwritten, a soft deleted snapshot is generated to save the state of the overwritten data. You can configure the amount of time soft deleted data is recoverable before it permanently expires.
-Learn more about [Storage Account - StorageSoftDelete (Enable Soft Delete to protect your blob data)](../storage/blobs/soft-delete-blob-overview.md).
+Learn more about [Storage Account - StorageSoftDelete (Enable Soft Delete to protect your blob data)](https://aka.ms/softdelete).
### Use Managed Disks for storage accounts reaching capacity limit

We have identified that you are using Premium SSD Unmanaged Disks in Storage account(s) that are about to reach Premium Storage capacity limit. To avoid failures when the limit is reached, we recommend migrating to Managed Disks that do not have account capacity limit. This migration can be done through the portal in less than 5 minutes.
-Learn more about [Storage Account - StoragePremiumBlobQuotaLimit (Use Managed Disks for storage accounts reaching capacity limit)](../storage/common/scalability-targets-standard-account.md).
+Learn more about [Storage Account - StoragePremiumBlobQuotaLimit (Use Managed Disks for storage accounts reaching capacity limit)](/azure/storage/common/scalability-targets-standard-account#premium-performance-page-blob-storage).
## Web
Learn more about [Storage Account - StoragePremiumBlobQuotaLimit (Use Managed Di
Your App reached >90% CPU over the last couple of days. High CPU utilization can lead to runtime issues with your apps. To solve this, you could scale out your app.
-Learn more about [App service - AppServiceCPUExhaustion (Consider scaling out your App Service Plan to avoid CPU exhaustion)](../app-service/app-service-best-practices.md#CPUresources).
+Learn more about [App service - AppServiceCPUExhaustion (Consider scaling out your App Service Plan to avoid CPU exhaustion)](/azure/app-service/app-service-best-practices#CPUresources).
### Fix the backup database settings of your App Service resource

Your app's backups are consistently failing due to invalid DB configuration. You can find more details in backup history.
-Learn more about [App service - AppServiceFixBackupDatabaseSettings (Fix the backup database settings of your App Service resource)](../app-service/app-service-best-practices.md#appbackup).
+Learn more about [App service - AppServiceFixBackupDatabaseSettings (Fix the backup database settings of your App Service resource)](/azure/app-service/app-service-best-practices#appbackup).
### Consider scaling up your App Service Plan SKU to avoid memory exhaustion

The App Service Plan containing your app reached >85% memory allocated. High memory consumption can lead to runtime issues with your apps. Investigate which app in the App Service Plan is exhausting memory and scale up to a higher plan with more memory resources if needed.
-Learn more about [App service - AppServiceMemoryExhaustion (Consider scaling up your App Service Plan SKU to avoid memory exhaustion)](../app-service/app-service-best-practices.md#memoryresources).
+Learn more about [App service - AppServiceMemoryExhaustion (Consider scaling up your App Service Plan SKU to avoid memory exhaustion)](/azure/app-service/app-service-best-practices#memoryresources).
### Scale up your App Service resource to remove the quota limit
Learn more about [App service - AppServiceUseDeploymentSlots (Use deployment slo
Your app's backups are consistently failing due to invalid storage settings. You can find more details in backup history.
-Learn more about [App service - AppServiceFixBackupStorageSettings (Fix the backup storage settings of your App Service resource)](../app-service/app-service-best-practices.md#appbackup).
+Learn more about [App service - AppServiceFixBackupStorageSettings (Fix the backup storage settings of your App Service resource)](/azure/app-service/app-service-best-practices#appbackup).
### Move your App Service resource to Standard or higher and use deployment slots
aks Azure Disk Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-volume.md
metadata:
  name: pv-azuredisk
spec:
  capacity:
- storage: 100Gi
+ storage: 20Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
- storage: 100Gi
+ storage: 20Gi
  volumeName: pv-azuredisk
  storageClassName: managed-csi
```
Verify your *PersistentVolumeClaim* is created and bound to the *PersistentVolum
$ kubectl get pvc pvc-azuredisk

NAME            STATUS   VOLUME         CAPACITY   ACCESS MODES   STORAGECLASS   AGE
-pvc-azuredisk Bound pv-azuredisk 100Gi RWO 5s
+pvc-azuredisk Bound pv-azuredisk 20Gi RWO 5s
```

Create an *azure-disk-pod.yaml* file to reference your *PersistentVolumeClaim*. For example:
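A minimal pod spec along these lines mounts the claim (pod name, image, and mount path are illustrative):

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
    - name: mypod
      image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
      volumeMounts:
        - mountPath: /mnt/azure
          name: azure
  volumes:
    - name: azure
      persistentVolumeClaim:
        claimName: pvc-azuredisk
```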
aks Http Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/http-proxy.md
Some more complex solutions may require creating a chain of trust to establish s
## Limitations and other details

The following scenarios are **not** supported:
-- Monitoring addon
- Different proxy configurations per node pool
- Updating proxy settings post cluster creation
- User/Password authentication
For example, assuming a new file has been created with the base64 encoded string
az aks update -n $clusterName -g $resourceGroup --http-proxy-config aks-proxy-config-2.json
```
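For reference, a proxy configuration file of the shape `az aks update --http-proxy-config` expects might look like the following (endpoints and certificate value are placeholders):

```json
{
    "httpProxy": "http://myproxy.server.com:8080/",
    "httpsProxy": "https://myproxy.server.com:8080/",
    "noProxy": [
        "localhost",
        "127.0.0.1"
    ],
    "trustedCa": "<base64-encoded-certificate-string>"
}
```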
+## Monitoring Addon Configurations
+
+The following lists the supported and unsupported configurations for the monitoring addon.
+
+Supported configurations:
+
+ - Outbound proxy without authentication
+ - Outbound proxy with username & password authentication
+ - Outbound proxy with trusted cert for Log Analytics endpoint
+
+Unsupported configurations:
+
+ - Custom Metrics and Recommended alerts feature are not supported in Proxy with trusted cert
+ - Outbound proxy support with Azure Monitor Private Link Scope (AMPLS)
+
## Next steps

- For more on the network requirements of AKS clusters, see [control egress traffic for cluster nodes in AKS][aks-egress].
api-management How To Self Hosted Gateway On Kubernetes In Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-self-hosted-gateway-on-kubernetes-in-production.md
In this scenario, if the SSL certificate that's used by the Management endpoint
## Configuration backup
-Configure a local storage volume for the self-hosted gateway container, so it can persist a backup copy of the latest downloaded configuration. If connectivity is down, the storage volume can use the backup copy upon restart. The volume mount path must be <code>/apim/config</code>. See an example on [GitHub](https://github.com/Azure/api-management-self-hosted-gateway/blob/master/examples/self-hosted-gateway-with-configuration-backup.yaml).
+Configure a local storage volume for the self-hosted gateway container, so it can persist a backup copy of the latest downloaded configuration. If connectivity is down, the storage volume can use the backup copy upon restart. The volume mount path must be `/apim/config` and must be owned by group ID `1001`. See an example on [GitHub](https://github.com/Azure/api-management-self-hosted-gateway/blob/master/examples/self-hosted-gateway-with-configuration-backup.yaml).
To learn about storage in Kubernetes, see the [Kubernetes website](https://kubernetes.io/docs/concepts/storage/volumes/).
+To change ownership for a mounted path, see the `securityContext.fsGroup` setting on the [Kubernetes website](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod).
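A sketch of the relevant pod-spec fragment, assuming a volume named `config-backup` (all names here are illustrative):

```yaml
spec:
  securityContext:
    # Make mounted volumes group-owned by GID 1001, as required above
    fsGroup: 1001
  containers:
    - name: azure-api-management-gateway
      volumeMounts:
        - mountPath: /apim/config
          name: config-backup
  volumes:
    - name: config-backup
      persistentVolumeClaim:
        claimName: gateway-config-backup
```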
> [!NOTE] > To learn about self-hosted gateway behavior in the presence of a temporary Azure connectivity outage, see [Self-hosted gateway overview](self-hosted-gateway-overview.md#connectivity-to-azure).
application-gateway Configuration Http Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-http-settings.md
Azure Application Gateway uses gateway-managed cookies for maintaining user sess
This feature is useful when you want to keep a user session on the same server and when session state is saved locally on the server for a user session. If the application can't handle cookie-based affinity, you can't use this feature. To use it, make sure that the clients support cookies. > [!NOTE]
-> Some vulnerability scans may flag the Applicaton Gateway affinity cookie because the Secure or HttpOnly flags are not set. These scans do not take into account that the data in the cookie is generated using a one-way hash. The cookie does not contain any user information and is used purely for routing.
+> Some vulnerability scans may flag the Application Gateway affinity cookie because the Secure or HttpOnly flags are not set. These scans do not take into account that the data in the cookie is generated using a one-way hash. The cookie does not contain any user information and is used purely for routing.
The [Chromium browser](https://www.chromium.org/Home) [v80 update](https://chromiumdash.appspot.com/schedule) brought a mandate where HTTP cookies without [SameSite](https://tools.ietf.org/id/draft-ietf-httpbis-rfc6265bis-03.html#rfc.section.5.3.7) attribute have to be treated as SameSite=Lax. In the case of CORS (Cross-Origin Resource Sharing) requests, if the cookie has to be sent in a third-party context, it has to use *SameSite=None; Secure* attributes and it should be sent over HTTPS only. Otherwise, in an HTTP only scenario, the browser doesn't send the cookies in the third-party context. The goal of this update from Chrome is to enhance security and to avoid Cross-Site Request Forgery (CSRF) attacks.
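In a CORS scenario this means the affinity cookie has to arrive with attributes along these lines (the cookie value is an illustrative placeholder; Application Gateway v2 uses a second `ApplicationGatewayAffinityCORS` cookie for this purpose):

```http
Set-Cookie: ApplicationGatewayAffinityCORS=<one-way-hash>; SameSite=None; Secure
```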
application-gateway Multiple Site Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/multiple-site-overview.md
This ordering can be established by providing a 'Priority' field value to the re
The priority field only impacts the order of evaluation of a request routing rule; this won't change the order of evaluation of path-based rules within a `PathBasedRouting` request routing rule.
->[!NOTE]
->This feature is currently available only through [Azure PowerShell](tutorial-multiple-sites-powershell.md#add-priority-to-routing-rules) and [Azure CLI](tutorial-multiple-sites-cli.md#add-priority-to-routing-rules). Portal support is coming soon.
-
>[!NOTE]
>If you wish to use rule priority, you will have to specify rule-priority field values for all the existing request routing rules. Once the rule priority field is in use, any new routing rule that is created would also need to have a rule priority field value as part of its config.
application-gateway Overview V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/overview-v2.md
An Azure PowerShell script is available in the PowerShell gallery to help you mi
Depending on your requirements and environment, you can create a test Application Gateway using either the Azure portal, Azure PowerShell, or Azure CLI. - [Tutorial: Create an application gateway that improves web application access](tutorial-autoscale-ps.md)
+- [Learn module: Introduction to Azure Application Gateway](/learn/modules/intro-to-azure-application-gateway)
application-gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/overview.md
Depending on your requirements and environment, you can create a test Applicatio
- [Quickstart: Direct web traffic with Azure Application Gateway - Azure portal](quick-create-portal.md) - [Quickstart: Direct web traffic with Azure Application Gateway - Azure PowerShell](quick-create-powershell.md)-- [Quickstart: Direct web traffic with Azure Application Gateway - Azure CLI](quick-create-cli.md)
+- [Quickstart: Direct web traffic with Azure Application Gateway - Azure CLI](quick-create-cli.md)
+- [Learn module: Introduction to Azure Application Gateway](/learn/modules/intro-to-azure-application-gateway)
application-gateway Redirect Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/redirect-overview.md
A redirect type sets the response status code for the clients to understand the
Redirects from one listener to another listener. Listener redirection is commonly used to enable HTTP to HTTPS redirection.
+ When configuring redirects with a multi-site target listener, all the host names (with or without wildcard characters) that are defined as part of the source listener must also be part of the destination listener. This ensures that no traffic is dropped due to missing host names on the destination listener when configuring HTTP to HTTPS redirection.
+
- **Path-based redirection** This type of redirection enables redirection only on a specific site area, for example, redirecting HTTP to HTTPS requests for a shopping cart area denoted by /cart/\*.
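The host-name requirement for multi-site redirects can be thought of as a subset check: every host name on the source listener must appear on the destination listener. A hypothetical validation sketch (the host names are illustrative, not from the original):

```python
# Hypothetical check for the multi-site redirect requirement: every host
# name defined on the HTTP source listener must also exist on the HTTPS
# destination listener, otherwise some traffic would be dropped.
source_hosts = {"contoso.com", "*.fabrikam.com"}
destination_hosts = {"contoso.com", "*.fabrikam.com", "adatum.com"}

missing = source_hosts - destination_hosts
if missing:
    print(f"Redirect misconfigured; host names missing on destination: {missing}")
else:
    print("All source listener host names are present on the destination.")
```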
application-gateway Rewrite Http Headers Url https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/rewrite-http-headers-url.md
In the case of a URL redirect, Application Gateway sends a redirect response to
- Rewrites are not supported when the application gateway is configured to redirect the requests or to show a custom error page. - Header names can contain any alphanumeric characters and specific symbols as defined in [RFC 7230](https://tools.ietf.org/html/rfc7230#page-27). We don't currently support the underscore (\_) special character in header names. - Connection and upgrade headers cannot be rewritten.
+- Rewrites are not supported for 4xx and 5xx responses generated directly from Application Gateway
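The header-name limitation above (RFC 7230 token characters, minus the underscore) can be expressed as a small validity check. This is a hypothetical sketch, not part of Application Gateway itself:

```python
import re

# Hypothetical check mirroring the limitation above: header names must be
# RFC 7230 tokens (alphanumerics plus specific symbols), and Application
# Gateway additionally rejects the underscore character.
TOKEN_NO_UNDERSCORE = re.compile(r"^[!#$%&'*+\-.^`|~0-9A-Za-z]+$")

def is_rewritable_header_name(name: str) -> bool:
    return bool(TOKEN_NO_UNDERSCORE.match(name))

print(is_rewritable_header_name("X-Forwarded-For"))  # True
print(is_rewritable_header_name("X_Custom_Header"))  # False (underscore)
```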
## Next steps
attestation Azure Diagnostic Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/azure-diagnostic-monitoring.md
-# Set up diagnostics with a Trusted Platform Module (TPM) endpoint of Azure Attestation
+# Set up diagnostics with Microsoft Azure Attestation
This article helps you create and configure diagnostic settings to send platform metrics and platform logs to different destinations. [Platform logs](../azure-monitor/essentials/platform-logs-overview.md) in Azure, including the Azure Activity log and resource logs, provide detailed diagnostic and auditing information for Azure resources and the Azure platform that they depend on. [Platform metrics](../azure-monitor/essentials/data-platform-metrics.md) are collected by default and are stored in the Azure Monitor Metrics database. Before you begin, make sure you've [set up Azure Attestation with Azure PowerShell](quickstart-powershell.md).
-The Trusted Platform Module (TPM) endpoint service is enabled in the diagnostic settings and can be used to monitor activity. Set up [Azure Monitoring](../azure-monitor/overview.md) for the TPM service endpoint by using the following code.
+Azure Attestation is enabled in the diagnostic settings and can be used to monitor activity. Set up [Azure Monitoring](../azure-monitor/overview.md) for the service endpoint by using the following code.
```powershell
azure-cache-for-redis Cache Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-overview.md
Last updated 03/15/2022
Azure Cache for Redis provides an in-memory data store based on the [Redis](https://redis.io/) software. Redis improves the performance and scalability of an application that uses backend data stores heavily. It's able to process large volumes of application requests by keeping frequently accessed data in the server memory, which can be written to and read from quickly. Redis brings a critical low-latency and high-throughput data storage solution to modern applications.
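The pattern described above is commonly implemented as a cache-aside lookup: serve from memory on a hit, fall back to the backend store on a miss, then populate the cache. A minimal sketch, in which a plain dict stands in for the Redis instance (a real application would use a Redis client library):

```python
# Minimal cache-aside sketch. A plain dict stands in for Redis here;
# a real application would use a Redis client and the managed endpoint.
cache = {}

def slow_backend_lookup(key: str) -> str:
    # Placeholder for an expensive database query.
    return f"value-for-{key}"

def get_with_cache(key: str) -> str:
    if key in cache:                   # cache hit: served from memory
        return cache[key]
    value = slow_backend_lookup(key)   # cache miss: query the backend
    cache[key] = value                 # populate the cache for next time
    return value

get_with_cache("user:42")   # miss -> backend
get_with_cache("user:42")   # hit -> memory
```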
-Azure Cache for Redis offers both the Redis open-source (OSS Redis) and a commercial product from Redis Labs (Redis Enterprise) as a managed service. It provides secure and dedicated Redis server instances and full Redis API compatibility. The service is operated by Microsoft, hosted on Azure, and usable by any application within or outside of Azure.
+Azure Cache for Redis offers both the Redis open-source (OSS Redis) and a commercial product from Redis Inc. (Redis Enterprise) as a managed service. It provides secure and dedicated Redis server instances and full Redis API compatibility. The service is operated by Microsoft, hosted on Azure, and usable by any application within or outside of Azure.
Azure Cache for Redis can be used as a distributed data or content cache, a session store, a message broker, and more. It can be deployed as a standalone. Or, it can be deployed along with other Azure database services, such as Azure SQL or Cosmos DB.
Azure Cache for Redis is available in these tiers:
| Basic | An OSS Redis cache running on a single VM. This tier has no service-level agreement (SLA) and is ideal for development/test and non-critical workloads. | | Standard | An OSS Redis cache running on two VMs in a replicated configuration. | | Premium | High-performance OSS Redis caches. This tier offers higher throughput, lower latency, better availability, and more features. Premium caches are deployed on more powerful VMs compared to the VMs for Basic or Standard caches. |
-| Enterprise | High-performance caches powered by Redis Labs' Redis Enterprise software. This tier supports Redis modules including RediSearch, RedisBloom, and RedisTimeSeries. Also, it offers even higher availability than the Premium tier. |
-| Enterprise Flash | Cost-effective large caches powered by Redis Labs' Redis Enterprise software. This tier extends Redis data storage to non-volatile memory, which is cheaper than DRAM, on a VM. It reduces the overall per-GB memory cost. |
+| Enterprise | High-performance caches powered by Redis Inc.'s Redis Enterprise software. This tier supports Redis modules including RediSearch, RedisBloom, and RedisTimeSeries. Also, it offers even higher availability than the Premium tier. |
+| Enterprise Flash | Cost-effective large caches powered by Redis Inc.'s Redis Enterprise software. This tier extends Redis data storage to non-volatile memory, which is cheaper than DRAM, on a VM. It reduces the overall per-GB memory cost. |
### Feature comparison
Consider the following options when choosing an Azure Cache for Redis tier:
- **High availability**: Azure Cache for Redis provides multiple [high availability](cache-high-availability.md) options. It guarantees that a Standard, Premium, or Enterprise cache is available according to our [SLA](https://azure.microsoft.com/support/legal/sla/cache/v1_0/). The SLA only covers connectivity to the cache endpoints. The SLA doesn't cover protection from data loss. We recommend using the Redis data persistence feature in the Premium and Enterprise tiers to increase resiliency against data loss. - **Data persistence**: The Premium and Enterprise tiers allow you to persist the cache data to an Azure Storage account and a Managed Disk respectively. Underlying infrastructure issues might result in potential data loss. We recommend using the Redis data persistence feature in these tiers to increase resiliency against data loss. Azure Cache for Redis offers both RDB and AOF (preview) options. Data persistence can be enabled through Azure portal and CLI. For the Premium tier, see [How to configure persistence for a Premium Azure Cache for Redis](cache-how-to-premium-persistence.md). - **Network isolation**: Azure Private Link and Virtual Network (VNET) deployments provide enhanced security and traffic isolation for your Azure Cache for Redis. VNET allows you to further restrict access through network access control policies. For more information, see [Azure Cache for Redis with Azure Private Link](cache-private-link.md) and [How to configure Virtual Network support for a Premium Azure Cache for Redis](cache-how-to-premium-vnet.md).-- **Redis Modules**: Enterprise tiers support [RediSearch](https://docs.redislabs.com/latest/modules/redisearch/), [RedisBloom](https://docs.redislabs.com/latest/modules/redisbloom/) and [RedisTimeSeries](https://docs.redislabs.com/latest/modules/redistimeseries/). These modules add new data types and functionality to Redis.
+- **Redis Modules**: Enterprise tiers support [RediSearch](https://docs.redis.com/latest/modules/redisearch/), [RedisBloom](https://docs.redis.com/latest/modules/redisbloom/) and [RedisTimeSeries](https://docs.redis.com/latest/modules/redistimeseries/). These modules add new data types and functionality to Redis.
You can scale your cache from the Basic tier up to Premium after it has been created. Scaling down to a lower tier isn't supported currently. For step-by-step scaling instructions, see [How to Scale Azure Cache for Redis](cache-how-to-scale.md) and [How to automate a scaling operation](cache-how-to-scale.md#how-to-automate-a-scaling-operation). ### Special considerations for Enterprise tiers
-The Enterprise tiers rely on Redis Enterprise, a commercial variant of Redis from Redis Labs. Customers obtain and pay for a license to this software through an Azure Marketplace offer. Azure Cache for Redis manages the license acquisition so that you won't have to do it separately. To purchase in the Azure Marketplace, you must have the following prerequisites:
+The Enterprise tiers rely on Redis Enterprise, a commercial variant of Redis from Redis Inc. Customers obtain and pay for a license to this software through an Azure Marketplace offer. Azure Cache for Redis manages the license acquisition so that you won't have to do it separately. To purchase in the Azure Marketplace, you must have the following prerequisites:
- Your Azure subscription has a valid payment instrument. Azure credits or free MSDN subscriptions aren't supported. - Your organization allows [Azure Marketplace purchases](../cost-management-billing/manage/ea-azure-marketplace.md#enabling-azure-marketplace-purchases).-- If you use a private Marketplace, it must contain the Redis Labs Enterprise offer.
+- If you use a private Marketplace, it must contain the Redis Inc. Enterprise offer.
> [!IMPORTANT] > Azure Cache for Redis Enterprise requires standard network Load Balancers that are charged separately from cache instances themselves. For more information, see [Load Balancer pricing](https://azure.microsoft.com/pricing/details/load-balancer/).
azure-monitor Agent Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-linux.md
The Log Analytics agent for Linux is provided in a self-extracting and installab
> [!NOTE] > For Azure VMs, we recommend you install the agent on them using the [Azure Log Analytics VM extension](../../virtual-machines/extensions/oms-linux.md) for Linux. ++ 1. [Download](https://github.com/microsoft/OMS-Agent-for-Linux#azure-install-guide) and transfer the appropriate bundle (x64 or x86) to your Linux VM or physical computer, using scp/sftp. 2. Install the bundle by using the `--install` argument. To onboard to a Log Analytics workspace during installation, provide the `-w <WorkspaceID>` and `-s <workspaceKey>` parameters copied earlier.
The Log Analytics agent for Linux is provided in a self-extracting and installab
>[!NOTE] >You need to use the `--upgrade` argument if any dependent packages such as omi, scx, omsconfig or their older versions are installed, as would be the case if the System Center Operations Manager agent for Linux is already installed.
- ```
- sudo sh ./omsagent-*.universal.x64.sh --install -w <workspace id> -s <shared key>
- ```
+> [!NOTE]
+> Because the [Container Monitoring solution](../containers/containers.md) is being retired, the following documentation uses the optional setting `--skip-docker-provider-install` to disable the Container Monitoring data collection.
+
+ ```
+ sudo sh ./omsagent-*.universal.x64.sh --install -w <workspace id> -s <shared key> --skip-docker-provider-install
+ ```
3. To configure the Linux agent to install and connect to a Log Analytics workspace through a Log Analytics gateway, run the following command providing the proxy, workspace ID, and workspace key parameters. This configuration can be specified on the command line by including `-p [protocol://][user:password@]proxyhost[:port]`. The *proxyhost* property accepts a fully qualified domain name or IP address of the Log Analytics gateway server.
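The proxy string format above, `[protocol://][user:password@]proxyhost[:port]`, can be decomposed with a small parser. This is a hypothetical illustration (the proxy host and credentials are made up), not part of the agent installer:

```python
import re

# Hypothetical parser for the proxy setting format described above:
# [protocol://][user:password@]proxyhost[:port]
PROXY_RE = re.compile(
    r"^(?:(?P<protocol>[a-z]+)://)?"
    r"(?:(?P<user>[^:@]+):(?P<password>[^@]+)@)?"
    r"(?P<host>[^:@/]+)"
    r"(?::(?P<port>\d+))?$"
)

m = PROXY_RE.match("https://proxyuser:secret@gateway.contoso.com:8080")
print(m.group("protocol"), m.group("host"), m.group("port"))
```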
sudo sh ./omsagent-*.universal.x64.sh --extract
Upgrading from a previous version, starting with version 1.0.0-47, is supported in each release. Perform the installation with the `--upgrade` parameter to upgrade all components of the agent to the latest version.
+> [!NOTE]
+> During the upgrade, you'll see the warning message "docker provider package installation skipped" because the `--skip-docker-provider-install` flag is set. If you're installing over an existing omsagent installation and want to remove the docker provider, first purge the existing installation and then install using the `--skip-docker-provider-install` flag.
++ ## Cache information Data from the Log Analytics agent for Linux is cached on the local machine at *%STATE_DIR_WS%/out_oms_common*.buffer* before it's sent to Azure Monitor. Custom log data is buffered in *%STATE_DIR_WS%/out_oms_blob*.buffer*. The path may be different for some [solutions and data types](https://github.com/microsoft/OMS-Agent-for-Linux/search?utf8=%E2%9C%93&q=+buffer_path&type=).
azure-monitor Azure Functions Supported Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-functions-supported-features.md
Title: Azure Application Insights - Azure Functions Supported Features
description: Application Insights Supported Features for Azure Functions Last updated 4/23/2019- ms.devlang: csharp
azure-monitor Codeless Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/codeless-overview.md
As we're adding new integrations, the auto-instrumentation capability matrix bec
Application monitoring on Azure App Service on Windows is available for **[ASP.NET](./azure-web-apps-net.md)** (enabled by default), **[ASP.NET Core](./azure-web-apps-net-core.md)**, **[Java](./azure-web-apps-java.md)** (in public preview), and **[Node.js](./azure-web-apps-nodejs.md)** applications. To monitor a Python app, add the [SDK](./opencensus-python.md) to your code. > [!NOTE]
-> For Windows, application monitoring is currently available for code-based/managed services on App Service. Monitoring for apps on Windows Containers on App Service is not yet supported through the integration with Application Insights.
+> Application monitoring for apps on Windows Containers on App Service [is in public preview for .NET Core, .NET Framework, and Java](https://azure.github.io/AppService/2022/04/11/windows-containers-app-insights-preview.html).
### Linux You can enable monitoring for **[Java](./azure-web-apps-java.md?)**, **[Node.js](./azure-web-apps-nodejs.md?tabs=linux)**, and **[ASP.NET Core](./azure-web-apps-net-core.md?tabs=linux)(Preview)** apps running on Linux in App Service through the portal.
azure-monitor Correlation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/correlation.md
Title: Azure Application Insights telemetry correlation | Microsoft Docs
description: Application Insights telemetry correlation Last updated 06/07/2019- ms.devlang: csharp, java, javascript, python
azure-monitor Custom Data Correlation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/custom-data-correlation.md
Title: Azure Application Insights | Microsoft Docs
description: Correlate data from Application Insights to other datasets, such as data enrichment or lookup tables, non-Application Insights data sources, and custom data. Last updated 08/08/2018- # Correlating Application Insights data with custom data sources
azure-monitor Custom Operations Tracking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/custom-operations-tracking.md
ms.devlang: csharp Last updated 11/26/2019- # Track custom operations with Application Insights .NET SDK
azure-monitor Data Model Context https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-context.md
Title: Azure Application Insights Telemetry Data Model - Telemetry Context | Mic
description: Application Insights telemetry context data model Last updated 05/15/2017- # Telemetry context: Application Insights data model
azure-monitor Data Model Dependency Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-dependency-telemetry.md
Title: Azure Monitor Application Insights Dependency Data Model
description: Application Insights data model for dependency telemetry Last updated 04/17/2017- # Dependency telemetry: Application Insights data model
azure-monitor Data Model Event Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-event-telemetry.md
Title: Azure Application Insights Telemetry Data Model - Event Telemetry | Micro
description: Application Insights data model for event telemetry Last updated 04/25/2017- # Event telemetry: Application Insights data model
azure-monitor Data Model Exception Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-exception-telemetry.md
Title: Azure Application Insights Exception Telemetry Data model
description: Application Insights data model for exception telemetry Last updated 04/25/2017- # Exception telemetry: Application Insights data model
azure-monitor Data Model Metric Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-metric-telemetry.md
Title: Data model for metric telemetry - Azure Application Insights
description: Application Insights data model for metric telemetry Last updated 04/25/2017- # Metric telemetry: Application Insights data model
azure-monitor Data Model Request Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-request-telemetry.md
Title: Data model for request telemetry - Azure Application Insights
description: Application Insights data model for request telemetry Last updated 01/07/2019- # Request telemetry: Application Insights data model
azure-monitor Data Model Trace Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-trace-telemetry.md
Title: Azure Application Insights Data Model - Trace Telemetry
description: Application Insights data model for trace telemetry Last updated 04/25/2017- # Trace telemetry: Application Insights data model
azure-monitor Data Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model.md
ibiza Last updated 10/14/2019- # Application Insights telemetry data model
azure-monitor Distributed Tracing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/distributed-tracing.md
description: Provides information about Microsoft's support for distributed trac
Last updated 09/17/2018- # What is Distributed Tracing?
azure-monitor Opencensus Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python.md
Title: Monitor Python applications with Azure Monitor | Microsoft Docs
description: Provides instructions to wire up OpenCensus Python with Azure Monitor Last updated 10/12/2021- ms.devlang: python
azure-monitor Pre Aggregated Metrics Log Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/pre-aggregated-metrics-log-metrics.md
Title: Log-based and pre-aggregated metrics in Azure Application Insights | Micr
description: Why to use log-based versus pre-aggregated metrics in Azure Application Insights Last updated 09/18/2018- # Log-based and pre-aggregated metrics in Application Insights
azure-monitor Proactive Arm Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/proactive-arm-config.md
Title: Smart detection rule settings - Azure Application Insights
description: Automate management and configuration of Azure Application Insights smart detection rules with Azure Resource Manager Templates Last updated 02/14/2021- # Manage Application Insights smart detection rules using Azure Resource Manager templates
azure-monitor Proactive Email Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/proactive-email-notification.md
Title: Smart Detection notification change - Azure Application Insights
description: Change to the default notification recipients from Smart Detection. Smart Detection lets you monitor application traces with Azure Application Insights for unusual patterns in trace telemetry. Last updated 02/14/2021- # Smart Detection e-mail notification change
azure-monitor Proactive Performance Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/proactive-performance-diagnostics.md
Title: Smart detection - performance anomalies | Microsoft Docs
description: Smart detection analyzes your app telemetry and warns you of potential problems. This feature needs no setup. Last updated 05/04/2017- # Smart detection - Performance Anomalies
azure-monitor Profiler Aspnetcore Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/profiler-aspnetcore-linux.md
ms.devlang: csharp Last updated 02/23/2018- # Profile ASP.NET Core Azure Linux web apps with Application Insights Profiler
azure-monitor Profiler Bring Your Own Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/profiler-bring-your-own-storage.md
Title: Configure BYOS (Bring Your Own Storage) for Profiler & Snapshot Debugger
description: Configure BYOS (Bring Your Own Storage) for Profiler & Snapshot Debugger Last updated 01/14/2021- # Configure Bring Your Own Storage (BYOS) for Application Insights Profiler and Snapshot Debugger
azure-monitor Profiler Cloudservice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/profiler-cloudservice.md
description: Enable Application Insights Profiler for Azure Cloud Services.
Last updated 08/06/2018- # Profile live Azure Cloud Services with Application Insights
azure-monitor Profiler Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/profiler-overview.md
Title: Profile production apps in Azure with Application Insights Profiler
description: Identify the hot path in your web server code with a low-footprint profiler. Last updated 08/06/2018- # Profile production applications in Azure with Application Insights
azure-monitor Profiler Servicefabric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/profiler-servicefabric.md
description: Enable Profiler for a Service Fabric application
Last updated 08/06/2018- # Profile live Azure Service Fabric applications with Application Insights
azure-monitor Profiler Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/profiler-settings.md
Title: Use the Azure Application Insights Profiler settings pane | Microsoft Doc
description: See Profiler status and start profiling sessions Last updated 12/08/2021- # Configure Application Insights Profiler
azure-monitor Profiler Trackrequests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/profiler-trackrequests.md
description: Write code to track requests with Application Insights so you can g
Last updated 08/06/2018- # Write code to track requests with Application Insights
azure-monitor Profiler Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/profiler-troubleshooting.md
Title: Troubleshoot problems with Azure Application Insights Profiler
description: This article presents troubleshooting steps and information to help developers enable and use Application Insights Profiler. Last updated 08/06/2018- # Troubleshoot problems enabling or viewing Application Insights Profiler
azure-monitor Profiler Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/profiler-vm.md
Title: Profile web apps on an Azure VM - Application Insights Profiler
description: Profile web apps on an Azure VM by using Application Insights Profiler. Last updated 11/08/2019- # Profile web apps running on an Azure virtual machine or a virtual machine scale set by using Application Insights Profiler
azure-monitor Snapshot Debugger Appservice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/snapshot-debugger-appservice.md
Title: Enable Snapshot Debugger for .NET apps in Azure App Service | Microsoft D
description: Enable Snapshot Debugger for .NET apps in Azure App Service Last updated 03/26/2019- # Enable Snapshot Debugger for .NET apps in Azure App Service
azure-monitor Snapshot Debugger Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/snapshot-debugger-troubleshoot.md
Title: Troubleshoot Azure Application Insights Snapshot Debugger
description: This article presents troubleshooting steps and information to help developers enable and use Application Insights Snapshot Debugger. Last updated 03/07/2019- # <a id="troubleshooting"></a> Troubleshoot problems enabling Application Insights Snapshot Debugger or viewing snapshots
azure-monitor Snapshot Debugger Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/snapshot-debugger-upgrade.md
Title: Upgrading Azure Application Insights Snapshot Debugger
description: How to upgrade Snapshot Debugger for .NET apps to the latest version on Azure App Services, or via Nuget packages Last updated 03/28/2019- # Upgrading the Snapshot Debugger
azure-monitor Snapshot Debugger Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/snapshot-debugger-vm.md
Title: Enable Snapshot Debugger for .NET apps in Azure Service Fabric, Cloud Ser
description: Enable Snapshot Debugger for .NET apps in Azure Service Fabric, Cloud Service, and Virtual Machines Last updated 03/07/2019- # Enable Snapshot Debugger for .NET apps in Azure Service Fabric, Cloud Service, and Virtual Machines
azure-monitor Telemetry Channels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/telemetry-channels.md
Last updated 05/14/2019 ms.devlang: csharp - # Telemetry channels in Application Insights
azure-monitor Container Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-overview.md
You can access Container insights two ways, from Azure Monitor or directly from
![Overview of methods to access Container insights](./media/container-insights-overview/azmon-containers-experience.png)
-If you are interested in monitoring and managing your Docker and Windows container hosts running outside of AKS to view configuration, audit, and resource utilization, see the [Container Monitoring solution](./containers.md).
- ## Next steps To begin monitoring your Kubernetes cluster, review [How to enable Container insights](container-insights-onboard.md) to understand the requirements and available methods to enable monitoring.
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
description: List of metrics available for each resource type with Azure Monitor
Previously updated : 03/03/2022 Last updated : 04/12/2022
The Azure Monitor agent replaces the Azure Diagnostics extension and Log Analyti
This latest update adds a new column and reorders the metrics to be alphabetical. The additional information means that the tables might have a horizontal scroll bar at the bottom, depending on the width of your browser window. If you seem to be missing information, use the scroll bar to see the entirety of the table. + ## microsoft.aadiam/azureADMetrics |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|UnusableNodeCount|No|Unusable Node Count|Count|Total|Number of unusable nodes|No Dimensions| |WaitingForStartTaskNodeCount|No|Waiting For Start Task Node Count|Count|Total|Number of nodes waiting for the Start Task to complete|No Dimensions| + ## Microsoft.BatchAI/workspaces |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|Unusable Cores|Yes|Unusable Cores|Count|Average|Number of unusable cores|Scenario, ClusterName| |Unusable Nodes|Yes|Unusable Nodes|Count|Average|Number of unusable nodes|Scenario, ClusterName| + ## microsoft.bing/accounts |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|TotalCalls|Yes|Total Calls|Count|Total|Total number of calls|ApiName, ServingRegion, StatusCode| |TotalErrors|Yes|Total Errors|Count|Total|Number of calls with any error (HTTP status code 4xx or 5xx)|ApiName, ServingRegion, StatusCode| + ## Microsoft.Blockchain/blockchainMembers |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|RequestHandled|Yes|Handled Requests|Count|Total|Handled Requests|Node| |StorageUsage|Yes|Storage Usage|Bytes|Average|Storage Usage|Node| + ## microsoft.botservice/botservices |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
This latest update adds a new column and reorders the metrics to be alphabetical
|connectedclients7|Yes|Connected Clients (Shard 7)|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions| |connectedclients8|Yes|Connected Clients (Shard 8)|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions| |connectedclients9|Yes|Connected Clients (Shard 9)|Count|Maximum|The number of client connections to the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
-|errors|Yes|Errors|Count|Maximum|The number errors that occurred on the cache. For more details, see https://aka.ms/redis/metrics.|ShardId, ErrorType|
+|errors|Yes|Errors|Count|Maximum|The number of errors that occurred on the cache. For more details, see https://aka.ms/redis/metrics.|ShardId, ErrorType|
|evictedkeys|Yes|Evicted Keys|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|ShardId|
|evictedkeys0|Yes|Evicted Keys (Shard 0)|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
|evictedkeys1|Yes|Evicted Keys (Shard 1)|Count|Total|The number of items evicted from the cache. For more details, see https://aka.ms/redis/metrics.|No Dimensions|
|errors|Yes|Errors|Count|Maximum||InstanceId, ErrorType|
|evictedkeys|Yes|Evicted Keys|Count|Total||No Dimensions|
|expiredkeys|Yes|Expired Keys|Count|Total||No Dimensions|
+|geoReplicationHealthy|Yes|Geo Replication Healthy|Count|Maximum||No Dimensions|
|getcommands|Yes|Gets|Count|Total||No Dimensions|
|operationsPerSecond|Yes|Operations Per Second|Count|Maximum||No Dimensions|
|percentProcessorTime|Yes|CPU|Percent|Maximum||InstanceId|
|TableEntityCount|Yes|Table Entity Count|Count|Average|The number of table entities in the storage account's Table service.|No Dimensions|
|Transactions|Yes|Transactions|Count|Total|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests that produced errors. Use the ResponseType dimension for the number of each type of response.|ResponseType, GeoType, ApiName, Authentication|

+ ## Microsoft.Cloudtest/hostedpools

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
|TotalRead|No|TotalRead|BytesPerSecond|Average|The total lustre file system read per second|filesystem_name, category, system|
|TotalWrite|No|TotalWrite|BytesPerSecond|Average|The total lustre file system write per second|filesystem_name, category, system|

+ ## Microsoft.CognitiveServices/accounts

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
|APIRequestAuthentication|No|Authentication API Requests|Count|Count|Count of all requests against the Communication Services Authentication endpoint.|Operation, StatusCode, StatusCodeClass|
|APIRequestChat|Yes|Chat API Requests|Count|Count|Count of all requests against the Communication Services Chat endpoint.|Operation, StatusCode, StatusCodeClass|
|APIRequestNetworkTraversal|No|Network Traversal API Requests|Count|Count|Count of all requests against the Communication Services Network Traversal endpoint.|Operation, StatusCode, StatusCodeClass|
-|APIRequestSMS|Yes|SMS API Requests|Count|Count|Count of all requests against the Communication Services SMS endpoint.|Operation, StatusCode, StatusCodeClass, ErrorCode|
+|APIRequestSMS|Yes|SMS API Requests|Count|Count|Count of all requests against the Communication Services SMS endpoint.|Operation, StatusCode, StatusCodeClass, ErrorCode, NumberType|
## Microsoft.Compute/cloudServices
|Network Out Total|Yes|Network Out Total|Bytes|Total|The number of bytes out on all network interfaces by the Virtual Machine(s) (Outgoing Traffic)|RoleInstanceId, RoleId|
|Percentage CPU|Yes|Percentage CPU|Percent|Average|The percentage of allocated compute units that are currently in use by the Virtual Machine(s)|RoleInstanceId, RoleId|

+ ## microsoft.compute/disks

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
|Composite Disk Write Bytes/sec|No|Disk Write Bytes/sec (Preview)|Bytes|Average|Bytes/sec written to disk during monitoring period. Note that this metric is in preview and is subject to change before becoming generally available.|No Dimensions|
|Composite Disk Write Operations/sec|No|Disk Write Operations/sec (Preview)|Bytes|Average|Number of write IOs performed on a disk during monitoring period. Note that this metric is in preview and is subject to change before becoming generally available.|No Dimensions|

+ ## Microsoft.Compute/virtualMachines

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
|VM Uncached IOPS Consumed Percentage|Yes|VM Uncached IOPS Consumed Percentage|Percent|Average|Percentage of uncached disk IOPS consumed by the VM|No Dimensions|
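The Aggregation Type column in these tables (Average, Total, Maximum, Count, and so on) names how raw metric samples are collapsed over a time grain. A minimal sketch of what each aggregation computes, using illustrative sample values — the function and data below are not part of Azure Monitor:

```python
# Illustrative only: what the "Aggregation Type" column means for raw samples.

def aggregate(samples, kind):
    """Collapse raw metric samples into one value per the named aggregation type."""
    if kind == "Average":
        return sum(samples) / len(samples)
    if kind == "Total":        # shown as "Sum" in some tables
        return sum(samples)
    if kind == "Maximum":
        return max(samples)
    if kind == "Count":        # number of samples, regardless of their values
        return len(samples)
    raise ValueError(f"unknown aggregation type: {kind}")

# e.g. per-minute "Percentage CPU" samples over a 5-minute window
cpu = [12.0, 30.0, 18.0, 40.0, 25.0]
print(aggregate(cpu, "Average"))  # 25.0
print(aggregate(cpu, "Maximum"))  # 40.0
print(aggregate(cpu, "Count"))    # 5
```

Note that Count counts occurrences rather than summing values, which is why error-style metrics often pair a Count aggregation with dimensions such as ErrorType.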
+## Microsoft.ConnectedCache/CacheNodes
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+||||||||
+|egressbps|Yes|Egress Mbps|BitsPerSecond|Average|Egress Throughput|cachenodeid|
+|hitRatio|Yes|Hit Ratio|Percent|Average|Hit Ratio|cachenodeid|
+|hits|Yes|Hits|Count|Count|Count of hits|cachenodeid|
+|hitsbps|Yes|Hit Mbps|BitsPerSecond|Average|Hit Throughput|cachenodeid|
+|misses|Yes|Misses|Count|Count|Count of misses|cachenodeid|
+|missesbps|Yes|Miss Mbps|BitsPerSecond|Average|Miss Throughput|cachenodeid|
+
+## Microsoft.ConnectedVehicle/platformAccounts

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
|MongoRequestCharge|Yes|Mongo Request Charge|Count|Total|Mongo Request Units Consumed|DatabaseName, CollectionName, Region, CommandName, ErrorCode, Status|
|MongoRequests|Yes|Mongo Requests|Count|Count|Number of Mongo Requests Made|DatabaseName, CollectionName, Region, CommandName, ErrorCode, Status|
|NormalizedRUConsumption|No|Normalized RU Consumption|Percent|Maximum|Max RU consumption percentage per minute|CollectionName, DatabaseName, Region, PartitionKeyRangeId, CollectionRid|
+|OfflineRegion|No|Region Offlined|Count|Count|Region Offlined|Region, StatusCode, Role, OperationName|
+|OnlineRegion|No|Region Onlined|Count|Count|Region Onlined|Region, StatusCode, Role, OperationName|
|ProvisionedThroughput|No|Provisioned Throughput|Count|Maximum|Provisioned Throughput|DatabaseName, CollectionName|
|RegionFailover|Yes|Region Failed Over|Count|Count|Region Failed Over|No Dimensions|
|RemoveRegion|Yes|Region Removed|Count|Count|Region Removed|Region|
|NormalizedEvent|Yes|Number of Normalized Messages|Count|Sum|The total number of mapped normalized values output from the normalization stage of the Azure IoT Connector for FHIR.|Operation, ResourceName|
|TotalErrors|Yes|Total Error Count|Count|Sum|The total number of errors logged by the Azure IoT Connector for FHIR|Name, Operation, ErrorType, ErrorSeverity, ResourceName|

+ ## microsoft.hybridnetwork/networkfunctions

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|HyperVVirtualProcessorUtilization|Yes|Average CPU Utilization|Percent|Average|Total average percentage of virtual CPU utilization at one minute interval. The total number of virtual CPU is based on user configured value in SKU definition. Further filter can be applied based on RoleName defined in SKU.|InstanceName|
+|HyperVVirtualProcessorUtilization|Yes|Average CPU Utilization|Percent|Average|Total average percentage of virtual CPU utilization at one minute interval. The total number of virtual CPU is based on user configured value in SKU definition. Further filter can be applied based on RoleName defined in SKU.|InstanceName|
+ ## microsoft.insights/autoscalesettings
|ServiceApiLatency|Yes|Overall Service Api Latency|MilliSeconds|Average|Overall latency of service api requests|ActivityType, ActivityName, StatusCode, StatusCodeClass|
|ServiceApiResult|Yes|Total Service Api Results|Count|Count|Number of total service api results|ActivityType, ActivityName, StatusCode, StatusCodeClass|

+ ## microsoft.kubernetes/connectedClusters

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
|capacity_cpu_cores|Yes|Total number of cpu cores in a connected cluster|Count|Total|Total number of cpu cores in a connected cluster|No Dimensions|

+ ## Microsoft.Kusto/Clusters

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
|ContentKeyPolicyQuotaUsedPercentage|Yes|Content Key Policy quota used percentage|Percent|Average|Content Key Policy used percentage in current media service account|No Dimensions|
|JobQuota|Yes|Job quota|Count|Average|The Job quota for the current media service account.|No Dimensions|
|JobsScheduled|Yes|Jobs Scheduled|Count|Average|The number of Jobs in the Scheduled state. Counts on this metric only reflect jobs submitted through the v3 API. Jobs submitted through the v2 (Legacy) API are not counted.|No Dimensions|
+|KeyDeliveryRequests|No|Key request time|Count|Average|The key delivery request status and latency in milliseconds for the current Media Service account.|KeyType, HttpStatusCode|
|MaxChannelsAndLiveEventsCount|Yes|Max live event quota|Count|Average|The maximum number of live events allowed in the current media services account|No Dimensions|
|MaxRunningChannelsAndLiveEventsCount|Yes|Max running live event quota|Count|Average|The maximum number of running live events allowed in the current media services account|No Dimensions|
|RunningChannelsAndLiveEventsCount|Yes|Running live event count|Count|Average|The total number of running live events in the current media services account|No Dimensions|
||||||||
|ApplicationGatewayTotalTime|No|Application Gateway Total Time|MilliSeconds|Average|Average time that it takes for a request to be processed and its response to be sent. This is calculated as average of the interval from the time when Application Gateway receives the first byte of an HTTP request to the time when the response send operation finishes. It's important to note that this usually includes the Application Gateway processing time, time that the request and response packets are traveling over the network and the time the backend server took to respond.|Listener|
|AvgRequestCountPerHealthyHost|No|Requests per minute per Healthy Host|Count|Average|Average request count per minute per healthy backend host in a pool|BackendSettingsPool|
+|AzwafBotProtection|Yes|WAF Bot Protection Matches|Count|Total|Matched Bot Rules|Action, Category, Mode, CountryCode|
+|AzwafCustomRule|Yes|WAF Custom Rule Matches|Count|Total|Matched Custom Rules|Action, CustomRuleID, Mode, CountryCode|
+|AzwafSecRule|Yes|WAF Managed Rule Matches|Count|Total|Matched Managed Rules|Action, Mode, RuleGroupID, RuleID, CountryCode|
+|AzwafTotalRequests|Yes|WAF Total Requests|Count|Total|Total number of requests evaluated by WAF|Action, CountryCode, Method, Mode|
|BackendConnectTime|No|Backend Connect Time|MilliSeconds|Average|Time spent establishing a connection with a backend server|Listener, BackendServer, BackendPool, BackendHttpSetting|
|BackendFirstByteResponseTime|No|Backend First Byte Response Time|MilliSeconds|Average|Time interval between start of establishing a connection to backend server and receiving the first byte of the response header, approximating processing time of backend server|Listener, BackendServer, BackendPool, BackendHttpSetting|
|BackendLastByteResponseTime|No|Backend Last Byte Response Time|MilliSeconds|Average|Time interval between start of establishing a connection to backend server and receiving the last byte of the response body|Listener, BackendServer, BackendPool, BackendHttpSetting|
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
|ByteCount|Yes|Bytes|Bytes|Total|Total number of Bytes transmitted within time period|Protocol, Direction|
-|DatapathAvailability|Yes|Datapath Availability (Preview)|Count|Average|NAT Gateway Datapath Availability|No Dimensions|
+|DatapathAvailability|Yes|Datapath Availability (Preview)|Count|PortalAverage|NAT Gateway Datapath Availability|No Dimensions|
|PacketCount|Yes|Packets|Count|Total|Total number of Packets transmitted within time period|Protocol, Direction|
|PacketDropCount|Yes|Dropped Packets|Count|Total|Count of dropped packets|No Dimensions|
|SNATConnectionCount|Yes|SNAT Connection Count|Count|Total|Total concurrent active connections|Protocol, ConnectionState|
|ProbeAgentCurrentEndpointStateByProfileResourceId|Yes|Endpoint Status by Endpoint|Count|Maximum|1 if an endpoint's probe status is "Enabled", 0 otherwise.|EndpointName|
|QpsByEndpoint|Yes|Queries by Endpoint Returned|Count|Total|Number of times a Traffic Manager endpoint was returned in the given time frame|EndpointName|

+ ## Microsoft.Network/virtualHubs

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
|CountOfRoutesAdvertisedToPeer|No|Count Of Routes Advertised To Peer|Count|Maximum|Total number of routes advertised to peer|routeserviceinstance, bgppeerip, bgppeertype|
|CountOfRoutesLearnedFromPeer|No|Count Of Routes Learned From Peer|Count|Maximum|Total number of routes learned from peer|routeserviceinstance, bgppeerip, bgppeertype|

+ ## microsoft.network/virtualnetworkgateways

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
|BgpPeerStatus|No|BGP Peer Status|Count|Average|Status of BGP peer|BgpPeerAddress, Instance|
|BgpRoutesAdvertised|Yes|BGP Routes Advertised|Count|Total|Count of Bgp Routes Advertised through tunnel|BgpPeerAddress, Instance|
|BgpRoutesLearned|Yes|BGP Routes Learned|Count|Total|Count of Bgp Routes Learned through tunnel|BgpPeerAddress, Instance|
-|ExpressRouteGatewayCountOfRoutesAdvertisedToPeer|Yes|Count Of Routes Advertised to Peer(Preview)|Count|Maximum|Count Of Routes Advertised To Peer by ExpressRouteGateway|roleInstance|
-|ExpressRouteGatewayCountOfRoutesLearnedFromPeer|Yes|Count Of Routes Learned from Peer (Preview)|Count|Maximum|Count Of Routes Learned From Peer by ExpressRouteGateway|roleInstance|
+|ExpressRouteGatewayBitsPerSecond|No|Bits Received Per second|BitsPerSecond|Average|Total Bits received on ExpressRoute Gateway per second|roleInstance|
+|ExpressRouteGatewayCountOfRoutesAdvertisedToPeer|Yes|Count Of Routes Advertised to Peer|Count|Maximum|Count Of Routes Advertised To Peer by ExpressRoute Gateway|roleInstance|
+|ExpressRouteGatewayCountOfRoutesLearnedFromPeer|Yes|Count Of Routes Learned from Peer|Count|Maximum|Count Of Routes Learned From Peer by ExpressRoute Gateway|roleInstance|
|ExpressRouteGatewayCpuUtilization|Yes|CPU utilization|Percent|Average|CPU Utilization of the ExpressRoute Gateway|roleInstance|
-|ExpressRouteGatewayFrequencyOfRoutesChanged|No|Frequency of Routes change (Preview)|Count|Total|Frequency of Routes change in ExpressRoute Gateway|roleInstance|
-|ExpressRouteGatewayNumberOfVmInVnet|No|Number of VMs in the Virtual Network (Preview)|Count|Maximum|Number of VMs in the Virtual Network|roleInstance|
-|ExpressRouteGatewayPacketsPerSecond|No|Packets per second|CountPerSecond|Average|Packet count of ExpressRoute Gateway|roleInstance|
+|ExpressRouteGatewayFrequencyOfRoutesChanged|No|Frequency of Routes change|Count|Total|Frequency of Routes change in ExpressRoute Gateway|roleInstance|
+|ExpressRouteGatewayNumberOfVmInVnet|No|Number of VMs in the Virtual Network|Count|Maximum|Number of VMs in the Virtual Network|roleInstance|
+|ExpressRouteGatewayPacketsPerSecond|No|Packets received per second|CountPerSecond|Average|Total Packets received on ExpressRoute Gateway per second|roleInstance|
|MmsaCount|Yes|Tunnel MMSA Count|Count|Total|MMSA Count|ConnectionName, RemoteIP, Instance|
|P2SBandwidth|Yes|Gateway P2S Bandwidth|BytesPerSecond|Average|Point-to-site bandwidth of a gateway in bytes per second|Instance|
|P2SConnectionCount|Yes|P2S Connection Count|Count|Total|Point-to-site connection count of a gateway|Protocol, Instance|
|SenderConnections-TotalRequests|No|SenderConnections-TotalRequests|Count|Total|Total SenderConnections requests for Microsoft.Relay.|EntityName|
|SenderDisconnects|No|SenderDisconnects|Count|Total|Total SenderDisconnects for Microsoft.Relay.|EntityName|

+ ## microsoft.resources/subscriptions

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
|Latency|No|Latency|Seconds|Average|Latency data for all requests to Azure Resource Manager|IsCustomerOriginated, Method, Namespace, RequestRegion, ResourceType, StatusCode, StatusCodeClass, Microsoft.SubscriptionId|
|Traffic|No|Traffic|Count|Count|Traffic data for all requests to Azure Resource Manager|IsCustomerOriginated, Method, Namespace, RequestRegion, ResourceType, StatusCode, StatusCodeClass, Microsoft.SubscriptionId|

+ ## Microsoft.Search/searchServices

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
|PendingCheckpointOperationCount|No|Pending Checkpoint Operations Count.|Count|Total|Pending Checkpoint Operations Count.|No Dimensions|
|ScheduledMessages|No|Count of scheduled messages in a Queue/Topic.|Count|Average|Count of scheduled messages in a Queue/Topic.|EntityName|
|ServerErrors|No|Server Errors.|Count|Total|Server Errors for Microsoft.ServiceBus.|EntityName, OperationResult|
+|ServerSendLatency|No|Server Send Latency.|MilliSeconds|Average|Latency of Send Message operations for Service Bus resources.|EntityName|
|Size|No|Size|Bytes|Average|Size of a Queue/Topic in Bytes.|EntityName|
|SuccessfulRequests|No|Successful Requests|Count|Total|Total successful requests for a namespace|EntityName, OperationResult|
|ThrottledRequests|No|Throttled Requests.|Count|Total|Throttled Requests for Microsoft.ServiceBus.|EntityName, OperationResult, MessagingErrorSubCode|
|app_cpu_billed|Yes|App CPU billed|Count|Total|App CPU billed. Applies to serverless databases.|No Dimensions|
|app_cpu_percent|Yes|App CPU percentage|Percent|Average|App CPU percentage. Applies to serverless databases.|No Dimensions|
|app_memory_percent|Yes|App memory percentage|Percent|Average|App memory percentage. Applies to serverless databases.|No Dimensions|
-|base_blob_size_bytes|Yes|Base blob storage size|Bytes|Maximum|Base blob storage size. Applies to Hyperscale databases.|No Dimensions|
+|base_blob_size_bytes|Yes|Data storage size|Bytes|Maximum|Data storage size. Applies to Hyperscale databases.|No Dimensions|
|blocked_by_firewall|Yes|Blocked by Firewall|Count|Total|Blocked by Firewall|No Dimensions|
|cache_hit_percent|Yes|Cache hit percentage|Percent|Maximum|Cache hit percentage. Applies only to data warehouses.|No Dimensions|
|cache_used_percent|Yes|Cache used percentage|Percent|Maximum|Cache used percentage. Applies only to data warehouses.|No Dimensions|
|physical_data_read_percent|Yes|Data IO percentage|Percent|Average|Data IO percentage|No Dimensions|
|queued_queries|Yes|Queued queries|Count|Total|Queued queries across all workload groups. Applies only to data warehouses.|No Dimensions|
|sessions_percent|Yes|Sessions percentage|Percent|Average|Sessions percentage. Not applicable to data warehouses.|No Dimensions|
-|snapshot_backup_size_bytes|Yes|Snapshot backup storage size|Bytes|Maximum|Cumulative snapshot backup storage size. Applies to Hyperscale databases.|No Dimensions|
+|snapshot_backup_size_bytes|Yes|Data backup storage size|Bytes|Maximum|Cumulative data backup storage size. Applies to Hyperscale databases.|No Dimensions|
|sqlserver_process_core_percent|Yes|SQL Server process core percent|Percent|Maximum|CPU usage as a percentage of the SQL DB process. Not applicable to data warehouses.|No Dimensions|
|sqlserver_process_memory_percent|Yes|SQL Server process memory percent|Percent|Maximum|Memory usage as a percentage of the SQL DB process. Not applicable to data warehouses.|No Dimensions|
|storage|Yes|Data space used|Bytes|Maximum|Data space used. Not applicable to data warehouses.|No Dimensions|
|SiteErrors|Yes|SiteErrors|Count|Total|SiteErrors|Instance|
|SiteHits|Yes|SiteHits|Count|Total|SiteHits|Instance|

+ ## Wandisco.Fusion/migrators

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
|TotalMigratedDataInBytes|Yes|Total Migrated Data in Bytes|Bytes|Total|This provides a view of the successfully migrated Bytes for a given migrator|No Dimensions|
|TotalTransactions|Yes|Total Transactions|Count|Total|This provides a running total of the Data Transactions for which the user could be billed.|No Dimensions|

+ ## Next steps

- [Read about metrics in Azure Monitor](../data-platform.md)
azure-monitor Monitor Azure Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/monitor-azure-resource.md
As soon as you create an Azure resource, Azure Monitor is enabled and starts col
- [Azure Monitor Metrics](../essentials/data-platform-metrics.md) stores numeric data from monitored resources in a time series database. The metric database is automatically created for each Azure subscription. Use [metrics explorer](../essentials/tutorial-metrics.md) to analyze data from Azure Monitor Metrics.
- [Azure Monitor Logs](../logs/data-platform-logs.md) collects logs and performance data, which can be retrieved and analyzed in different ways using log queries. You must create a Log Analytics workspace to collect log data. Use [Log Analytics](../logs/log-analytics-tutorial.md) to analyze data from Azure Monitor Logs.
-### Monitoring data from Azure resources
+### <a id="monitoring-data-from-azure-resources"></a> Monitor data from Azure resources
While resources from different Azure services have different monitoring requirements, they generate monitoring data in the same formats so that you can use the same Azure Monitor tools to analyze all Azure resources.
-Azure resources generate the following monitoring data:
+Diagnostic settings define where resource logs and metrics for a particular resource should be sent. Possible destinations are:
- [Activity log](./platform-logs-overview.md) - Subscription level events that track operations for each Azure resource, for example creating a new resource or starting a virtual machine. Activity log events are automatically generated and collected for viewing in the Azure portal. You can create a diagnostic setting to send the Activity log to Azure Monitor Logs.
- [Platform metrics](../essentials/data-platform-metrics.md) - Numerical values that are automatically collected at regular intervals and describe some aspect of a resource at a particular time. Platform metrics are automatically generated and collected in Azure Monitor Metrics.
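A diagnostic setting that routes a resource's platform metrics and resource logs to a Log Analytics workspace can be created with Azure CLI. This is a hedged sketch: every ID below is a placeholder, and the log category shown (`OperationalLogs`, a Service Bus category) must be replaced with a category the target resource type actually supports:

```shell
# Sketch only: route one resource's metrics and logs to a Log Analytics workspace.
# Replace the <placeholders> with real subscription, resource group, and resource names.
az monitor diagnostic-settings create \
  --name send-to-workspace \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.ServiceBus/namespaces/<namespace>" \
  --workspace "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>" \
  --metrics '[{"category": "AllMetrics", "enabled": true}]' \
  --logs '[{"category": "OperationalLogs", "enabled": true}]'
```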
azure-monitor Resource Logs Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs-categories.md
Title: Supported categories for Azure Monitor resource logs description: Understand the supported services and event schemas for Azure Monitor resource logs. Previously updated : 03/03/2022 Last updated : 04/12/2022
If you think something is missing, you can open a GitHub comment at the bottom of this article.
|PrivilegeUse|PrivilegeUse|No|
|SystemSecurity|SystemSecurity|No|

- ## microsoft.aadiam/tenants

|Category|Category Display Name|Costs To Export|
|ModelInferenceLogs|Model Inference Logs|Yes|
|ProviderAuthLogs|Provider Auth Logs|Yes|
|SatelliteLogs|Satellite Logs|Yes|
+|SensorManagementLogs|Sensor Management Logs|Yes|
|WeatherLogs|Weather Logs|Yes|
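Which of these categories a particular resource actually supports can be listed with Azure CLI before configuring export; a sketch, with the resource ID left as a placeholder:

```shell
# List the supported diagnostic log and metric categories for one resource.
# <resource-id> is a placeholder for a full Azure resource ID.
az monitor diagnostic-settings categories list --resource "<resource-id>"
```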
||||
|ApplicationConsole|Application Console|No|
|BuildLogs|Build Logs|Yes|
+|ContainerEventLogs|Container Event Logs|Yes|
|IngressLogs|Ingress Logs|Yes|
|SystemLogs|System Logs|No|
||||
|BlockchainApplication|Blockchain Application|No|

- ## microsoft.botservice/botservices

|Category|Category Display Name|Costs To Export|
|Usage|Usage Records|No|
+## Microsoft.ConnectedCache/CacheNodes
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Events|Events|Yes|
+
+## Microsoft.ConnectedVehicle/platformAccounts

|Category|Category Display Name|Costs To Export|
||||
|accounts|Databricks Accounts|No|
|clusters|Databricks Clusters|No|
+|databrickssql|Databricks DatabricksSQL|Yes|
|dbfs|Databricks File System|No|
+|deltaPipelines|Databricks Delta Pipelines|Yes|
|featureStore|Databricks Feature Store|Yes|
|genie|Databricks Genie|Yes|
|globalInitScripts|Databricks Global Init Scripts|Yes|
|jobs|Databricks Jobs|No|
|mlflowAcledArtifact|Databricks MLFlow Acled Artifact|Yes|
|mlflowExperiment|Databricks MLFlow Experiment|Yes|
+|modelRegistry|Databricks Model Registry|Yes|
|notebook|Databricks Notebook|No|
|RemoteHistoryService|Databricks Remote History Service|Yes|
+|repos|Databricks Repos|Yes|
|secrets|Databricks Secrets|No|
|sqlanalytics|Databricks SQL Analytics|Yes|
|sqlPermissions|Databricks SQLPermissions|No|
|ssh|Databricks SSH|No|
+|unityCatalog|Databricks Unity Catalog|Yes|
|workspace|Databricks Workspace|No|
|Category|Category Display Name|Costs To Export|
||||
|ActivityRuns|Pipeline activity runs log|No|
-|AirflowDagProcessingLogs|Airflow dag processing logs|Yes|
-|AirflowSchedulerLogs|Airflow scheduler logs|Yes|
-|AirflowTaskLogs|Airflow task execution logs|Yes|
-|AirflowWebLogs|Airflow web logs|Yes|
-|AirflowWorkerLogs|Airflow worker logs|Yes|
|PipelineRuns|Pipeline runs log|No|
|SandboxActivityRuns|Sandbox Activity runs log|Yes|
|SandboxPipelineRuns|Sandbox Pipeline runs log|Yes|
|Category|Category Display Name|Costs To Export|
||||
|Audit|Audit Logs|No|
+|ConfigurationChange|Configuration Change Event Logs|Yes|
+|JobEvent|Job Event Logs|Yes|
|JobInfo|Job Info Logs|Yes|
|Requests|Request Logs|No|
|OperationalLogs|Operational Logs|No|
+## MICROSOFT.OPENENERGYPLATFORM/ENERGYSERVICES
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AirFlowTaskLogs|Air Flow Task Logs|Yes|
+
+## Microsoft.OpenLogisticsPlatform/Workspaces

|Category|Category Display Name|Costs To Export|
|Management|Management|No|
+## microsoft.videoindexer/accounts
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Audit|Audit|Yes|
+
+## Microsoft.Web/hostingEnvironments

|Category|Category Display Name|Costs To Export|
azure-monitor Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/azure-sql.md
description: Azure SQL Analytics solution helps you manage your Azure SQL databa
Previously updated : 11/22/2021 Last updated : 03/10/2022
-# Monitor Azure SQL Database using Azure SQL Analytics (Preview)
+# Monitor Azure SQL Database using Azure SQL Analytics (preview)
> [!CAUTION] > Azure SQL Analytics (preview) is an integration with Azure Monitor, where many monitoring solutions are no longer in active development. For more monitoring options, see [Monitoring and performance tuning in Azure SQL Database and Azure SQL Managed Instance](../../azure-sql/database/monitor-tune-overview.md).
-![Azure SQL Analytics symbol](./media/azure-sql/azure-sql-symbol.png)
- Azure SQL Analytics (preview) is an advanced cloud monitoring solution for monitoring performance of all of your Azure SQL databases at scale and across multiple subscriptions in a single view. Azure SQL Analytics collects and visualizes key performance metrics with built-in intelligence for performance troubleshooting. By using these collected metrics, you can create custom monitoring rules and alerts. Azure SQL Analytics helps you to identify issues at each layer of your application stack. Azure SQL Analytics uses [Azure Diagnostics](../agents/diagnostics-extension-overview.md) metrics along with Azure Monitor views to present data about all your Azure SQL databases in a single Log Analytics workspace. Azure Monitor helps you to collect, correlate, and visualize structured and unstructured data.
azure-monitor Resource Manager Sql Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/resource-manager-sql-insights.md
Title: Resource Manager template samples for SQL insights
-description: Sample Azure Resource Manager templates to deploy and configure SQL insights.
+ Title: Resource Manager template samples for SQL Insights (preview)
+description: Sample Azure Resource Manager templates to deploy and configure SQL Insights (preview).
Last updated 03/25/2021
-# Resource Manager template samples for SQL insights
-This article includes sample [Azure Resource Manager templates](../../azure-resource-manager/templates/syntax.md) to enable SQL insights for monitoring SQL running in Azure. See the [SQL insights documentation](sql-insights-overview.md) for details on the offering and versions of SQL we support. Each sample includes a template file and a parameters file with sample values to provide to the template.
+# Resource Manager template samples for SQL Insights (preview)
+This article includes sample [Azure Resource Manager templates](../../azure-resource-manager/templates/syntax.md) to enable SQL Insights (preview) for monitoring SQL running in Azure. See the [SQL Insights (preview) documentation](sql-insights-overview.md) for details on the offering and versions of SQL we support. Each sample includes a template file and a parameters file with sample values to provide to the template.
[!INCLUDE [azure-monitor-samples](../../../includes/azure-monitor-resource-manager-samples.md)]
-## Create a SQL insights monitoring profile
-The following sample creates a SQL insights monitoring profile, which includes the SQL monitoring data to collect, frequency of data collection, and specifies the workspace the data will be sent to.
+## Create a SQL Insights (preview) monitoring profile
+The following sample creates a SQL Insights monitoring profile, which specifies the SQL monitoring data to collect, the frequency of data collection, and the workspace the data will be sent to.
### Template file
View the [template file on GitHub](https://github.com/microsoft/Application-Ins
View the [parameter file on GitHub](https://github.com/microsoft/Application-Insights-Workbooks/blob/master/Workbooks/Workloads/SQL/Create%20new%20profile/CreateNewProfile.parameters.json).
-## Add a monitoring VM to a SQL insights monitoring profile
-Once you have created a monitoring profile, you need to allocate Azure virtual machines that will be configured to remotely collect data from the SQL resources you specify in the configuration for that VM. Refer to the SQL insights enable documentation for more details.
+## Add a monitoring VM to a SQL Insights monitoring profile
+Once you have created a monitoring profile, you need to allocate Azure virtual machines that will be configured to remotely collect data from the SQL resources you specify in the configuration for that VM. Refer to the SQL Insights enable documentation for more details.
The following sample configures a monitoring VM to collect the data from the specified SQL resources.
View the [template file on GitHub](https://github.com/microsoft/Application-Ins
View the [parameter file on GitHub](https://github.com/microsoft/Application-Insights-Workbooks/blob/master/Workbooks/Workloads/SQL/Add%20monitoring%20virtual%20machine/AddMonitoringVirtualMachine.parameters.json).
-## Create an alert rule for SQL insights
-The following sample creates an alert rule that will cover the SQL resources within the scope of the specified monitoring profile. This alert rule will appear in the SQL insights UI in the alerts UI context panel.
+## Create an alert rule for SQL Insights
+The following sample creates an alert rule that will cover the SQL resources within the scope of the specified monitoring profile. This alert rule will appear in the SQL Insights UI in the alerts UI context panel.
-The parameter file has values from one of the alert templates we provide in SQL insights, you can modify it to alert on other data we collect for SQL. The template does not specify an action group for the alert rule.
+The parameter file has values from one of the alert templates we provide in SQL Insights; you can modify it to alert on other data we collect for SQL. The template does not specify an action group for the alert rule.
#### Template file
View the [parameter file on GitHub](https://github.com/microsoft/Application-In
## Next steps
* [Get other sample templates for Azure Monitor](../resource-manager-samples.md).
-* [Learn more about SQL insights](sql-insights-overview.md).
+* [Learn more about SQL Insights (preview)](sql-insights-overview.md).
azure-monitor Sql Insights Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/sql-insights-alerts.md
Title: Create alerts with SQL insights (preview)
-description: Create alerts with SQL insights in Azure Monitor
+ Title: Create alerts with SQL Insights (preview)
+description: Create alerts with SQL Insights (preview) in Azure Monitor
Last updated 03/12/2021
-# Create alerts with SQL insights (preview)
-SQL insights includes a set of alert rule templates you can use to create [alert rules in Azure Monitor](../alert/../alerts/alerts-overview.md) for common SQL issues. The alert rules in SQL insights are log alert rules based on performance data stored in the *InsightsMetrics* table in Azure Monitor Logs.
+# Create alerts with SQL Insights (preview)
+SQL Insights (preview) includes a set of alert rule templates you can use to create [alert rules in Azure Monitor](../alerts/alerts-overview.md) for common SQL issues. The alert rules in SQL Insights (preview) are log alert rules based on performance data stored in the *InsightsMetrics* table in Azure Monitor Logs.
> [!NOTE]
-> To create an alert for SQL insights using a resource manager template, see [Resource Manager template samples for SQL insights](resource-manager-sql-insights.md#create-an-alert-rule-for-sql-insights).
+> To create an alert for SQL Insights (preview) using a Resource Manager template, see [Resource Manager template samples for SQL Insights (preview)](resource-manager-sql-insights.md#create-an-alert-rule-for-sql-insights).
> [!NOTE]
-> If you have requests for more SQL insights alert rule templates, please send feedback using the link at the bottom of this page or using the SQL insights feedback link in the Azure portal.
+> If you have requests for more SQL Insights (preview) alert rule templates, please send feedback using the link at the bottom of this page or using the SQL Insights (preview) feedback link in the Azure portal.
## Enable alert rules

Use the following steps to enable the alerts in Azure Monitor from the Azure portal. The alert rules that are created will be scoped to all of the SQL resources monitored under the selected monitoring profile. When an alert rule is triggered, it will trigger on the specific SQL instance or database.
azure-monitor Sql Insights Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/sql-insights-enable.md
Title: Enable SQL insights
-description: Enable SQL insights in Azure Monitor
+ Title: Enable SQL Insights (preview)
+description: Enable SQL Insights (preview) in Azure Monitor
Last updated 1/18/2022
-# Enable SQL insights (preview)
-This article describes how to enable [SQL insights](sql-insights-overview.md) to monitor your SQL deployments. Monitoring is performed from an Azure virtual machine that makes a connection to your SQL deployments and uses Dynamic Management Views (DMVs) to gather monitoring data. You can control what datasets are collected and the frequency of collection using a monitoring profile.
+# Enable SQL Insights (preview)
+This article describes how to enable [SQL Insights (preview)](sql-insights-overview.md) to monitor your SQL deployments. Monitoring is performed from an Azure virtual machine that makes a connection to your SQL deployments and uses Dynamic Management Views (DMVs) to gather monitoring data. You can control what datasets are collected and the frequency of collection using a monitoring profile.
> [!NOTE]
-> To enable SQL insights by creating the monitoring profile and virtual machine using a resource manager template, see [Resource Manager template samples for SQL insights](resource-manager-sql-insights.md).
+> To enable SQL Insights (preview) by creating the monitoring profile and virtual machine using a Resource Manager template, see [Resource Manager template samples for SQL Insights (preview)](resource-manager-sql-insights.md).
-To learn how to enable SQL Insights, you can also refer to this Data Exposed episode.
+To learn how to enable SQL Insights (preview), you can also refer to this Data Exposed episode.
> [!VIDEO https://docs.microsoft.com/Shows/Data-Exposed/How-to-Set-up-Azure-Monitor-for-SQL-Insights/player?format=ny]

## Create Log Analytics workspace
-SQL insights stores its data in one or more [Log Analytics workspaces](../logs/data-platform-logs.md#log-analytics-workspaces). Before you can enable SQL Insights, you need to either [create a workspace](../logs/quick-create-workspace.md) or select an existing one. A single workspace can be used with multiple monitoring profiles, but the workspace and profiles must be located in the same Azure region. To enable and access the features in SQL insights, you must have the [Log Analytics contributor role](../logs/manage-access.md) in the workspace.
+SQL Insights stores its data in one or more [Log Analytics workspaces](../logs/data-platform-logs.md#log-analytics-workspaces). Before you can enable SQL Insights, you need to either [create a workspace](../logs/quick-create-workspace.md) or select an existing one. A single workspace can be used with multiple monitoring profiles, but the workspace and profiles must be located in the same Azure region. To enable and access the features in SQL Insights, you must have the [Log Analytics contributor role](../logs/manage-access.md) in the workspace.
## Create monitoring user

You need a user (login) on the SQL deployments that you want to monitor. Follow the procedures below for different types of SQL deployments.
The instructions below cover the process per type of SQL that you can monitor. T
### Azure SQL Database

> [!NOTE]
-> SQL insights does not support the following Azure SQL Database scenarios:
+> SQL Insights (preview) does not support the following Azure SQL Database scenarios:
> - **Elastic pools**: Metrics cannot be gathered for elastic pools. Metrics cannot be gathered for databases within elastic pools.
> - **Low service tiers**: Metrics cannot be gathered for databases on Basic, S0, S1, and S2 [service tiers](../../azure-sql/database/resource-limits-dtu-single-databases.md).
>
-> SQL insights has limited support for the following Azure SQL Database scenarios:
+> SQL Insights (preview) has limited support for the following Azure SQL Database scenarios:
> - **Serverless tier**: Metrics can be gathered for databases using the [serverless compute tier](../../azure-sql/database/serverless-tier-overview.md). However, the process of gathering metrics will reset the auto-pause delay timer, preventing the database from entering an auto-paused state.

Connect to an Azure SQL database with [SQL Server Management Studio](../../azure-sql/database/connect-query-ssms.md), [Query Editor (preview)](../../azure-sql/database/connect-query-portal.md) in the Azure portal, or any other SQL client tool.
If you have these permissions, a new Key Vault access policy will be automatical
> You need to ensure that network and security configuration allows the monitoring VM to access Key Vault. For more information, see [Access Azure Key Vault behind a firewall](../../key-vault/general/access-behind-firewall.md) and [Configure Azure Key Vault networking settings](../../key-vault/general/how-to-azure-key-vault-network-security.md).

## Create SQL monitoring profile
-Open SQL insights by selecting **SQL (preview)** from the **Insights** section of the **Azure Monitor** menu in the Azure portal. Click **Create new profile**.
+Open SQL Insights (preview) by selecting **SQL (preview)** from the **Insights** section of the **Azure Monitor** menu in the Azure portal. Click **Create new profile**.
:::image type="content" source="media/sql-insights-enable/create-new-profile.png" alt-text="Create new profile." lightbox="media/sql-insights-enable/create-new-profile.png":::
Select the subscription and name of your monitoring virtual machine. If you're u
:::image type="content" source="media/sql-insights-enable/add-monitoring-machine.png" alt-text="Add monitoring machine." lightbox="media/sql-insights-enable/add-monitoring-machine.png":::

### Add connection strings
-The connection string specifies the login name that SQL insights should use when logging into SQL to collect monitoring data. If you're using a Key Vault to store the password for your monitoring user, provide the Key Vault URI and name of the secret that contains the password.
+The connection string specifies the login name that SQL Insights (preview) should use when logging into SQL to collect monitoring data. If you're using a Key Vault to store the password for your monitoring user, provide the Key Vault URI and name of the secret that contains the password.
The connection string will vary for each type of SQL resource:
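As a hedged illustration only (the server, database, and login names below are hypothetical placeholders, and the exact format for your resource type is the one shown in this section), a TCP connection string for an Azure SQL database might look like:

```
Server=mysqlserver.database.windows.net;Port=1433;Database=mydatabase;User Id=telegraf;Password=<password or Key Vault secret>;
```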
WHERE net_transport = 'TCP'
Select **Add monitoring virtual machine** to configure the virtual machine to collect data from your SQL resources. Do not return to the **Overview** tab. In a few minutes, the Status column should change to read "Collecting", and you should see data for the SQL resources you have chosen to monitor.
-If you do not see data, see [Troubleshooting SQL insights](sql-insights-troubleshoot.md) to identify the issue.
+If you do not see data, see [Troubleshooting SQL Insights (preview)](sql-insights-troubleshoot.md) to identify the issue.
:::image type="content" source="media/sql-insights-enable/profile-created.png" alt-text="Profile created" lightbox="media/sql-insights-enable/profile-created.png":::

> [!NOTE]
-> If you need to update your monitoring profile or the connection strings on your monitoring VMs, you may do so via the SQL insights **Manage profile** tab. Once your updates have been saved the changes will be applied in approximately 5 minutes.
+> If you need to update your monitoring profile or the connection strings on your monitoring VMs, you may do so via the SQL Insights (preview) **Manage profile** tab. Once your updates have been saved, the changes will be applied in approximately 5 minutes.
## Next steps
-- See [Troubleshooting SQL insights](sql-insights-troubleshoot.md) if SQL insights isn't working properly after being enabled.
+- See [Troubleshooting SQL Insights (preview)](sql-insights-troubleshoot.md) if SQL Insights isn't working properly after being enabled.
azure-monitor Sql Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/sql-insights-overview.md
Title: Monitor your SQL deployments with SQL insights (preview)
-description: Overview of SQL insights in Azure Monitor
+ Title: Monitor your SQL deployments with SQL Insights (preview)
+description: Overview of SQL Insights (preview) in Azure Monitor
Previously updated : 11/10/2021+ Last updated : 04/14/2022
-# Monitor your SQL deployments with SQL insights (preview)
-SQL insights is a comprehensive solution for monitoring any product in the [Azure SQL family](../../azure-sql/index.yml). SQL insights uses [dynamic management views](../../azure-sql/database/monitoring-with-dmvs.md) to expose the data that you need to monitor health, diagnose problems, and tune performance.
+# Monitor your SQL deployments with SQL Insights (preview)
-SQL insights performs all monitoring remotely. Monitoring agents on dedicated virtual machines connect to your SQL resources and remotely gather data. The gathered data is stored in [Azure Monitor Logs](../logs/data-platform-logs.md) to enable easy aggregation, filtering, and trend analysis. You can view the collected data from the SQL insights [workbook template](../visualize/workbooks-overview.md), or you can delve directly into the data by using [log queries](../logs/get-started-queries.md).
+SQL Insights (preview) is a comprehensive solution for monitoring any product in the [Azure SQL family](../../azure-sql/index.yml). SQL Insights uses [dynamic management views](../../azure-sql/database/monitoring-with-dmvs.md) to expose the data that you need to monitor health, diagnose problems, and tune performance.
+
+SQL Insights performs all monitoring remotely. Monitoring agents on dedicated virtual machines connect to your SQL resources and remotely gather data. The gathered data is stored in [Azure Monitor Logs](../logs/data-platform-logs.md) to enable easy aggregation, filtering, and trend analysis. You can view the collected data from the SQL Insights [workbook template](../visualize/workbooks-overview.md), or you can delve directly into the data by using [log queries](../logs/get-started-queries.md).
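Because the collected data lands in Azure Monitor Logs, you can also explore it with ordinary log queries. A minimal sketch, assuming data has already been collected into the workspace:

```kusto
// List the SQL metric namespaces that reported data in the last hour.
// The "sqlserver_" prefix matches the namespaces SQL Insights writes to InsightsMetrics.
InsightsMetrics
| where TimeGenerated > ago(1h)
| where Namespace startswith "sqlserver_"
| summarize Samples = count() by Namespace
| sort by Samples desc
```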
+The following diagram details how information from the database engine and Azure resource logs is collected and how it can be surfaced. For a more detailed diagram of Azure SQL logging, see [Monitoring and diagnostic telemetry](../../azure-sql/database/monitor-tune-overview.md#monitoring-and-diagnostic-telemetry).
+ ## Pricing
-There is no direct cost for SQL insights. All costs are incurred by the virtual machines that gather the data, the Log Analytics workspaces that store the data, and any alert rules configured on the data.
+There is no direct cost for SQL Insights (preview). All costs are incurred by the virtual machines that gather the data, the Log Analytics workspaces that store the data, and any alert rules configured on the data.
### Virtual machines
For virtual machines, you're charged based on the pricing published on the [virt
### Log Analytics workspaces
-For the Log Analytics workspaces, you're charged based on the pricing published on the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/). The Log Analytics workspaces that SQL insights uses will incur costs for data ingestion, data retention, and (optionally) data export.
+For the Log Analytics workspaces, you're charged based on the pricing published on the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/). The Log Analytics workspaces that SQL Insights uses will incur costs for data ingestion, data retention, and (optionally) data export.
Exact charges will vary based on the amount of data ingested, retained, and exported. The amount of this data will vary based on your database activity and the collection settings defined in your [monitoring profiles](sql-insights-enable.md#create-sql-monitoring-profile).

### Alert rules
-For alert rules in Azure Monitor, you're charged based on the pricing published on the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/). If you choose to [create alerts with SQL insights](sql-insights-alerts.md), you're charged for any alert rules created and any notifications sent.
+For alert rules in Azure Monitor, you're charged based on the pricing published on the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/). If you choose to [create alerts with SQL Insights (preview)](sql-insights-alerts.md), you're charged for any alert rules created and any notifications sent.
## Supported versions
-SQL insights supports the following versions of SQL Server:
+SQL Insights (preview) supports the following versions of SQL Server:
- SQL Server 2012 and newer
-SQL insights supports SQL Server running in the following environments:
+SQL Insights (preview) supports SQL Server running in the following environments:
- Azure SQL Database
- Azure SQL Managed Instance
- SQL Server on Azure Virtual Machines (SQL Server running on virtual machines registered with the [SQL virtual machine](../../azure-sql/virtual-machines/windows/sql-agent-extension-manually-register-single-vm.md) provider)
- Azure VMs (SQL Server running on virtual machines not registered with the [SQL virtual machine](../../azure-sql/virtual-machines/windows/sql-agent-extension-manually-register-single-vm.md) provider)
-SQL insights has no support or has limited support for the following:
+SQL Insights (preview) has no support or has limited support for the following:
- **Non-Azure instances**: SQL Server running on virtual machines outside Azure is not supported.
- **Azure SQL Database elastic pools**: Metrics can't be gathered for elastic pools or for databases within elastic pools.
- **Azure SQL Database low service tiers**: Metrics can't be gathered for databases on Basic, S0, S1, and S2 [service tiers](../../azure-sql/database/resource-limits-dtu-single-databases.md).
SQL insights has no support or has limited support for the following:
## Regional availability
-SQL Insights is available in all Azure regions where Azure Monitor is [available](https://azure.microsoft.com/global-infrastructure/services/?products=monitor), with the exception of Azure government and national clouds.
+SQL Insights (preview) is available in all Azure regions where Azure Monitor is [available](https://azure.microsoft.com/global-infrastructure/services/?products=monitor), with the exception of Azure government and national clouds.
-## Opening SQL insights
+## Open SQL Insights
-To open SQL insights:
+To open SQL Insights:
1. In the Azure portal, go to the **Azure Monitor** menu.
1. In the **Insights** section, select **SQL (preview)**.
1. Select a tile to load the experience for the SQL resource that you're monitoring.
-For more instructions, see [Enable SQL insights](sql-insights-enable.md) and [Troubleshoot SQL insights](sql-insights-troubleshoot.md).
+For more instructions, see [Enable SQL Insights (preview)](sql-insights-enable.md) and [Troubleshoot SQL Insights (preview)](sql-insights-troubleshoot.md).
## Collected data
-SQL insights performs all monitoring remotely. No agents are installed on the virtual machines running SQL Server.
+SQL Insights performs all monitoring remotely. No agents are installed on the virtual machines running SQL Server.
-SQL insights uses dedicated monitoring virtual machines to remotely collect data from your SQL resources. Each monitoring virtual machine has the [Azure Monitor agent](../agents/azure-monitor-agent-overview.md) and the Workload Insights (WLI) extension installed.
+SQL Insights uses dedicated monitoring virtual machines to remotely collect data from your SQL resources. Each monitoring virtual machine has the [Azure Monitor agent](../agents/azure-monitor-agent-overview.md) and the Workload Insights (WLI) extension installed.
-The WLI extension includes the open-source [Telegraf agent](https://www.influxdata.com/time-series-platform/telegraf/). SQL insights uses [data collection rules](../essentials/data-collection-rule-overview.md) to specify the data collection settings for Telegraf's [SQL Server plug-in](https://www.influxdata.com/integration/microsoft-sql-server/).
+The WLI extension includes the open-source [Telegraf agent](https://www.influxdata.com/time-series-platform/telegraf/). SQL Insights uses [data collection rules](../essentials/data-collection-rule-overview.md) to specify the data collection settings for Telegraf's [SQL Server plug-in](https://www.influxdata.com/integration/microsoft-sql-server/).
Different sets of data are available for Azure SQL Database, Azure SQL Managed Instance, and SQL Server. The following tables describe the available data. You can customize which datasets to collect and the frequency of collection when you [create a monitoring profile](sql-insights-enable.md#create-sql-monitoring-profile).
The tables have the following columns:
- **Default collection frequency**: How often the data is collected by default.

### Data for Azure SQL Database
+
| Friendly name | Configuration name | Namespace | DMVs | Enabled by default | Default collection frequency |
|:|:|:|:|:|:|
| DB wait stats | AzureSQLDBWaitStats | sqlserver_azuredb_waitstats | sys.dm_db_wait_stats | No | Not applicable |
The tables have the following columns:
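As a sketch of querying one of these datasets once collected (the namespace comes from the table above; the aggregation shown is an assumption about how you might summarize it, not a prescribed query):

```kusto
// Top waits from the DB wait stats dataset (namespace sqlserver_azuredb_waitstats).
InsightsMetrics
| where Namespace == "sqlserver_azuredb_waitstats"
| where TimeGenerated > ago(1h)
| summarize TotalWait = sum(Val) by Name
| top 10 by TotalWait
```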
## Next steps
-- For frequently asked questions about SQL insights, see [Frequently asked questions](../faq.yml).
+- For frequently asked questions about SQL Insights (preview), see [Frequently asked questions](../faq.yml).
+- [Monitoring and performance tuning in Azure SQL Database and Azure SQL Managed Instance](../../azure-sql/database/monitor-tune-overview.md)
azure-monitor Sql Insights Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/sql-insights-troubleshoot.md
Title: Troubleshoot SQL insights (preview)
-description: Learn how to troubleshoot SQL insights in Azure Monitor.
+ Title: Troubleshoot SQL Insights (preview)
+description: Learn how to troubleshoot SQL Insights (preview) in Azure Monitor.
Last updated 4/19/2022
-# Troubleshoot SQL insights (preview)
-To troubleshoot data collection issues in SQL insights, check the status of the monitoring machine on the **Manage profile** tab. The statuses are:
+# Troubleshoot SQL Insights (preview)
+To troubleshoot data collection issues in SQL Insights (preview), check the status of the monitoring machine on the **Manage profile** tab. The statuses are:
- **Collecting**
- **Not collecting**
The monitoring machine has a status of **Not collecting** if there's no data in
> [!NOTE]
> Make sure that you're trying to collect data from a [supported version of SQL](sql-insights-overview.md#supported-versions). For example, trying to collect data with a valid profile and connection string but from an unsupported version of Azure SQL Database will result in a **Not collecting** status.
-SQL insights uses the following query to retrieve this information:
+SQL Insights (preview) uses the following query to retrieve this information:
```kusto
InsightsMetrics
During preview of SQL Insights, you may encounter the following known issues.
## Next steps
-- Get details on [enabling SQL insights](sql-insights-enable.md).
+- Get details on [enabling SQL Insights (preview)](sql-insights-enable.md).
azure-monitor Monitor Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/monitor-reference.md
Last updated 04/05/2022

# What is monitored by Azure Monitor?
The table below lists the available curated visualizations and more detailed inf
| [Azure Monitor Application Insights](./app/app-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/applicationsInsights) | Extensible Application Performance Management (APM) service which monitors the availability, performance, and usage of your web applications whether they're hosted in the cloud or on-premises. It leverages the powerful data analysis platform in Azure Monitor to provide you with deep insights into your application's operations. It enables you to diagnose errors without waiting for a user to report them. Application Insights includes connection points to a variety of development tools and integrates with Visual Studio to support your DevOps processes. |
| [Azure Monitor Log Analytics Workspace](./logs/log-analytics-workspace-insights-overview.md) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/lawsInsights) | Log Analytics Workspace Insights (preview) provides comprehensive monitoring of your workspaces through a unified view of your workspace usage, performance, health, agent, queries, and change log. This article will help you understand how to onboard and use Log Analytics Workspace Insights (preview). |
| [Azure Service Bus Insights](../service-bus-messaging/service-bus-insights.md) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/serviceBusInsights) | Azure Service Bus insights provide a view of the overall performance, failures, capacity, and operational health of all your Service Bus resources in a unified interactive experience. |
- | [Azure SQL insights](./insights/sql-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/sqlWorkloadInsights) | A comprehensive interface for monitoring any product in the Azure SQL family. SQL insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance. Note: If you are just setting up SQL monitoring, use this instead of the SQL Analytics solution. |
+ | [Azure SQL insights (preview)](./insights/sql-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/sqlWorkloadInsights) | A comprehensive interface for monitoring any product in the Azure SQL family. SQL insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance. Note: If you are just setting up SQL monitoring, use this instead of the SQL Analytics solution. |
| [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/storageInsights) | Provides comprehensive monitoring of your Azure Storage accounts by delivering a unified view of your Azure Storage services performance, capacity, and availability. |
| [Azure Network Insights](./insights/network-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/networkInsights) | Provides a comprehensive view of health and metrics for all your network resources. The advanced search capability helps you identify resource dependencies, enabling scenarios like identifying resources that are hosting your website, by simply searching for your website name. |
| [Azure Monitor for Resource Groups](./insights/resource-group-insights.md) | GA | No | Triage and diagnose any problems your individual resources encounter, while offering context as to the health and performance of the resource group as a whole. |
The following table lists Azure services and the data they collect into Azure Mo
| [Service Fabric](../service-fabric/index.yml) | Microsoft.ServiceFabric | No | No | [Service Fabric](../service-fabric/index.yml) | Agent required to monitor guest operating system and workflows.|
| [Azure SignalR Service](../azure-signalr/index.yml) | Microsoft.SignalRService/SignalR | [**Yes**](./essentials/metrics-supported.md#microsoftsignalrservicesignalr) | [**Yes**](./essentials/resource-logs-categories.md#microsoftsignalrservicesignalr) | | |
| [Azure SignalR Service](../azure-signalr/index.yml) | Microsoft.SignalRService/WebPubSub | [**Yes**](./essentials/metrics-supported.md#microsoftsignalrservicewebpubsub) | [**Yes**](./essentials/resource-logs-categories.md#microsoftsignalrservicewebpubsub) | | |
- | [Azure SQL Managed Instance](../azure-sql/database/monitoring-tuning-index.yml) | Microsoft.Sql/managedInstances | [**Yes**](./essentials/metrics-supported.md#microsoftsqlmanagedinstances) | [**Yes**](./essentials/resource-logs-categories.md#microsoftsqlmanagedinstances) | [Azure SQL insights](./insights/sql-insights-overview.md) | |
- | [Azure SQL Database](../azure-sql/database/index.yml) | Microsoft.Sql/servers/databases | [**Yes**](./essentials/metrics-supported.md#microsoftsqlserversdatabases) | No | [Azure SQL insights](./insights/sql-insights-overview.md) | |
- | [Azure SQL Database](../azure-sql/database/index.yml) | Microsoft.Sql/servers/elasticpools | [**Yes**](./essentials/metrics-supported.md#microsoftsqlserverselasticpools) | No | [Azure SQL insights](./insights/sql-insights-overview.md) | |
+ | [Azure SQL Managed Instance](../azure-sql/database/monitoring-tuning-index.yml) | Microsoft.Sql/managedInstances | [**Yes**](./essentials/metrics-supported.md#microsoftsqlmanagedinstances) | [**Yes**](./essentials/resource-logs-categories.md#microsoftsqlmanagedinstances) | [Azure SQL Insights (preview)](./insights/sql-insights-overview.md) | |
+ | [Azure SQL Database](../azure-sql/database/index.yml) | Microsoft.Sql/servers/databases | [**Yes**](./essentials/metrics-supported.md#microsoftsqlserversdatabases) | No | [Azure SQL Insights (preview)](./insights/sql-insights-overview.md) | |
+ | [Azure SQL Database](../azure-sql/database/index.yml) | Microsoft.Sql/servers/elasticpools | [**Yes**](./essentials/metrics-supported.md#microsoftsqlserverselasticpools) | No | [Azure SQL Insights (preview)](./insights/sql-insights-overview.md) | |
| [Azure Storage](../storage/index.yml) | Microsoft.Storage/storageAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftstoragestorageaccounts) | No | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
| [Azure Storage Blobs](../storage/blobs/index.yml) | Microsoft.Storage/storageAccounts/blobServices | [**Yes**](./essentials/metrics-supported.md#microsoftstoragestorageaccountsblobservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftstoragestorageaccountsblobservices) | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
| [Azure Storage Files](../storage/files/index.yml) | Microsoft.Storage/storageAccounts/fileServices | [**Yes**](./essentials/metrics-supported.md#microsoftstoragestorageaccountsfileservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftstoragestorageaccountsfileservices) | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
azure-monitor Monitor Virtual Machine Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-workloads.md
Use [VMConnection](/azure/azure-monitor/reference/tables/vmconnection) and [VMBo
```kusto
VMConnection
- | where Computer == <replace this with a computer name, e.g. ΓÇÿacme-demoΓÇÖ>
+ | where Computer == <replace this with a computer name, e.g. 'acme-demo'>
| extend bythehour = datetime_part("hour", TimeGenerated)
| project bythehour, LinksFailed
| summarize failCount = count() by bythehour
Use [VMConnection](/azure/azure-monitor/reference/tables/vmconnection) and [VMBo
```kusto
VMConnection
- | where Computer == <replace this with a computer name, e.g. ΓÇÿacme-demoΓÇÖ>
+ | where Computer == <replace this with a computer name, e.g. 'acme-demo'>
| summarize dcount(LinksEstablished), dcount(LinksLive), dcount(LinksFailed), dcount(LinksTerminated) by bin(TimeGenerated, 1h)
| render timechart
```
A synthetic transaction connects to an application or service running on a machi
## SQL Server
-Use [SQL insights](../insights/sql-insights-overview.md) to monitor SQL Server running on your virtual machines.
+Use [SQL Insights (preview)](../insights/sql-insights-overview.md) to monitor SQL Server running on your virtual machines.
## Next steps
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
This article lists significant changes to Azure Monitor documentation.
**Updated articles**
-- [Enable SQL insights (preview)](insights/sql-insights-enable.md)
-- [Troubleshoot SQL insights (preview)](insights/sql-insights-troubleshoot.md)
+- [Enable SQL Insights (preview)](insights/sql-insights-enable.md)
+- [Troubleshoot SQL Insights (preview)](insights/sql-insights-troubleshoot.md)
### Logs
This article lists significant changes to Azure Monitor documentation.
- [Azure Data Explorer Insights](insights/data-explorer.md)
- [Agent Health solution in Azure Monitor](insights/solution-agenthealth.md)
- [Monitoring solutions in Azure Monitor](insights/solutions.md)
-- [Monitor your SQL deployments with SQL insights (preview)](insights/sql-insights-overview.md)
-- [Troubleshoot SQL insights (preview)](insights/sql-insights-troubleshoot.md)
+- [Monitor your SQL deployments with SQL Insights (preview)](insights/sql-insights-overview.md)
+- [Troubleshoot SQL Insights (preview)](insights/sql-insights-troubleshoot.md)
### Logs
This article lists significant changes to Azure Monitor documentation.
**Updated articles**
-- [Enable SQL insights (preview)](insights/sql-insights-enable.md)
+- [Enable SQL Insights (preview)](insights/sql-insights-enable.md)
### Logs
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
na Previously updated : 04/11/2022 Last updated : 04/20/2022
# Guidelines for Azure NetApp Files network planning
Azure NetApp Files volumes are designed to be contained in a special purpose sub
Azure NetApp Files standard network features are supported for the following regions:

* Australia Central
+* Australia Central 2
* East US 2
* France Central
* Germany West Central
azure-resource-manager Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/parameters.md
description: Describes how to define parameters in a Bicep file.
Previously updated : 02/03/2022 Last updated : 04/20/2022
# Parameters in Bicep
The following table describes the available decorators and how to use them.
| [description](#description) | all | string | Text that explains how to use the parameter. The description is displayed to users through the portal. |
| [maxLength](#length-constraints) | array, string | int | The maximum length for string and array parameters. The value is inclusive. |
| [maxValue](#integer-constraints) | int | int | The maximum value for the integer parameter. This value is inclusive. |
-| metadata | all | object | Custom properties to apply to the parameter. Can include a description property that is equivalent to the description decorator. |
+| [metadata](#metadata) | all | object | Custom properties to apply to the parameter. Can include a description property that is equivalent to the description decorator. |
| [minLength](#length-constraints) | array, string | int | The minimum length for string and array parameters. The value is inclusive. |
| [minValue](#integer-constraints) | int | int | The minimum value for the integer parameter. This value is inclusive. |
| [secure](#secure-parameters) | string, object | none | Marks the parameter as secure. The value for a secure parameter isn't saved to the deployment history and isn't logged. For more information, see [Secure strings and objects](data-types.md#secure-strings-and-objects). |
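Several of these decorators are commonly stacked on a single parameter. The following sketch is illustrative only; the parameter name and the length limits are assumptions, not taken from the article:

```bicep
// Illustrative only: combines length constraints with a description.
@minLength(3)
@maxLength(24)
@description('Name for the storage account. Must be globally unique.')
param storageAccountName string
```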
param month int
### Description
-To help users understand the value to provide, add a description to the parameter. When deploying the template through the portal, the description's text is automatically used as a tip for that parameter. Only add a description when the text provides more information than can be inferred from the parameter name.
+To help users understand the value to provide, add a description to the parameter. When a user deploys the template through the portal, the description's text is automatically used as a tip for that parameter. Only add a description when the text provides more information than can be inferred from the parameter name.
```bicep @description('Must be at least Standard_A3 to support 2 NICs.')
When you hover your cursor over **storageAccountName** in VSCode, you see the fo
Make sure the text is well-formatted Markdown. Otherwise the text won't be rendered correctly.
+### Metadata
+
+If you have custom properties that you want to apply to a parameter, add a metadata decorator. Within the metadata, define an object with the custom names and values. The object you define for the metadata can contain properties of any name and type.
+
+You might use this decorator to track information about the parameter that doesn't make sense to add to the [description](#description).
+
+```bicep
+@description('Configuration values that are applied when the application starts.')
+@metadata({
+ source: 'database'
+ contact: 'Web team'
+})
+param settings object
+```
+
## Use parameter

To reference the value for a parameter, use the parameter name. The following example uses a parameter value for a key vault name.
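The example referred to here isn't included in this excerpt. A minimal sketch of the idea, with the `vaultName` parameter name and the API version chosen purely for illustration:

```bicep
// Illustrative sketch: the parameter is referenced by name inside a resource declaration.
param vaultName string

resource keyVault 'Microsoft.KeyVault/vaults@2021-11-01-preview' existing = {
  name: vaultName
}

output vaultId string = keyVault.id
```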
azure-resource-manager Extension Resource Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/extension-resource-types.md
Title: Extension resource types
description: Lists the Azure resource types that are used to extend the capabilities of other resource types.
Previously updated : 04/18/2022 Last updated : 04/19/2022
# Resource types that extend capabilities of other resources
An extension resource is a resource that adds to another resource's capabilities
## Microsoft.Authorization

* accessReviewHistoryDefinitions
-* batchResourceCheckAccess
* denyAssignments
* eligibleChildResources
* locks
An extension resource is a resource that adds to another resource's capabilities
* blueprintAssignments
* blueprints
-## Microsoft.Capacity
-
-* listSkus
-
## Microsoft.ChangeAnalysis

* changes
An extension resource is a resource that adds to another resource's capabilities
* BenefitRecommendations
* BenefitUtilizationSummaries
* Budgets
-* CheckNameAvailability
* Dimensions
* Exports
* ExternalSubscriptions
* Forecast
* GenerateDetailedCostReport
* Insights
-* OperationResults
-* OperationStatus
* Pricesheets
* Publish
* Query
An extension resource is a resource that adds to another resource's capabilities
* networkManagerConnections
-## Microsoft.OperationalInsights
-
-* storageInsightConfigs
-
-## Microsoft.OperationsManagement
-
-* managementassociations
-
## Microsoft.PolicyInsights

* attestations
An extension resource is a resource that adds to another resource's capabilities
## Microsoft.Quota
-* operationsStatus
* quotaRequests
* quotas
* usages
An extension resource is a resource that adds to another resource's capabilities
## Microsoft.RecoveryServices

* backupProtectedItems
-* replicationEligibilityResults
## Microsoft.ResourceHealth
An extension resource is a resource that adds to another resource's capabilities
* bookmarks
* cases
* dataConnectors
-* dataConnectorsCheckRequirements
* enrichment
* entities
* entityQueryTemplates
* fileImports
* incidents
-* listrepositories
* metadata
* MitreCoverageRecords
* onboardingStates
An extension resource is a resource that adds to another resource's capabilities
* settings
* sourceControls
* threatIntelligence
-* watchlists
## Microsoft.SerialConsole
azure-resource-manager Resources Without Resource Group Limit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resources-without-resource-group-limit.md
Title: Resources without 800 count limit
description: Lists the Azure resource types that can have more than 800 instances in a resource group.
Previously updated : 04/18/2022 Last updated : 04/19/2022
# Resources not limited to 800 instances per resource group
Some resources have a limit on the number of instances per region. This limit is di
## Microsoft.ContainerRegistry

* registries/buildTasks
-* registries/buildTasks/listSourceRepositoryProperties
* registries/buildTasks/steps
-* registries/buildTasks/steps/listBuildArguments
* registries/eventGridFilters
* registries/replications
* registries/tasks
azure-resource-manager Deployment Complete Mode Deletion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-complete-mode-deletion.md
Title: Complete mode deletion
description: Shows how resource types handle complete mode deletion in Azure Resource Manager templates.
Previously updated : 02/18/2022 Last updated : 04/19/2022
# Deletion of Azure resources for complete mode deployments
The resources are listed by resource provider namespace. To match a resource pro
> [!NOTE]
> Always use the [what-if operation](deploy-what-if.md) before deploying a template in complete mode. What-if shows you which resources will be created, deleted, or modified. Use what-if to avoid unintentionally deleting resources.
-Jump to a resource provider namespace:
-> [!div class="op_single_selector"]
-> - [Microsoft.AAD](#microsoftaad)
-> - [Microsoft.Addons](#microsoftaddons)
-> - [Microsoft.ADHybridHealthService](#microsoftadhybridhealthservice)
-> - [Microsoft.Advisor](#microsoftadvisor)
-> - [Microsoft.AgFoodPlatform](#microsoftagfoodplatform)
-> - [Microsoft.AlertsManagement](#microsoftalertsmanagement)
-> - [Microsoft.AnalysisServices](#microsoftanalysisservices)
-> - [Microsoft.AnyBuild](#microsoftanybuild)
-> - [Microsoft.ApiManagement](#microsoftapimanagement)
-> - [Microsoft.AppAssessment](#microsoftappassessment)
-> - [Microsoft.AppConfiguration](#microsoftappconfiguration)
-> - [Microsoft.AppPlatform](#microsoftappplatform)
-> - [Microsoft.Attestation](#microsoftattestation)
-> - [Microsoft.Authorization](#microsoftauthorization)
-> - [Microsoft.Automanage](#microsoftautomanage)
-> - [Microsoft.Automation](#microsoftautomation)
-> - [Microsoft.AVS](#microsoftavs)
-> - [Microsoft.Azure.Geneva](#microsoftazuregeneva)
-> - [Microsoft.AzureActiveDirectory](#microsoftazureactivedirectory)
-> - [Microsoft.AzureArcData](#microsoftazurearcdata)
-> - [Microsoft.AzureCIS](#microsoftazurecis)
-> - [Microsoft.AzureData](#microsoftazuredata)
-> - [Microsoft.AzurePercept](#microsoftazurepercept)
-> - [Microsoft.AzureSphere](#microsoftazuresphere)
-> - [Microsoft.AzureStack](#microsoftazurestack)
-> - [Microsoft.AzureStackHCI](#microsoftazurestackhci)
-> - [Microsoft.BackupSolutions](#microsoftbackupsolutions)
-> - [Microsoft.BareMetalInfrastructure](#microsoftbaremetalinfrastructure)
-> - [Microsoft.Batch](#microsoftbatch)
-> - [Microsoft.Billing](#microsoftbilling)
-> - [Microsoft.BillingBenefits](#microsoftbillingbenefits)
-> - [Microsoft.Blockchain](#microsoftblockchain)
-> - [Microsoft.BlockchainTokens](#microsoftblockchaintokens)
-> - [Microsoft.Blueprint](#microsoftblueprint)
-> - [Microsoft.BotService](#microsoftbotservice)
-> - [Microsoft.Cache](#microsoftcache)
-> - [Microsoft.Capacity](#microsoftcapacity)
-> - [Microsoft.Cascade](#microsoftcascade)
-> - [Microsoft.Cdn](#microsoftcdn)
-> - [Microsoft.CertificateRegistration](#microsoftcertificateregistration)
-> - [Microsoft.ChangeAnalysis](#microsoftchangeanalysis)
-> - [Microsoft.Chaos](#microsoftchaos)
-> - [Microsoft.ClassicCompute](#microsoftclassiccompute)
-> - [Microsoft.ClassicInfrastructureMigrate](#microsoftclassicinfrastructuremigrate)
-> - [Microsoft.ClassicNetwork](#microsoftclassicnetwork)
-> - [Microsoft.ClassicStorage](#microsoftclassicstorage)
-> - [Microsoft.ClusterStor](#microsoftclusterstor)
-> - [Microsoft.CodeSigning](#microsoftcodesigning)
-> - [Microsoft.Codespaces](#microsoftcodespaces)
-> - [Microsoft.CognitiveServices](#microsoftcognitiveservices)
-> - [Microsoft.Compute](#microsoftcompute)
-> - [Microsoft.Commerce](#microsoftcommerce)
-> - [Microsoft.Communication](#microsoftcommunication)
-> - [Microsoft.ConfidentialLedger](#microsoftconfidentialledger)
-> - [Microsoft.ConnectedCache](#microsoftconnectedcache)
-> - [Microsoft.ConnectedVehicle](#microsoftconnectedvehicle)
-> - [Microsoft.ConnectedVMwarevSphere](#microsoftconnectedvmwarevsphere)
-> - [Microsoft.Consumption](#microsoftconsumption)
-> - [Microsoft.ContainerInstance](#microsoftcontainerinstance)
-> - [Microsoft.ContainerRegistry](#microsoftcontainerregistry)
-> - [Microsoft.ContainerService](#microsoftcontainerservice)
-> - [Microsoft.CostManagement](#microsoftcostmanagement)
-> - [Microsoft.CustomerLockbox](#microsoftcustomerlockbox)
-> - [Microsoft.CustomProviders](#microsoftcustomproviders)
-> - [Microsoft.D365CustomerInsights](#microsoftd365customerinsights)
-> - [Microsoft.Dashboard](#microsoftdashboard)
-> - [Microsoft.DataBox](#microsoftdatabox)
-> - [Microsoft.DataBoxEdge](#microsoftdataboxedge)
-> - [Microsoft.Databricks](#microsoftdatabricks)
-> - [Microsoft.DataCatalog](#microsoftdatacatalog)
-> - [Microsoft.DataFactory](#microsoftdatafactory)
-> - [Microsoft.DataLakeAnalytics](#microsoftdatalakeanalytics)
-> - [Microsoft.DataLakeStore](#microsoftdatalakestore)
-> - [Microsoft.DataMigration](#microsoftdatamigration)
-> - [Microsoft.DataProtection](#microsoftdataprotection)
-> - [Microsoft.DataShare](#microsoftdatashare)
-> - [Microsoft.DBforMariaDB](#microsoftdbformariadb)
-> - [Microsoft.DBforMySQL](#microsoftdbformysql)
-> - [Microsoft.DBforPostgreSQL](#microsoftdbforpostgresql)
-> - [Microsoft.DelegatedNetwork](#microsoftdelegatednetwork)
-> - [Microsoft.DeploymentManager](#microsoftdeploymentmanager)
-> - [Microsoft.DesktopVirtualization](#microsoftdesktopvirtualization)
-> - [Microsoft.DevAI](#microsoftdevai)
-> - [Microsoft.Devices](#microsoftdevices)
-> - [Microsoft.DeviceUpdate](#microsoftdeviceupdate)
-> - [Microsoft.DevOps](#microsoftdevops)
-> - [Microsoft.DevSpaces](#microsoftdevspaces)
-> - [Microsoft.DevTestLab](#microsoftdevtestlab)
-> - [Microsoft.Diagnostics](#microsoftdiagnostics)
-> - [Microsoft.DigitalTwins](#microsoftdigitaltwins)
-> - [Microsoft.DocumentDB](#microsoftdocumentdb)
-> - [Microsoft.DomainRegistration](#microsoftdomainregistration)
-> - [Microsoft.DynamicsLcs](#microsoftdynamicslcs)
-> - [Microsoft.EdgeOrder](#microsoftedgeorder)
-> - [Microsoft.EnterpriseKnowledgeGraph](#microsoftenterpriseknowledgegraph)
-> - [Microsoft.EventGrid](#microsofteventgrid)
-> - [Microsoft.EventHub](#microsofteventhub)
-> - [Microsoft.Experimentation](#microsoftexperimentation)
-> - [Microsoft.Falcon](#microsoftfalcon)
-> - [Microsoft.Features](#microsoftfeatures)
-> - [Microsoft.Fidalgo](#microsoftfidalgo)
-> - [Microsoft.FluidRelay](#microsoftfluidrelay)
-> - [Microsoft.Gallery](#microsoftgallery)
-> - [Microsoft.Genomics](#microsoftgenomics)
-> - [Microsoft.Graph](#microsoftgraph)
-> - [Microsoft.GuestConfiguration](#microsoftguestconfiguration)
-> - [Microsoft.HanaOnAzure](#microsofthanaonazure)
-> - [Microsoft.HardwareSecurityModules](#microsofthardwaresecuritymodules)
-> - [Microsoft.HDInsight](#microsofthdinsight)
-> - [Microsoft.HealthBot](#microsofthealthbot)
-> - [Microsoft.HealthcareApis](#microsofthealthcareapis)
-> - [Microsoft.HpcWorkbench](#microsofthpcworkbench)
-> - [Microsoft.HybridCompute](#microsofthybridcompute)
-> - [Microsoft.HybridConnectivity](#microsofthybridconnectivity)
-> - [Microsoft.HybridContainerService](#microsofthybridcontainerservice)
-> - [Microsoft.HybridData](#microsofthybriddata)
-> - [Microsoft.HybridNetwork](#microsofthybridnetwork)
-> - [Microsoft.Hydra](#microsofthydra)
-> - [Microsoft.ImportExport](#microsoftimportexport)
-> - [Microsoft.Insights](#microsoftinsights)
-> - [Microsoft.Intune](#microsoftintune)
-> - [Microsoft.IoTCentral](#microsoftiotcentral)
-> - [Microsoft.IoTFirmwareDefense](#microsoftiotfirmwaredefense)
-> - [Microsoft.IoTSecurity](#microsoftiotsecurity)
-> - [Microsoft.IoTSpaces](#microsoftiotspaces)
-> - [Microsoft.KeyVault](#microsoftkeyvault)
-> - [Microsoft.Kubernetes](#microsoftkubernetes)
-> - [Microsoft.KubernetesConfiguration](#microsoftkubernetesconfiguration)
-> - [Microsoft.Kusto](#microsoftkusto)
-> - [Microsoft.LabServices](#microsoftlabservices)
-> - [Microsoft.LocationServices](#microsoftlocationservices)
-> - [Microsoft.Logic](#microsoftlogic)
-> - [Microsoft.MachineLearning](#microsoftmachinelearning)
-> - [Microsoft.MachineLearningServices](#microsoftmachinelearningservices)
-> - [Microsoft.Maintenance](#microsoftmaintenance)
-> - [Microsoft.ManagedIdentity](#microsoftmanagedidentity)
-> - [Microsoft.ManagedServices](#microsoftmanagedservices)
-> - [Microsoft.Management](#microsoftmanagement)
-> - [Microsoft.Maps](#microsoftmaps)
-> - [Microsoft.Marketplace](#microsoftmarketplace)
-> - [Microsoft.MarketplaceApps](#microsoftmarketplaceapps)
-> - [Microsoft.MarketplaceNotifications](#microsoftmarketplacenotifications)
-> - [Microsoft.MarketplaceOrdering](#microsoftmarketplaceordering)
-> - [Microsoft.Media](#microsoftmedia)
-> - [Microsoft.Migrate](#microsoftmigrate)
-> - [Microsoft.MixedReality](#microsoftmixedreality)
-> - [Microsoft.MobileNetwork](#microsoftmobilenetwork)
-> - [Microsoft.Monitor](#microsoftmonitor)
-> - [Microsoft.NetApp](#microsoftnetapp)
-> - [Microsoft.NetworkFunction](#microsoftnetworkfunction)
-> - [Microsoft.Network](#microsoftnetwork)
-> - [Microsoft.Notebooks](#microsoftnotebooks)
-> - [Microsoft.NotificationHubs](#microsoftnotificationhubs)
-> - [Microsoft.ObjectStore](#microsoftobjectstore)
-> - [Microsoft.OffAzure](#microsoftoffazure)
-> - [Microsoft.OpenEnergyPlatform](#microsoftopenenergyplatform)
-> - [Microsoft.OperationalInsights](#microsoftoperationalinsights)
-> - [Microsoft.OperationsManagement](#microsoftoperationsmanagement)
-> - [Microsoft.Peering](#microsoftpeering)
-> - [Microsoft.PlayFab](#microsoftplayfab)
-> - [Microsoft.PolicyInsights](#microsoftpolicyinsights)
-> - [Microsoft.Portal](#microsoftportal)
-> - [Microsoft.PowerBI](#microsoftpowerbi)
-> - [Microsoft.PowerBIDedicated](#microsoftpowerbidedicated)
-> - [Microsoft.PowerPlatform](#microsoftpowerplatform)
-> - [Microsoft.ProjectBabylon](#microsoftprojectbabylon)
-> - [Microsoft.ProviderHub](#microsoftproviderhub)
-> - [Microsoft.Purview](#microsoftpurview)
-> - [Microsoft.Quantum](#microsoftquantum)
-> - [Microsoft.Quota](#microsoftquota)
-> - [Microsoft.RecommendationsService](#microsoftrecommendationsservice)
-> - [Microsoft.RecoveryServices](#microsoftrecoveryservices)
-> - [Microsoft.RedHatOpenShift](#microsoftredhatopenshift)
-> - [Microsoft.Relay](#microsoftrelay)
-> - [Microsoft.ResourceConnector](#microsoftresourceconnector)
-> - [Microsoft.ResourceGraph](#microsoftresourcegraph)
-> - [Microsoft.ResourceHealth](#microsoftresourcehealth)
-> - [Microsoft.Resources](#microsoftresources)
-> - [Microsoft.SaaS](#microsoftsaas)
-> - [Microsoft.Scheduler](#microsoftscheduler)
-> - [Microsoft.Scom](#microsoftscom)
-> - [Microsoft.ScVmm](#microsoftscvmm)
-> - [Microsoft.Search](#microsoftsearch)
-> - [Microsoft.Security](#microsoftsecurity)
-> - [Microsoft.SecurityGraph](#microsoftsecuritygraph)
-> - [Microsoft.SecurityInsights](#microsoftsecurityinsights)
-> - [Microsoft.SerialConsole](#microsoftserialconsole)
-> - [Microsoft.ServiceBus](#microsoftservicebus)
-> - [Microsoft.ServiceFabric](#microsoftservicefabric)
-> - [Microsoft.ServiceFabricMesh](#microsoftservicefabricmesh)
-> - [Microsoft.ServiceLinker](#microsoftservicelinker)
-> - [Microsoft.Services](#microsoftservices)
-> - [Microsoft.SignalRService](#microsoftsignalrservice)
-> - [Microsoft.Singularity](#microsoftsingularity)
-> - [Microsoft.SoftwarePlan](#microsoftsoftwareplan)
-> - [Microsoft.Solutions](#microsoftsolutions)
-> - [Microsoft.SQL](#microsoftsql)
-> - [Microsoft.SqlVirtualMachine](#microsoftsqlvirtualmachine)
-> - [Microsoft.Storage](#microsoftstorage)
-> - [Microsoft.StorageCache](#microsoftstoragecache)
-> - [Microsoft.StorageReplication](#microsoftstoragereplication)
-> - [Microsoft.StorageSync](#microsoftstoragesync)
-> - [Microsoft.StorSimple](#microsoftstorsimple)
-> - [Microsoft.StreamAnalytics](#microsoftstreamanalytics)
-> - [Microsoft.Subscription](#microsoftsubscription)
-> - [Microsoft.Synapse](#microsoftsynapse)
-> - [Microsoft.TestBase](#microsofttestbase)
-> - [Microsoft.TimeSeriesInsights](#microsofttimeseriesinsights)
-> - [Microsoft.VideoIndexer](#microsoftvideoindexer)
-> - [Microsoft.VirtualMachineImages](#microsoftvirtualmachineimages)
-> - [Microsoft.VMware](#microsoftvmware)
-> - [Microsoft.VMwareCloudSimple](#microsoftvmwarecloudsimple)
-> - [Microsoft.VSOnline](#microsoftvsonline)
-> - [Microsoft.Web](#microsoftweb)
-> - [Microsoft.WindowsDefenderATP](#microsoftwindowsdefenderatp)
-> - [Microsoft.WindowsESU](#microsoftwindowsesu)
-> - [Microsoft.WindowsIoT](#microsoftwindowsiot)
-> - [Microsoft.WorkloadBuilder](#microsoftworkloadbuilder)
-> - [Microsoft.WorkloadMonitor](#microsoftworkloadmonitor)
-> - [Microsoft.Workloads](#microsoftworkloads)
## Microsoft.AAD
Jump to a resource provider namespace:
> | DomainServices | Yes |
> | DomainServices / oucontainer | No |
-## Microsoft.Addons
+## microsoft.aadiam
> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
-> | supportProviders | No |
+> | azureADMetrics | Yes |
+> | diagnosticSettings | No |
+> | diagnosticSettingsCategories | No |
+> | privateLinkForAzureAD | Yes |
+> | tenants | Yes |
## Microsoft.ADHybridHealthService
Jump to a resource provider namespace:
> | - | -- |
> | actionRules | Yes |
> | alerts | No |
-> | alertsList | No |
> | alertsMetaData | No |
-> | alertsSummary | No |
-> | alertsSummaryList | No |
> | migrateFromSmartDetection | No |
> | prometheusRuleGroups | Yes |
-> | resourceHealthAlertRules | Yes |
> | smartDetectorAlertRules | Yes |
> | smartGroups | No |
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
-> | clusters | No |
+> | clusters | Yes |
## Microsoft.ApiManagement
Jump to a resource provider namespace:
> | service / eventGridFilters | No |
> | validateServiceName | No |
+## Microsoft.App
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | containerApps | Yes |
+> | managedEnvironments | Yes |
+> | managedEnvironments / certificates | Yes |
+
## Microsoft.AppAssessment

> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
-> | migrateProjects | No |
+> | migrateProjects | Yes |
> | migrateProjects / assessments | No |
> | migrateProjects / assessments / assessedApplications | No |
> | migrateProjects / assessments / assessedApplications / machines | No |
Jump to a resource provider namespace:
> | configurationStores | Yes |
> | configurationStores / eventGridFilters | No |
> | configurationStores / keyValues | No |
+> | configurationStores / replicas | No |
> | deletedConfigurationStores | No |

## Microsoft.AppPlatform
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
-> | accessReviewScheduleDefinitions | No |
-> | accessReviewScheduleSettings | No |
-> | batchResourceCheckAccess | No |
+> | accessReviewHistoryDefinitions | No |
> | classicAdministrators | No |
> | dataAliases | No |
> | dataPolicyManifests | No |
Jump to a resource provider namespace:
> | diagnosticSettingsCategories | No |
> | elevateAccess | No |
> | eligibleChildResources | No |
-> | findOrphanRoleAssignments | No |
> | locks | No |
-> | permissions | No |
> | policyAssignments | No |
> | policyDefinitions | No |
> | policyExemptions | No |
> | policySetDefinitions | No |
> | privateLinkAssociations | No |
-> | providerOperations | No |
> | resourceManagementPrivateLinks | Yes |
> | roleAssignmentApprovals | No |
> | roleAssignments | No |
> | roleAssignmentScheduleInstances | No |
> | roleAssignmentScheduleRequests | No |
> | roleAssignmentSchedules | No |
-> | roleAssignmentsUsageMetrics | No |
> | roleDefinitions | No |
> | roleEligibilityScheduleInstances | No |
> | roleEligibilityScheduleRequests | No |
Jump to a resource provider namespace:
> | configurationProfilePreferences | Yes |
> | configurationProfiles | Yes |
> | configurationProfiles / versions | Yes |
+> | patchJobConfigurations | Yes |
+> | patchJobConfigurations / patchJobs | No |
+> | patchTiers | Yes |
+> | servicePrincipals | No |
## Microsoft.Automation
Jump to a resource provider namespace:
> | Resource type | Complete mode deletion |
> | - | -- |
> | automationAccounts | Yes |
+> | automationAccounts / agentRegistrationInformation | No |
> | automationAccounts / configurations | Yes |
> | automationAccounts / hybridRunbookWorkerGroups | No |
> | automationAccounts / hybridRunbookWorkerGroups / hybridRunbookWorkers | No |
Jump to a resource provider namespace:
> | automationAccounts / privateEndpointConnections | No |
> | automationAccounts / privateLinkResources | No |
> | automationAccounts / runbooks | Yes |
+> | automationAccounts / softwareUpdateConfigurationMachineRuns | No |
+> | automationAccounts / softwareUpdateConfigurationRuns | No |
> | automationAccounts / softwareUpdateConfigurations | No |
> | automationAccounts / webhooks | No |
+> | deletedAutomationAccounts | No |
+
+## Microsoft.AutonomousDevelopmentPlatform
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | accounts | Yes |
+> | accounts / datapools | No |
+
+## Microsoft.AutonomousSystems
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | workspaces | Yes |
+> | workspaces / validateCreateRequest | No |
## Microsoft.AVS
Jump to a resource provider namespace:
> | privateClouds / workloadNetworks / virtualMachines | No |
> | privateClouds / workloadNetworks / vmGroups | No |
-## Microsoft.Azure.Geneva
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Complete mode deletion |
-> | - | -- |
-> | environments | No |
-> | environments / accounts | No |
-> | environments / accounts / namespaces | No |
-> | environments / accounts / namespaces / configurations | No |
-
## Microsoft.AzureActiveDirectory

> [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
-> | DataControllers | No |
-> | PostgresInstances | No |
-> | SqlManagedInstances | No |
-> | SqlServerInstances | No |
+> | DataControllers | Yes |
+> | DataControllers / ActiveDirectoryConnectors | No |
+> | PostgresInstances | Yes |
+> | sqlManagedInstances | Yes |
+> | SqlServerInstances | Yes |
## Microsoft.AzureCIS

> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
-> | autopilotEnvironments | No |
-> | dstsServiceAccounts | No |
-> | dstsServiceClientIdentities | No |
+> | autopilotEnvironments | Yes |
+> | dstsServiceAccounts | Yes |
+> | dstsServiceClientIdentities | Yes |
## Microsoft.AzureData
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
-> | accounts | No |
+> | accounts | Yes |
> | accounts / devices | No |
> | accounts / devices / sensors | No |
> | accounts / solutioninstances | No |
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
-> | catalogs | No |
+> | catalogs | Yes |
> | catalogs / certificates | No |
> | catalogs / deployments | No |
> | catalogs / devices | No |
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
-> | clusters | No |
+> | clusters | Yes |
> | clusters / arcSettings | No |
> | clusters / arcSettings / extensions | No |
-> | galleryimages | No |
-> | networkinterfaces | No |
-> | virtualharddisks | No |
-> | virtualmachines | No |
-> | virtualmachines / extensions | No |
+> | galleryimages | Yes |
+> | networkinterfaces | Yes |
+> | virtualharddisks | Yes |
+> | virtualmachines | Yes |
+> | virtualmachines / extensions | Yes |
> | virtualmachines / hybrididentitymetadata | No |
-> | virtualnetworks | No |
+> | virtualnetworks | Yes |
## Microsoft.BackupSolutions
Jump to a resource provider namespace:
> | billingAccounts / billingProfiles / invoiceSections / transactions | No |
> | billingAccounts / billingProfiles / invoiceSections / transfers | No |
> | billingAccounts / billingProfiles / invoiceSections / validateDeleteInvoiceSectionEligibility | No |
-> | billingAccounts / BillingProfiles / patchOperations | No |
> | billingAccounts / billingProfiles / paymentMethodLinks | No |
> | billingAccounts / billingProfiles / paymentMethods | No |
> | billingAccounts / billingProfiles / policies | No |
> | billingAccounts / billingProfiles / pricesheet | No |
-> | billingAccounts / billingProfiles / pricesheetDownloadOperations | No |
> | billingAccounts / billingProfiles / products | No |
> | billingAccounts / billingProfiles / reservations | No |
> | billingAccounts / billingProfiles / transactions | No |
Jump to a resource provider namespace:
> | billingAccounts / billingSubscriptions / elevateRole | No |
> | billingAccounts / billingSubscriptions / invoices | No |
> | billingAccounts / createBillingRoleAssignment | No |
-> | billingAccounts / createInvoiceSectionOperations | No |
> | billingAccounts / customers | No |
> | billingAccounts / customers / billingPermissions | No |
> | billingAccounts / customers / billingSubscriptions | No |
Jump to a resource provider namespace:
> | billingAccounts / invoices / transactions | No |
> | billingAccounts / invoices / transactionSummary | No |
> | billingAccounts / invoiceSections | No |
-> | billingAccounts / invoiceSections / billingSubscriptionMoveOperations | No |
> | billingAccounts / invoiceSections / billingSubscriptions | No |
> | billingAccounts / invoiceSections / billingSubscriptions / transfer | No |
> | billingAccounts / invoiceSections / elevate | No |
> | billingAccounts / invoiceSections / initiateTransfer | No |
-> | billingAccounts / invoiceSections / patchOperations | No |
-> | billingAccounts / invoiceSections / productMoveOperations | No |
> | billingAccounts / invoiceSections / products | No |
> | billingAccounts / invoiceSections / products / transfer | No |
> | billingAccounts / invoiceSections / products / updateAutoRenew | No |
> | billingAccounts / invoiceSections / transactions | No |
> | billingAccounts / invoiceSections / transfers | No |
> | billingAccounts / lineOfCredit | No |
-> | billingAccounts / patchOperations | No |
> | billingAccounts / payableOverage | No |
> | billingAccounts / paymentMethods | No |
> | billingAccounts / payNow | No |
> | transfers | No |
> | transfers / acceptTransfer | No |
> | transfers / declineTransfer | No |
-> | transfers / operationStatus | No |
> | transfers / validateTransfer | No |
> | validateAddress | No |
> | savingsPlans | No |
> | validate | No |
-## Microsoft.Blockchain
+## Microsoft.Bing
> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
-> | blockchainMembers | Yes |
+> | accounts | Yes |
+> | accounts / usages | No |
+> | registeredSubscriptions | No |
## Microsoft.BlockchainTokens
> | Resource type | Complete mode deletion |
> | - | -- |
> | blueprintAssignments | No |
-> | blueprintAssignments / assignmentOperations | No |
-> | blueprintAssignments / operations | No |
> | blueprints | No |
> | blueprints / artifacts | No |
> | blueprints / versions | No |
> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
-> | sites | No |
+> | sites | Yes |
## Microsoft.Cdn
> | profiles / secrets | No |
> | profiles / securitypolicies | No |
> | validateProbe | No |
+> | validateSecret | No |
## Microsoft.CertificateRegistration
> | changeSnapshots | No |
> | computeChanges | No |
> | profile | No |
-> | resourceChanges | No |
## Microsoft.Chaos
> | storageAccounts / vmImages | No |
> | vmImages | No |
-## Microsoft.ClusterStor
+## Microsoft.CloudTest
> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
-> | nodes | Yes |
+> | accounts | Yes |
+> | hostedpools | Yes |
+> | images | Yes |
+> | pools | Yes |
## Microsoft.CodeSigning

> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
-> | codeSigningAccounts | No |
+> | codeSigningAccounts | Yes |
> | codeSigningAccounts / certificateProfiles | No |

## Microsoft.Codespaces
> | accounts / privateLinkResources | No |
> | deletedAccounts | No |
+## Microsoft.Commerce
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | RateCard | No |
+> | UsageAggregates | No |
+
+## Microsoft.Communication
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | CommunicationServices | Yes |
+> | CommunicationServices / eventGridFilters | No |
+> | EmailServices | Yes |
+> | EmailServices / Domains | Yes |
+> | registeredSubscriptions | No |
+
## Microsoft.Compute

> [!div class="mx-tableFixed"]
> | diskEncryptionSets | Yes |
> | disks | Yes |
> | galleries | Yes |
-> | galleries / applications | No |
-> | galleries / applications / versions | No |
+> | galleries / applications | Yes |
+> | galleries / applications / versions | Yes |
> | galleries / images | Yes |
> | galleries / images / versions | Yes |
> | hostGroups | Yes |
> | restorePointCollections / restorePoints | No |
> | restorePointCollections / restorePoints / diskRestorePoints | No |
> | sharedVMExtensions | Yes |
-> | sharedVMExtensions / versions | No |
+> | sharedVMExtensions / versions | Yes |
> | sharedVMImages | Yes |
-> | sharedVMImages / versions | No |
+> | sharedVMImages / versions | Yes |
> | snapshots | Yes |
> | sshPublicKeys | Yes |
> | virtualMachines | Yes |
> | virtualMachineScaleSets / virtualMachines / extensions | No |
> | virtualMachineScaleSets / virtualMachines / networkInterfaces | No |
-## Microsoft.Commerce
+## Microsoft.ConfidentialLedger
> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
-> | RateCard | No |
-> | UsageAggregates | No |
+> | Ledgers | Yes |
-## Microsoft.Communication
+## Microsoft.Confluent
> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
-> | CommunicationServices | No |
-> | CommunicationServices / eventGridFilters | No |
-> | EmailServices | No |
-> | EmailServices / Domains | No |
-> | registeredSubscriptions | No |
+> | agreements | No |
+> | organizations | Yes |
+> | validations | No |
-## Microsoft.ConfidentialLedger
+## Microsoft.ConnectedCache
> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
-> | Ledgers | No |
+> | CacheNodes | Yes |
+> | enterpriseCustomers | Yes |
-## Microsoft.ConnectedCache
+## microsoft.connectedopenstack
> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
-> | CacheNodes | No |
-> | enterpriseCustomers | No |
+> | flavors | Yes |
+> | heatStacks | Yes |
+> | heatStackTemplates | Yes |
+> | images | Yes |
+> | keypairs | Yes |
+> | networkPorts | Yes |
+> | networks | Yes |
+> | openStackIdentities | Yes |
+> | securityGroupRules | Yes |
+> | securityGroups | Yes |
+> | subnets | Yes |
+> | virtualMachines | Yes |
+> | volumes | Yes |
+> | volumeSnapshots | Yes |
+> | volumeTypes | Yes |
## Microsoft.ConnectedVehicle

> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
-> | platformAccounts | No |
+> | platformAccounts | Yes |
> | registeredSubscriptions | No |

## Microsoft.ConnectedVMwarevSphere
> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
-> | Clusters | No |
-> | Datastores | No |
-> | Hosts | No |
-> | ResourcePools | No |
-> | VCenters | No |
+> | Clusters | Yes |
+> | Datastores | Yes |
+> | Hosts | Yes |
+> | ResourcePools | Yes |
+> | VCenters | Yes |
> | VCenters / InventoryItems | No |
-> | VirtualMachines | No |
+> | VirtualMachines | Yes |
> | VirtualMachines / Extensions | Yes |
> | VirtualMachines / GuestAgents | No |
> | VirtualMachines / HybridIdentityMetadata | No |
-> | VirtualMachineTemplates | No |
-> | VirtualNetworks | No |
+> | VirtualMachineTemplates | Yes |
+> | VirtualNetworks | Yes |
## Microsoft.Consumption
> | ReservationRecommendations | No |
> | ReservationSummaries | No |
> | ReservationTransactions | No |
-> | Tags | No |
-> | tenants | No |
-> | Terms | No |
-> | UsageDetails | No |
## Microsoft.ContainerInstance
> | containerServices | Yes |
> | managedClusters | Yes |
> | ManagedClusters / eventGridFilters | No |
+> | managedclustersnapshots | Yes |
> | openShiftManagedClusters | Yes |
> | snapshots | Yes |
> | Resource type | Complete mode deletion |
> | - | -- |
> | Alerts | No |
+> | BenefitRecommendations | No |
> | BenefitUtilizationSummaries | No |
> | BillingAccounts | No |
> | Budgets | No |
-> | calculatePrice | No |
> | CloudConnectors | No |
> | Connectors | Yes |
-> | costAllocationRules | No |
> | Departments | No |
> | Dimensions | No |
> | EnrollmentAccounts | No |
> | ExternalSubscriptions / Dimensions | No |
> | ExternalSubscriptions / Forecast | No |
> | ExternalSubscriptions / Query | No |
+> | fetchMarketplacePrices | No |
> | fetchPrices | No |
> | Forecast | No |
> | GenerateDetailedCostReport | No |
-> | GenerateReservationDetailsReport | No |
> | Insights | No |
+> | Pricesheets | No |
+> | Publish | No |
> | Query | No |
> | register | No |
> | Reportconfigs | No |
> | Reports | No |
> | ScheduledActions | No |
> | Settings | No |
-> | showbackRules | No |
> | Views | No |

## Microsoft.CustomerLockbox
> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
-> | grafana | No |
+> | grafana | Yes |
## Microsoft.DataBox
> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
+> | accessConnectors | Yes |
> | workspaces | Yes |
> | workspaces / dbWorkspaces | No |
> | workspaces / virtualNetworkPeerings | No |
> | Resource type | Complete mode deletion |
> | - | -- |
> | catalogs | Yes |
+> | datacatalogs | Yes |
+
+## Microsoft.DataCollaboration
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | workspaces | Yes |
+> | workspaces / constrainedResources | No |
+> | workspaces / contracts | No |
+> | workspaces / contracts / entitlements | No |
+> | workspaces / dataAssets | No |
+> | workspaces / dataAssets / dataSets | No |
+> | workspaces / pipelineRuns | No |
+> | workspaces / pipelineRuns / pipelineStepRuns | No |
+> | workspaces / pipelines | No |
+> | workspaces / pipelines / pipelineSteps | No |
+> | workspaces / pipelines / runs | No |
+> | workspaces / proposals | No |
+> | workspaces / proposals / dataAssetReferences | No |
+> | workspaces / proposals / entitlements | No |
+> | workspaces / proposals / entitlements / constraints | No |
+> | workspaces / proposals / entitlements / policies | No |
+> | workspaces / proposals / invitations | No |
+> | workspaces / proposals / scriptReferences | No |
+> | workspaces / resourceReferences | No |
+> | workspaces / scripts | No |
+> | workspaces / scripts / scriptrevisions | No |
+
+## Microsoft.Datadog
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | agreements | No |
+> | monitors | Yes |
+> | monitors / getDefaultKey | No |
+> | monitors / refreshSetPasswordLink | No |
+> | monitors / setDefaultKey | No |
+> | monitors / singleSignOnConfigurations | No |
+> | monitors / tagRules | No |
+> | registeredSubscriptions | No |
## Microsoft.DataFactory
> | DatabaseMigrations | No |
> | services | Yes |
> | services / projects | Yes |
+> | slots | Yes |
> | SqlMigrationServices | Yes |

## Microsoft.DataProtection
> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
+> | backupInstances | No |
> | BackupVaults | Yes |
> | ResourceGuards | Yes |
+## Microsoft.DataReplication
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | replicationFabrics | Yes |
+> | replicationVaults | Yes |
+
## Microsoft.DataShare

> [!div class="mx-tableFixed"]
> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
-> | instances | No |
-> | instances / experiments | No |
-> | instances / sandboxes | No |
-> | instances / sandboxes / experiments | No |
+> | instances | Yes |
+> | instances / experiments | Yes |
+> | instances / sandboxes | Yes |
+> | instances / sandboxes / experiments | Yes |
## Microsoft.Devices
> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
-> | accounts | No |
-> | accounts / instances | No |
+> | accounts | Yes |
+> | accounts / instances | Yes |
> | accounts / privateEndpointConnectionProxies | No |
> | accounts / privateEndpointConnections | No |
> | accounts / privateLinkResources | No |
> | - | -- |
> | pipelines | Yes |
-## Microsoft.DevSpaces
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Complete mode deletion |
-> | - | -- |
-> | controllers | Yes |
-
## Microsoft.DevTestLab

> [!div class="mx-tableFixed"]
> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
-> | AzureKB | No |
-> | InsightDiagnostics | No |
+> | apollo | No |
+> | azureKB | No |
+> | insights | No |
> | solutions | No |

## Microsoft.DigitalTwins
> | topLevelDomains | No |
> | validateDomainRegistrationInformation | No |
-## Microsoft.DynamicsLcs
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Complete mode deletion |
-> | - | -- |
-> | lcsprojects | No |
-> | lcsprojects / clouddeployments | No |
-> | lcsprojects / connectors | No |
-
## Microsoft.EdgeOrder

> [!div class="mx-tableFixed"]
> | orders | No |
> | productFamiliesMetadata | No |
-## Microsoft.EnterpriseKnowledgeGraph
+## Microsoft.Elastic
> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
-> | services | Yes |
+> | monitors | Yes |
+> | monitors / tagRules | No |
## Microsoft.EventGrid
> | - | -- |
> | domains | Yes |
> | domains / topics | No |
-> | eventSubscriptions | No |
-> | extensionTopics | No |
+> | partnerConfigurations | Yes |
> | partnerDestinations | Yes |
> | partnerNamespaces | Yes |
> | partnerNamespaces / channels | No |
> | systemTopics / eventSubscriptions | No |
> | topics | Yes |
> | topicTypes | No |
+> | verifiedPartners | No |
## Microsoft.EventHub
> | - | -- |
> | clusters | Yes |
> | namespaces | Yes |
+> | namespaces / applicationGroups | No |
> | namespaces / authorizationrules | No |
> | namespaces / disasterrecoveryconfigs | No |
> | namespaces / eventhubs | No |
> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
-> | devcenters | No |
+> | devcenters | Yes |
+> | devcenters / attachednetworks | No |
> | devcenters / catalogs | No |
> | devcenters / catalogs / items | No |
+> | devcenters / devboxdefinitions | Yes |
> | devcenters / environmentTypes | No |
+> | devcenters / galleries | No |
+> | devcenters / galleries / images | No |
+> | devcenters / galleries / images / versions | No |
+> | devcenters / images | No |
> | devcenters / mappings | No |
-> | machinedefinitions | No |
-> | networksettings | No |
-> | networksettings / healthchecks | No |
-> | projects | No |
+> | machinedefinitions | Yes |
+> | networksettings | Yes |
+> | projects | Yes |
+> | projects / attachednetworks | No |
> | projects / catalogItems | No |
-> | projects / environments | No |
+> | projects / devboxdefinitions | No |
+> | projects / environments | Yes |
> | projects / environments / deployments | No |
> | projects / environmentTypes | No |
-> | projects / pools | No |
+> | projects / pools | Yes |
## Microsoft.FluidRelay

> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
-> | fluidRelayServers | No |
+> | fluidRelayServers | Yes |
> | fluidRelayServers / fluidRelayContainers | No |
-## Microsoft.Gallery
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Complete mode deletion |
-> | - | -- |
-> | enroll | No |
-> | galleryitems | No |
-> | generateartifactaccessuri | No |
-> | myareas | No |
-> | myareas / areas | No |
-> | myareas / areas / areas | No |
-> | myareas / areas / areas / galleryitems | No |
-> | myareas / areas / galleryitems | No |
-> | myareas / galleryitems | No |
-> | register | No |
-> | resources | No |
-> | retrieveresourcesbyid | No |
-
-## Microsoft.Genomics
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Complete mode deletion |
-> | - | -- |
-> | accounts | Yes |
-
-## Microsoft.Graph
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Complete mode deletion |
-> | - | -- |
-> | AzureAdApplication | No |
-
## Microsoft.GuestConfiguration

> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
-> | autoManagedAccounts | Yes |
-> | autoManagedVmConfigurationProfiles | Yes |
-> | configurationProfileAssignments | No |
> | guestConfigurationAssignments | No |
-> | software | No |
-> | softwareUpdateProfile | No |
-> | softwareUpdates | No |
## Microsoft.HanaOnAzure
> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
-> | healthBots | No |
+> | healthBots | Yes |
## Microsoft.HealthcareApis
> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
-> | instances | No |
-> | instances / chambers | No |
-> | instances / chambers / accessProfiles | No |
-> | instances / chambers / workloads | No |
-> | instances / consortiums | No |
+> | instances | Yes |
+> | instances / chambers | Yes |
+> | instances / chambers / accessProfiles | Yes |
+> | instances / chambers / workloads | Yes |
+> | instances / consortiums | Yes |
## Microsoft.HybridCompute
> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
-> | provisionedClusters | No |
-> | provisionedClusters / agentPools | No |
+> | provisionedClusters | Yes |
+> | provisionedClusters / agentPools | Yes |
> | provisionedClusters / hybridIdentityMetadata | No |

## Microsoft.HybridData
> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
-> | devices | No |
-> | networkFunctions | No |
+> | devices | Yes |
+> | networkFunctions | Yes |
> | networkFunctionVendors | No |
> | registeredSubscriptions | No |
> | vendors | No |
-> | vendors / vendorSkus | No |
-> | vendors / vendorSkus / previewSubscriptions | No |
-## Microsoft.Hydra
+## Microsoft.ImportExport
> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
-> | components | Yes |
-> | networkScopes | Yes |
+> | jobs | Yes |
-## Microsoft.ImportExport
+## Microsoft.IndustryDataLifecycle
> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
-> | jobs | Yes |
-## Microsoft.Insights
+> | baseModels | Yes |
+> | baseModels / entities | No |
+> | baseModels / relationships | No |
+> | builtInModels | No |
+> | builtInModels / entities | No |
+> | builtInModels / relationships | No |
+> | collaborativeInvitations | No |
+> | custodianCollaboratives | Yes |
+> | custodianCollaboratives / collaborativeImage | No |
+> | custodianCollaboratives / dataModels | No |
+> | custodianCollaboratives / dataModels / mergePipelines | No |
+> | custodianCollaboratives / invitations | No |
+> | custodianCollaboratives / invitations / termsOfUseDocuments | No |
+> | custodianCollaboratives / receivedDataPackages | No |
+> | custodianCollaboratives / termsOfUseDocuments | No |
+> | dataConsumerCollaboratives | Yes |
+> | dataproviders | No |
+> | derivedModels | Yes |
+> | derivedModels / entities | No |
+> | derivedModels / relationships | No |
+> | generateMappingTemplate | No |
+> | memberCollaboratives | Yes |
+> | memberCollaboratives / sharedDataPackages | No |
+> | modelMappings | Yes |
+> | pipelineSets | Yes |
+
+## microsoft.insights
> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
-> | actionGroups | Yes |
+> | actiongroups | Yes |
> | activityLogAlerts | Yes |
> | alertrules | Yes |
> | autoscalesettings | Yes |
> | components | Yes |
+> | components / aggregate | No |
> | components / analyticsItems | No |
+> | components / annotations | No |
+> | components / api | No |
+> | components / apiKeys | No |
+> | components / currentBillingFeatures | No |
+> | components / defaultWorkItemConfig | No |
+> | components / events | No |
+> | components / exportConfiguration | No |
+> | components / extendQueries | No |
> | components / favorites | No |
-> | components / linkedStorageAccounts | No |
+> | components / featureCapabilities | No |
+> | components / generateDiagnosticServiceReadOnlyToken | No |
+> | components / generateDiagnosticServiceReadWriteToken | No |
+> | components / linkedstorageaccounts | No |
+> | components / metadata | No |
+> | components / metricDefinitions | No |
+> | components / metrics | No |
+> | components / move | No |
> | components / myAnalyticsItems | No |
+> | components / myFavorites | No |
> | components / pricingPlans | No |
-> | components / ProactiveDetectionConfigs | No |
-> | dataCollectionEndpoints | No |
+> | components / proactiveDetectionConfigs | No |
+> | components / purge | No |
+> | components / query | No |
+> | components / quotaStatus | No |
+> | components / webtests | No |
+> | components / workItemConfigs | No |
+> | createnotifications | No |
+> | dataCollectionEndpoints | Yes |
+> | dataCollectionEndpoints / networkSecurityPerimeterAssociationProxies | No |
+> | dataCollectionEndpoints / networkSecurityPerimeterConfigurations | No |
+> | dataCollectionEndpoints / scopedPrivateLinkProxies | No |
> | dataCollectionRuleAssociations | No |
> | dataCollectionRules | Yes |
> | diagnosticSettings | No |
+> | diagnosticSettingsCategories | No |
+> | eventCategories | No |
+> | eventtypes | No |
+> | extendedDiagnosticSettings | No |
+> | generateDiagnosticServiceReadOnlyToken | No |
+> | generateDiagnosticServiceReadWriteToken | No |
> | guestDiagnosticSettings | Yes |
-> | guestDiagnosticSettingsAssociation | Yes |
-> | logprofiles | Yes |
-> | metricAlerts | Yes |
+> | guestDiagnosticSettingsAssociation | No |
+> | logDefinitions | No |
+> | logprofiles | No |
+> | logs | No |
+> | metricalerts | Yes |
+> | metricbaselines | No |
+> | metricbatch | No |
+> | metricDefinitions | No |
+> | metricNamespaces | No |
+> | metrics | No |
+> | migratealertrules | No |
+> | migrateToNewPricingModel | No |
+> | monitoredObjects | No |
> | myWorkbooks | No |
+> | notificationgroups | Yes |
+> | notificationstatus | No |
> | privateLinkScopes | Yes |
+> | privateLinkScopes / privateEndpointConnectionProxies | No |
> | privateLinkScopes / privateEndpointConnections | No |
> | privateLinkScopes / scopedResources | No |
-> | queryPacks | Yes |
-> | queryPacks / queries | No |
-> | scheduledQueryRules | Yes |
+> | rollbackToLegacyPricingModel | No |
+> | scheduledqueryrules | Yes |
+> | topology | No |
+> | transactions | No |
> | webtests | Yes |
+> | webtests / getTestResultFile | No |
> | workbooks | Yes |
> | workbooktemplates | Yes |
-## Microsoft.Intune
+## Microsoft.IntelligentITDigitalTwin
> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
-> | diagnosticSettings | No |
-> | diagnosticSettingsCategories | No |
+> | digitalTwins | Yes |
+> | digitalTwins / assets | Yes |
+> | digitalTwins / executionPlans | Yes |
+> | digitalTwins / testPlans | Yes |
+> | digitalTwins / tests | Yes |
## Microsoft.IoTCentral
> | sensors | No |
> | sites | No |
-## Microsoft.IoTSpaces
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Complete mode deletion |
-> | - | -- |
-> | Graph | Yes |
-
## Microsoft.KeyVault

> [!div class="mx-tableFixed"]
> | hsmPools | Yes |
> | managedHSMs | Yes |
> | vaults | Yes |
-> | vaults / accessPolicies | Yes |
+> | vaults / accessPolicies | No |
> | vaults / eventGridFilters | No |
> | vaults / keys | No |
> | vaults / keys / versions | No |
> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
-> | connectedClusters | No |
+> | connectedClusters | Yes |
> | registeredSubscriptions | No |

## Microsoft.KubernetesConfiguration
> | extensions | No |
> | fluxConfigurations | No |
> | namespaces | No |
+> | privateLinkScopes | Yes |
+> | privateLinkScopes / privateEndpointConnectionProxies | No |
+> | privateLinkScopes / privateEndpointConnections | No |
> | sourceControlConfigurations | No |

## Microsoft.Kusto
> | labs | Yes |
> | users | No |
-## Microsoft.LocationServices
+## Microsoft.LoadTestService
> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
-> | accounts | Yes |
+> | loadtests | Yes |
## Microsoft.Logic
> | isolatedEnvironments | Yes |
> | workflows | Yes |
+## Microsoft.Logz
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | monitors | Yes |
+> | monitors / accounts | Yes |
+> | monitors / accounts / tagRules | No |
+> | monitors / metricsSource | Yes |
+> | monitors / metricsSource / tagRules | No |
+> | monitors / singleSignOnConfigurations | No |
+> | monitors / tagRules | No |
+> | registeredSubscriptions | No |
+
## Microsoft.MachineLearning

> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
> | aisysteminventories | Yes |
+> | registries | Yes |
> | virtualclusters | Yes |
> | workspaces | Yes |
> | workspaces / batchEndpoints | Yes |
> | workspaces / models / versions | No |
> | workspaces / onlineEndpoints | Yes |
> | workspaces / onlineEndpoints / deployments | Yes |
+> | workspaces / registries | Yes |
> | workspaces / services | No |

## Microsoft.Maintenance
> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
-> | applyUpdates | No |
-> | configurationAssignments | No |
> | maintenanceConfigurations | Yes |
> | publicMaintenanceConfigurations | No |
-> | updates | No |
## Microsoft.ManagedIdentity
> | - | -- |
> | Identities | No |
> | userAssignedIdentities | Yes |
+> | userAssignedIdentities / federatedIdentityCredentials | No |
## Microsoft.ManagedServices
> | accounts | Yes |
> | accounts / creators | Yes |
> | accounts / eventGridFilters | No |
-> | accounts / privateAtlases | Yes |
## Microsoft.Marketplace
> | publishers / offers / amendments | No |
> | register | No |
-## Microsoft.MarketplaceApps
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Complete mode deletion |
-> | - | -- |
-> | classicDevServices | Yes |
-> | updateCommunicationPreference | No |
-
## Microsoft.MarketplaceNotifications

> [!div class="mx-tableFixed"]
> | mediaservices / eventGridFilters | No |
> | mediaservices / graphInstances | No |
> | mediaservices / graphTopologies | No |
-> | mediaservices / liveEventOperations | No |
> | mediaservices / liveEvents | Yes |
> | mediaservices / liveEvents / liveOutputs | No |
-> | mediaservices / liveOutputOperations | No |
> | mediaservices / mediaGraphs | No |
-> | mediaservices / privateEndpointConnectionOperations | No |
> | mediaservices / privateEndpointConnectionProxies | No |
> | mediaservices / privateEndpointConnections | No |
-> | mediaservices / streamingEndpointOperations | No |
> | mediaservices / streamingEndpoints | Yes |
> | mediaservices / streamingLocators | No |
> | mediaservices / streamingPolicies | No |
> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
-> | holographicsBroadcastAccounts | Yes |
> | objectAnchorsAccounts | Yes |
> | objectUnderstandingAccounts | Yes |
> | remoteRenderingAccounts | Yes |
> [!div class="mx-tableFixed"]
> | Resource type | Complete mode deletion |
> | - | -- |
-> | mobileNetworks | No |
-> | mobileNetworks / dataNetworks | No |
-> | mobileNetworks / services | No |
-> | mobileNetworks / simPolicies | No |
-> | mobileNetworks / sites | No |
-> | mobileNetworks / slices | No |
-> | networks | No |
-> | networks / sites | No |
-> | packetCoreControlPlanes | No |
-> | packetCoreControlPlanes / packetCoreDataPlanes | No |
-> | packetCoreControlPlanes / packetCoreDataPlanes / attachedDataNetworks | No |
-> | packetCores | No |
-> | sims | No |
-> | sims / simProfiles | No |
+> | mobileNetworks | Yes |
+> | mobileNetworks / dataNetworks | Yes |
+> | mobileNetworks / services | Yes |
+> | mobileNetworks / simPolicies | Yes |
+> | mobileNetworks / sites | Yes |
+> | mobileNetworks / slices | Yes |
+> | networks | Yes |
+> | networks / sites | Yes |
+> | packetCoreControlPlanes | Yes |
+> | packetCoreControlPlanes / packetCoreDataPlanes | Yes |
+> | packetCoreControlPlanes / packetCoreDataPlanes / attachedDataNetworks | Yes |
+> | packetCores | Yes |
+> | sims | Yes |
+> | sims / simProfiles | Yes |
## Microsoft.Monitor
> | - | -- |
> | netAppAccounts | Yes |
> | netAppAccounts / accountBackups | No |
+> | netAppAccounts / backupPolicies | Yes |
> | netAppAccounts / capacityPools | Yes |
> | netAppAccounts / capacityPools / volumes | Yes |
+> | netAppAccounts / capacityPools / volumes / backups | No |
+> | netAppAccounts / capacityPools / volumes / mountTargets | No |
> | netAppAccounts / capacityPools / volumes / snapshots | No |
> | netAppAccounts / capacityPools / volumes / subvolumes | No |
+> | netAppAccounts / capacityPools / volumes / volumeQuotaRules | No |
> | netAppAccounts / snapshotPolicies | Yes |
+> | netAppAccounts / vaults | No |
> | netAppAccounts / volumeGroups | No |
-## Microsoft.NetworkFunction
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Complete mode deletion |
-> | - | -- |
-> | azureTrafficCollectors | Yes |
## Microsoft.Network

> [!div class="mx-tableFixed"]
> | applicationSecurityGroups | Yes |
> | azureFirewallFqdnTags | No |
> | azureFirewalls | Yes |
+> | azureWebCategories | No |
> | bastionHosts | Yes |
> | bgpServiceCommunities | No |
> | connections | Yes |
> | customIpPrefixes | Yes |
> | ddosCustomPolicies | Yes |
> | ddosProtectionPlans | Yes |
-> | dnsOperationStatuses | No |
+> | dnsForwardingRulesets | Yes |
+> | dnsForwardingRulesets / forwardingRules | No |
+> | dnsForwardingRulesets / virtualNetworkLinks | No |
+> | dnsResolvers | Yes |
+> | dnsResolvers / inboundEndpoints | Yes |
+> | dnsResolvers / outboundEndpoints | Yes |
> | dnszones | Yes |
> | dnszones / A | No |
> | dnszones / AAAA | No |
> | expressRouteCrossConnections | Yes |
> | expressRouteGateways | Yes |
> | expressRoutePorts | Yes |
+> | expressRouteProviderPorts | No |
> | expressRouteServiceProviders | No |
> | firewallPolicies | Yes |
> | frontdoors | Yes |
+> | frontdoors / frontendEndpoints | No |
+> | frontdoors / frontendEndpoints / customHttpsConfiguration | No |
> | frontdoorWebApplicationFirewallManagedRuleSets | No |
> | frontdoorWebApplicationFirewallPolicies | Yes |
> | getDnsResourceReference | No |
> | internalNotify | No |
-> | ipAllocations | Yes |
> | ipGroups | Yes |
> | loadBalancers | Yes |
> | localNetworkGateways | Yes |
> | natGateways | Yes |
+> | networkExperimentProfiles | Yes |
> | networkIntentPolicies | Yes |
> | networkInterfaces | Yes |
> | networkManagers | Yes |
> | networkProfiles | Yes |
> | networkSecurityGroups | Yes |
+> | networkSecurityPerimeters | Yes |
> | networkVirtualAppliances | Yes |
> | networkWatchers | Yes |
> | networkWatchers / connectionMonitors | Yes |
> | networkWatchers / lenses | Yes |
> | networkWatchers / pingMeshes | Yes |
> | p2sVpnGateways | Yes |
-> | privateDnsOperationStatuses | No |
> | privateDnsZones | Yes |
> | privateDnsZones / A | No |
> | privateDnsZones / AAAA | No |
> | privateDnsZones / SRV | No |
> | privateDnsZones / TXT | No |
> | privateDnsZones / virtualNetworkLinks | Yes |
+> | privateDnsZonesInternal | No |
+> | privateEndpointRedirectMaps | Yes |
> | privateEndpoints | Yes |
+> | privateEndpoints / privateLinkServiceProxies | No |
> | privateLinkServices | Yes |
> | publicIPAddresses | Yes |
> | publicIPPrefixes | Yes |
> | virtualHubs | Yes |
> | virtualNetworkGateways | Yes |
> | virtualNetworks | Yes |
-> | virtualNetworks / subnets | No |
+> | virtualNetworks / privateDnsZoneLinks | No |
+> | virtualNetworks / taggedTrafficConsumers | No |
> | virtualNetworkTaps | Yes |
+> | virtualRouters | Yes |
> | virtualWans | Yes |
> | vpnGateways | Yes |
> | vpnServerConfigurations | Yes |
> | vpnSites | Yes |
-> | webApplicationFirewallPolicies | Yes |
-## Microsoft.Notebooks
+## Microsoft.NetworkCloud
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | bareMetalMachines | Yes |
+> | clusterManagers | Yes |
+> | clusters | Yes |
+> | rackManifests | Yes |
+> | racks | Yes |
+> | virtualMachines | Yes |
+> | workloadNetworks | Yes |
+
+## Microsoft.NetworkFunction
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
-> | NotebookProxies | No |
+> | azureTrafficCollectors | Yes |
+> | meshVpns | Yes |
+> | meshVpns / connectionPolicies | Yes |
+> | meshVpns / privateEndpointConnectionProxies | No |
+> | meshVpns / privateEndpointConnections | No |
## Microsoft.NotificationHubs
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
-> | osNamespaces | No |
+> | osNamespaces | Yes |
## Microsoft.OffAzure
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
-> | energyServices | No |
+> | energyServices | Yes |
-## Microsoft.OperationalInsights
+## Microsoft.OpenLogisticsPlatform
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
-> | clusters | Yes |
-> | deletedWorkspaces | No |
-> | linkTargets | No |
-> | querypacks | Yes |
-> | storageInsightConfigs | No |
+> | applicationManagers | Yes |
+> | applicationManagers / applicationRegistrations | No |
+> | applicationManagers / eventGridFilters | No |
+> | applicationRegistrationInvites | No |
+> | applicationWorkspaces | Yes |
+> | applicationWorkspaces / applications | No |
+> | applicationWorkspaces / applications / applicationRegistrationInvites | No |
+> | shareInvites | No |
> | workspaces | Yes |
-> | workspaces / dataExports | No |
-> | workspaces / dataSources | No |
-> | workspaces / linkedServices | No |
-> | workspaces / linkedStorageAccounts | No |
-> | workspaces / metadata | No |
-> | workspaces / query | No |
-> | workspaces / scopedPrivateLinkProxies | No |
-> | workspaces / storageInsightConfigs | No |
-> | workspaces / tables | No |
+> | workspaces / applicationRegistrations | No |
+> | workspaces / applications | No |
+> | workspaces / eventGridFilters | No |
+> | workspaces / shares | No |
+> | workspaces / shareSubscriptions | No |
-## Microsoft.OperationsManagement
+## Microsoft.Orbital
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
-> | managementassociations | No |
-> | managementconfigurations | Yes |
-> | solutions | Yes |
-> | views | Yes |
+> | contactProfiles | Yes |
+> | edgeSites | Yes |
+> | globalCommunicationsSites | No |
+> | groundStations | Yes |
+> | l2Connections | Yes |
+> | l3Connections | Yes |
+> | orbitalGateways | Yes |
+> | spacecrafts | Yes |
+> | spacecrafts / contacts | No |
## Microsoft.Peering
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
-> | PlayerAccountPools | No |
-> | Titles | No |
+> | playeraccountpools | Yes |
+> | titles | Yes |
+> | titles / segments | No |
+> | titles / titledatakeyvalues | No |
+> | titles / titleinternaldatakeyvalues | No |
## Microsoft.PolicyInsights
> | - | -- | > | accounts | Yes | > | deletedAccounts | No |
+> | getDefaultAccount | No |
+> | removeDefaultAccount | No |
+> | setDefaultAccount | No |
## Microsoft.ProviderHub
> | - | -- | > | accounts | Yes | > | accounts / kafkaConfigurations | No |
-> | deletedAccounts | No |
> | getDefaultAccount | No | > | removeDefaultAccount | No | > | setDefaultAccount | No |
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
-> | Workspaces | No |
+> | Workspaces | Yes |
## Microsoft.Quota
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
-> | accounts | No |
-> | accounts / modeling | No |
-> | accounts / serviceEndpoints | No |
+> | accounts | Yes |
+> | accounts / modeling | Yes |
+> | accounts / serviceEndpoints | Yes |
## Microsoft.RecoveryServices
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
-> | availabilityStatuses | No |
-> | childAvailabilityStatuses | No |
> | childResources | No | > | emergingissues | No | > | events | No |
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
+> | builtInTemplateSpecs | No |
+> | builtInTemplateSpecs / versions | No |
+> | bulkDelete | No |
+> | calculateTemplateHash | No |
> | deployments | No |
-> | deployments / operations | No |
> | deploymentScripts | Yes | > | deploymentScripts / logs | No |
-> | deploymentStacks | No |
> | deploymentStacks / snapshots | No | > | links | No |
+> | notifyResourceJobs | No |
> | providers | No | > | resourceGroups | No |
+> | resources | No |
> | subscriptions | No |
+> | subscriptions / providers | No |
+> | subscriptions / resourceGroups | No |
+> | subscriptions / resourcegroups / resources | No |
+> | subscriptions / resources | No |
+> | subscriptions / tagnames | No |
+> | subscriptions / tagNames / tagValues | No |
+> | tags | No |
> | templateSpecs | Yes | > | templateSpecs / versions | Yes | > | tenants | No |
> | resources | Yes | > | saasresources | No |
-## Microsoft.Scheduler
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Complete mode deletion |
-> | - | -- |
-> | jobcollections | Yes |
- ## Microsoft.Scom > [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
-> | managedInstances | No |
+> | managedInstances | Yes |
## Microsoft.ScVmm > [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
-> | AvailabilitySets | No |
-> | clouds | No |
-> | VirtualMachines | No |
-> | VirtualMachineTemplates | No |
-> | VirtualNetworks | No |
-> | vmmservers | No |
+> | availabilitysets | Yes |
+> | Clouds | Yes |
+> | VirtualMachines | Yes |
+> | VirtualMachineTemplates | Yes |
+> | VirtualNetworks | Yes |
+> | vmmservers | Yes |
> | VMMServers / InventoryItems | No |

## Microsoft.Search
> | alertsSuppressionRules | No | > | allowedConnections | No | > | antiMalwareSettings | No |
-> | applicationWhitelistings | No |
> | assessmentMetadata | No | > | assessments | No | > | assessments / governanceAssignments | No |
> | securityStatuses | No | > | securityStatusesSummaries | No | > | serverVulnerabilityAssessments | No |
+> | serverVulnerabilityAssessmentsSettings | No |
> | settings | No | > | sqlVulnerabilityAssessments | No | > | standards | Yes |
> | topologies | No | > | workspaceSettings | No |
-## Microsoft.SecurityGraph
+## Microsoft.SecurityDetonation
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
-> | diagnosticSettings | No |
-> | diagnosticSettingsCategories | No |
+> | chambers | Yes |
+
+## Microsoft.SecurityDevOps
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | gitHubConnectors | Yes |
+> | gitHubConnectors / gitHubRepos | No |
## Microsoft.SecurityInsights
> | bookmarks | No | > | cases | No | > | dataConnectors | No |
-> | dataConnectorsCheckRequirements | No |
> | enrichment | No | > | entities | No |
-> | entityQueries | No |
> | entityQueryTemplates | No |
+> | fileImports | No |
> | incidents | No | > | metadata | No | > | MitreCoverageRecords | No |
-> | officeConsents | No |
> | onboardingStates | No |
+> | securityMLAnalyticsSettings | No |
> | settings | No | > | sourceControls | No | > | threatIntelligence | No |
-> | watchlists | No |
## Microsoft.SerialConsole
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
-> | applications | Yes |
> | clusters | Yes | > | clusters / applications | No |
-> | containerGroups | Yes |
-> | containerGroupSets | Yes |
> | edgeclusters | Yes | > | edgeclusters / applications | No | > | managedclusters | Yes |
> | managedclusters / applicationTypes | No | > | managedclusters / applicationTypes / versions | No | > | managedclusters / nodetypes | No |
-> | networks | Yes |
-> | secretstores | Yes |
-> | secretstores / certificates | No |
-> | secretstores / secrets | No |
-> | volumes | Yes |
-
-## Microsoft.ServiceFabricMesh
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Complete mode deletion |
-> | - | -- |
-> | applications | Yes |
-> | containerGroups | Yes |
-> | gateways | Yes |
-> | networks | Yes |
-> | secrets | Yes |
-> | volumes | Yes |
## Microsoft.ServiceLinker
> | dryruns | No | > | linkers | No |
-## Microsoft.Services
+## Microsoft.ServicesHub
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
-> | providerRegistrations | No |
-> | providerRegistrations / resourceTypeRegistrations | No |
-> | rollouts | Yes |
+> | connectors | Yes |
+> | supportOfferingEntitlement | No |
+> | workspaces | No |
## Microsoft.SignalRService
> | accounts / groupPolicies | No | > | accounts / jobs | No | > | accounts / models | No |
+> | accounts / networks | No |
> | accounts / storageContainers | No | > | images | No | > | quotas | No |
> | applications | Yes | > | jitRequests | Yes |
-## Microsoft.SQL
+## Microsoft.Sql
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion |
> | managedInstances / administrators | No | > | managedInstances / databases | Yes | > | managedInstances / databases / backupLongTermRetentionPolicies | No |
-> | managedInstances / databases / backupShortTermRetentionPolicies | No |
-> | managedInstances / databases / schemas / tables / columns / sensitivityLabels | No |
> | managedInstances / databases / vulnerabilityAssessments | No |
-> | managedInstances / databases / vulnerabilityAssessments / rules / baselines | No |
-> | managedInstances / encryptionProtector | No |
-> | managedInstances / keys | No |
-> | managedInstances / restorableDroppedDatabases / backupShortTermRetentionPolicies | No |
+> | managedInstances / dnsAliases | No |
+> | managedInstances / metricDefinitions | No |
+> | managedInstances / metrics | No |
+> | managedInstances / recoverableDatabases | No |
> | managedInstances / sqlAgent | No |
+> | managedInstances / start | No |
+> | managedInstances / startStopSchedules | No |
+> | managedInstances / stop | No |
+> | managedInstances / tdeCertificates | No |
> | managedInstances / vulnerabilityAssessments | No | > | servers | Yes | > | servers / administrators | No |
+> | servers / advancedThreatProtectionSettings | No |
> | servers / advisors | No |
+> | servers / aggregatedDatabaseMetrics | No |
> | servers / auditingSettings | No |
+> | servers / automaticTuning | No |
> | servers / communicationLinks | No |
+> | servers / connectionPolicies | No |
> | servers / databases | Yes |
+> | servers / databases / activate | No |
+> | servers / databases / activatedatabase | No |
+> | servers / databases / advancedThreatProtectionSettings | No |
> | servers / databases / advisors | No | > | servers / databases / auditingSettings | No |
+> | servers / databases / auditRecords | No |
+> | servers / databases / automaticTuning | No |
> | servers / databases / backupLongTermRetentionPolicies | No | > | servers / databases / backupShortTermRetentionPolicies | No |
+> | servers / databases / databaseState | No |
> | servers / databases / dataMaskingPolicies | No |
+> | servers / databases / dataMaskingPolicies / rules | No |
+> | servers / databases / deactivate | No |
+> | servers / databases / deactivatedatabase | No |
> | servers / databases / extensions | No |
+> | servers / databases / geoBackupPolicies | No |
+> | servers / databases / ledgerDigestUploads | No |
+> | servers / databases / metricDefinitions | No |
+> | servers / databases / metrics | No |
+> | servers / databases / recommendedSensitivityLabels | No |
> | servers / databases / securityAlertPolicies | No | > | servers / databases / syncGroups | No | > | servers / databases / syncGroups / syncMembers | No |
+> | servers / databases / topQueries | No |
+> | servers / databases / topQueries / queryText | No |
> | servers / databases / transparentDataEncryption | No |
+> | servers / databases / VulnerabilityAssessment | No |
+> | servers / databases / vulnerabilityAssessments | No |
+> | servers / databases / VulnerabilityAssessmentScans | No |
+> | servers / databases / VulnerabilityAssessmentSettings | No |
> | servers / databases / workloadGroups | No |
+> | servers / databaseSecurityPolicies | No |
+> | servers / devOpsAuditingSettings | No |
+> | servers / disasterRecoveryConfiguration | No |
+> | servers / dnsAliases | No |
+> | servers / elasticPoolEstimates | No |
> | servers / elasticpools | Yes |
+> | servers / elasticPools / advisors | No |
+> | servers / elasticpools / metricdefinitions | No |
+> | servers / elasticpools / metrics | No |
> | servers / encryptionProtector | No |
+> | servers / extendedAuditingSettings | No |
> | servers / failoverGroups | No |
-> | servers / firewallRules | No |
+> | servers / import | No |
+> | servers / jobAccounts | Yes |
> | servers / jobAgents | Yes | > | servers / jobAgents / jobs | No |
-> | servers / jobAgents / jobs / steps | No |
> | servers / jobAgents / jobs / executions | No |
+> | servers / jobAgents / jobs / steps | No |
> | servers / keys | No |
+> | servers / recommendedElasticPools | No |
+> | servers / recoverableDatabases | No |
> | servers / restorableDroppedDatabases | No |
-> | servers / serviceobjectives | No |
+> | servers / securityAlertPolicies | No |
+> | servers / serviceObjectives | No |
+> | servers / syncAgents | No |
> | servers / tdeCertificates | No |
+> | servers / usages | No |
> | servers / virtualNetworkRules | No |
-> | virtualClusters | No |
+> | servers / vulnerabilityAssessments | No |
+> | virtualClusters | Yes |
## Microsoft.SqlVirtualMachine
> | Resource type | Complete mode deletion | > | - | -- | > | SqlVirtualMachineGroups | Yes |
-> | SqlVirtualMachineGroups / AvailabilityGroupListeners | No |
> | SqlVirtualMachines | Yes |

## Microsoft.Storage
> | caches / storageTargets | No | > | usageModels | No |
-## Microsoft.StorageReplication
+## Microsoft.StoragePool
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
-> | replicationGroups | No |
+> | diskPools | Yes |
+> | diskPools / iscsiTargets | No |
## Microsoft.StorageSync
> | cancel | No | > | changeTenantRequest | No | > | changeTenantStatus | No |
-> | CreateSubscription | No |
> | enable | No | > | policies | No | > | rename | No | > | SubscriptionDefinitions | No |
-> | SubscriptionOperations | No |
> | subscriptions | No |
+## microsoft.support
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | lookUpResourceId | No |
+> | services | No |
+> | services / problemclassifications | No |
+> | supporttickets | No |
+ ## Microsoft.Synapse > [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
-> | kustoOperations | No |
> | privateLinkHubs | Yes | > | workspaces | Yes | > | workspaces / bigDataPools | Yes |
> | workspaces / kustoPools / attacheddatabaseconfigurations | No | > | workspaces / kustoPools / databases | No | > | workspaces / kustoPools / databases / dataconnections | No |
-> | workspaces / operationStatuses | No |
> | workspaces / sqlDatabases | Yes | > | workspaces / sqlPools | Yes |
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
-> | testBaseAccounts | No |
+> | testBaseAccounts | Yes |
> | testBaseAccounts / customerEvents | No | > | testBaseAccounts / emailEvents | No | > | testBaseAccounts / flightingRings | No |
-> | testBaseAccounts / packages | No |
+> | testBaseAccounts / packages | Yes |
> | testBaseAccounts / packages / favoriteProcesses | No | > | testBaseAccounts / packages / osUpdates | No | > | testBaseAccounts / testSummaries | No |
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
-> | accounts | No |
+> | accounts | Yes |
## Microsoft.VirtualMachineImages
> | imageTemplates | Yes | > | imageTemplates / runOutputs | No |
+## microsoft.visualstudio
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | account | Yes |
+> | account / extension | Yes |
+> | account / project | Yes |
+ ## Microsoft.VMware > [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
-> | arczones | No |
-> | resourcepools | No |
-> | vcenters | No |
+> | arczones | Yes |
+> | resourcepools | Yes |
+> | vcenters | Yes |
> | VCenters / InventoryItems | No |
-> | virtualmachines | No |
-> | virtualmachinetemplates | No |
-> | virtualnetworks | No |
+> | virtualmachines | Yes |
+> | virtualmachinetemplates | Yes |
+> | virtualnetworks | Yes |
## Microsoft.VMwareCloudSimple
> | accounts | Yes | > | plans | Yes | > | registeredSubscriptions | No |

## Microsoft.Web

> [!div class="mx-tableFixed"]
> | certificates | Yes | > | connectionGateways | Yes | > | connections | Yes |
+> | containerApps | Yes |
> | customApis | Yes |
+> | customhostnameSites | No |
> | deletedSites | No | > | functionAppStacks | No | > | generateGithubAccessTokenForAppserviceCLI | No |
> | serverFarms / firstPartyApps | No | > | serverFarms / firstPartyApps / keyVaultSettings | No | > | sites | Yes |
-> | sites/config | No |
> | sites / eventGridFilters | No | > | sites / hostNameBindings | No | > | sites / networkConfig | No |
> | sites / slots / networkConfig | No | > | sourceControls | No | > | staticSites | Yes |
+> | staticSites / builds | No |
+> | staticSites / builds / userProvidedFunctionApps | No |
+> | staticSites / userProvidedFunctionApps | No |
> | validate | No | > | verifyHostingEnvironmentVnet | No | > | webAppStacks | No |
-## Microsoft.WindowsDefenderATP
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Complete mode deletion |
-> | - | -- |
-> | diagnosticSettings | No |
-> | diagnosticSettingsCategories | No |
+> | workerApps | Yes |
## Microsoft.WindowsESU
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
-> | migrationAgents | No |
-> | workloads | No |
+> | migrationAgents | Yes |
+> | workloads | Yes |
> | workloads / instances | No | > | workloads / versions | No | > | workloads / versions / artifacts | No |
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
-> | monitors | No |
+> | monitors | Yes |
> | monitors / providerInstances | No |
-> | phpWorkloads | No |
+> | phpWorkloads | Yes |
> | phpWorkloads / wordpressInstances | No |
-> | sapVirtualInstances | No |
-> | sapVirtualInstances / applicationInstances | No |
-> | sapVirtualInstances / centralInstances | No |
-> | sapVirtualInstances / databaseInstances | No |
+> | sapVirtualInstances | Yes |
+> | sapVirtualInstances / applicationInstances | Yes |
+> | sapVirtualInstances / centralInstances | Yes |
+> | sapVirtualInstances / databaseInstances | Yes |
## Next steps
azure-signalr Signalr Quickstart Azure Functions Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-azure-functions-python.md
Title: Azure SignalR Service serverless quickstart - Python
description: A quickstart for using Azure SignalR Service and Azure Functions to create an App showing GitHub star count using Python. Previously updated : 06/09/2021 Last updated : 04/19/2022 ms.devlang: python
-# Quickstart: Create an App showing GitHub star count with Azure Functions and SignalR Service using Python
+# Quickstart: Create a serverless app with Azure Functions, SignalR Service, and Python
-Azure SignalR Service lets you easily add real-time functionality to your application. Azure Functions is a serverless platform that lets you run your code without managing any infrastructure. In this quickstart, learn how to use SignalR Service and Azure Functions to build a serverless application with Python to broadcast messages to clients.
+Get started with Azure SignalR Service by using Azure Functions and Python to build a serverless application that broadcasts messages to clients. You'll run the function in the local environment, connecting to an Azure SignalR Service instance in the cloud. Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
> [!NOTE]
-> You can get all codes mentioned in the article from [GitHub](https://github.com/aspnet/AzureSignalR-samples/tree/main/samples/QuickStartServerless/python)
+> You can get the code in this article from [GitHub](https://github.com/aspnet/AzureSignalR-samples/tree/main/samples/QuickStartServerless/python).
## Prerequisites

This quickstart can be run on macOS, Windows, or Linux.
-Make sure you have a code editor such as [Visual Studio Code](https://code.visualstudio.com/) installed.
+- You'll need a code editor such as [Visual Studio Code](https://code.visualstudio.com/).
-Install the [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (version 2.7.1505 or higher) to run Python Azure Function apps locally.
+- Install the [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (version 2.7.1505 or higher) to run Python Azure Function apps locally.
-Azure Functions requires [Python 3.6+](https://www.python.org/downloads/). (See [Supported Python versions](../azure-functions/functions-reference-python.md#python-version))
+- Azure Functions requires [Python 3.6+](https://www.python.org/downloads/). (See [Supported Python versions](../azure-functions/functions-reference-python.md#python-version).)
-
-Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qspython).
-
-## Log in to Azure
+- SignalR binding needs Azure Storage, but you can use a local storage emulator when a function is running locally. You'll need to download and enable [Storage Emulator](../storage/common/storage-use-emulator.md).
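As an informal sanity check (not part of the official prerequisites), you can confirm your interpreter meets the version requirement before initializing the project:

```python
import sys

# Azure Functions requires Python 3.6 or later for this quickstart.
supported = sys.version_info >= (3, 6)
print('Python version OK' if supported else 'Upgrade Python to 3.6 or later')
```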
-Sign in to the Azure portal at <https://portal.azure.com/> with your Azure account.
-Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qspython).
+## Create an Azure SignalR Service instance
[!INCLUDE [Create instance](includes/signalr-quickstart-create-instance.md)]
-Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qspython).
-- ## Setup and run the Azure Function locally
-1. Make sure you have Azure Function Core Tools installed. And create an empty directory and navigate to the directory with command line.
+1. Create an empty directory and go to the directory from the command line.
```bash # Initialize a function project func init --worker-runtime python ```
-2. After you initialize a project, you need to create functions. In this sample, we need to create 3 functions.
+2. After you initialize a project, you need to create functions. In this sample, we need to create three functions: `index`, `negotiate`, and `broadcast`.
- 1. Run the following command to create a `index` function, which will host a web page for client.
+ 1. Run the following command to create an `index` function, which will host a web page for a client.
```bash func new -n index -t HttpTrigger ```
- Open `index/function.json` and copy the following json codes:
+
+ Open *index/function.json* and copy the following json code:
```json {
} ```
- Open `index/__init__.py` and copy the following codes.
+ Open *index/\__init\__.py* and copy the following code:
```python
import os
import azure.functions as func
-
-
+
def main(req: func.HttpRequest) -> func.HttpResponse:
    f = open(os.path.dirname(os.path.realpath(__file__)) + '/../content/index.html')
    return func.HttpResponse(f.read(), mimetype='text/html')
```
-
+ 2. Create a `negotiate` function for clients to get access token.
-
+ ```bash func new -n negotiate -t HttpTrigger ```
-
- Open `negotiate/function.json` and copy the following json codes:
-
+
+ Open *negotiate/function.json* and copy the following json code:
+ ```json { "scriptFile": "__init__.py",
} ```
- And open the `negotiate/__init__.py` and copy the following codes:
+ Open *negotiate/\__init\__.py* and copy the following code:
```python
import azure.functions as func
def main(req: func.HttpRequest, connectionInfo) -> func.HttpResponse:
    return func.HttpResponse(connectionInfo)
```
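The `connectionInfo` value injected by the SignalR input binding is a JSON document containing the client URL and an access token. As a hedged illustration, the endpoint and token below are placeholders, not real values:

```python
import json

# Placeholder values showing the shape of a negotiate response; a real
# response carries your SignalR endpoint URL and a short-lived access token.
connection_info = json.dumps({
    'url': 'https://<your-instance>.service.signalr.net/client/?hub=<hub-name>',
    'accessToken': '<access-token>'
})

parsed = json.loads(connection_info)
```

Clients call the `negotiate` endpoint first, then use these two values to connect to the SignalR Service directly.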
-
+ 3. Create a `broadcast` function to broadcast messages to all clients. In the sample, we use time trigger to broadcast messages periodically. ```bash
pip install requests ```
- Open `broadcast/function.json` and copy the following codes.
+ Open *broadcast/function.json* and copy the following code:
```json {
} ```
- Open `broadcast/__init__.py` and copy the following codes.
+ Open *broadcast/\__init\__.py* and copy the following code:
```python import requests
})) ```
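The body the `broadcast` function returns to the SignalR output binding is a JSON message with a `target` (the client method to invoke) and an `arguments` list. A minimal sketch of building that payload — the `newMessage` target name and message text here are assumptions for illustration and must match whatever the client page listens for:

```python
import json

def make_signalr_message(star_count: int) -> str:
    # 'target' names the client-side method to invoke; 'arguments' is the
    # positional argument list passed to that method.
    return json.dumps({
        'target': 'newMessage',
        'arguments': [f'Current star count is: {star_count}']
    })

message = make_signalr_message(4096)
```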
-3. The client interface of this sample is a web page. Considered we read HTML content from `content/index.html` in `index` function, create a new file `index.html` in `content` directory under your project root folder. And copy the following content.
+3. The client interface of this sample is a web page. The `index` function reads HTML content from *content/index.html*, so create a new file *index.html* in the `content` directory under your project root folder. Copy the following content:
```html <html>
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
</html> ```
-
-4. It's almost done now. The last step is to set a connection string of the SignalR Service to Azure Function settings.
- 1. In the browser where the Azure portal is opened, confirm the SignalR Service instance you deployed earlier was successfully created by searching for its name in the search box at the top of the portal. Select the instance to open it.
+4. We're almost done now. The last step is to set the SignalR Service connection string in the Azure Function app settings.
+
+ 1. In the Azure portal, search for the SignalR Service instance you deployed earlier. Select the instance to open it.
![Search for the SignalR Service instance](media/signalr-quickstart-azure-functions-csharp/signalr-quickstart-search-instance.png) 2. Select **Keys** to view the connection strings for the SignalR Service instance.
-
+ ![Screenshot that highlights the primary connection string.](media/signalr-quickstart-azure-functions-javascript/signalr-quickstart-keys.png)
- 3. Copy the primary connection string. And execute the command below.
-
+ 3. Copy the primary connection string, and then run the following command:
+ ```bash func settings add AzureSignalRConnectionString "<signalr-connection-string>" ```
-
-5. Run the Azure Function in local:
+
+5. Run the Azure Function in the local environment:
```bash func start ```
- After Azure Function running locally. Use your browser to visit `http://localhost:7071/api/index` and you can see the current star count. And if you star or unstar in the GitHub, you will get a star count refreshing every few seconds.
+ After the Azure Function is running locally, go to `http://localhost:7071/api/index` and you'll see the current star count. If you star or unstar in GitHub, you'll get a refreshed star count every few seconds.
> [!NOTE]
- > SignalR binding needs Azure Storage, but you can use local storage emulator when the Function is running locally.
- > If you got some error like `There was an error performing a read operation on the Blob Storage Secret Repository. Please ensure the 'AzureWebJobsStorage' connection string is valid.` You need to download and enable [Storage Emulator](../storage/common/storage-use-emulator.md)
+ > SignalR binding needs Azure Storage, but you can use a local storage emulator when the function is running locally.
+ > You need to download and enable [Storage Emulator](../storage/common/storage-use-emulator.md) if you got an error like `There was an error performing a read operation on the Blob Storage Secret Repository. Please ensure the 'AzureWebJobsStorage' connection string is valid.`
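For reference, the Azure SignalR connection string you stored in `AzureSignalRConnectionString` is a semicolon-separated list of `key=value` pairs. A hedged sketch of splitting one apart — the endpoint and key below are placeholders, not real credentials:

```python
# Placeholder connection string; copy the real value from the Keys blade.
conn = 'Endpoint=https://contoso.service.signalr.net;AccessKey=<access-key>;Version=1.0;'

# Split on ';', drop empty segments, then split each pair on the first '='.
parts = dict(segment.split('=', 1) for segment in conn.split(';') if segment)
```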
[!INCLUDE [Cleanup](includes/signalr-quickstart-cleanup.md)]
## Next steps
-In this quickstart, you built and ran a real-time serverless application in local. Learn more how to use SignalR Service bindings for Azure Functions.
-Next, learn more about how to bi-directional communicating between clients and Azure Function with SignalR Service.
+In this quickstart, you built and ran a real-time serverless application locally. Next, learn how to use bidirectional communication between clients and Azure Functions with SignalR Service.
> [!div class="nextstepaction"] > [SignalR Service bindings for Azure Functions](../azure-functions/functions-bindings-signalr-service.md)
azure-sql Authentication Azure Ad Only Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/authentication-azure-ad-only-authentication.md
When Azure AD-only authentication is enabled for SQL Database, the following fea
- [SQL Data Sync](sql-data-sync-data-sql-server-sql-database.md) - [Change data capture (CDC)](/sql/relational-databases/track-changes/about-change-data-capture-sql-server) - If you create a database in Azure SQL Database as an Azure AD user and enable change data capture on it, a SQL user will not be able to disable or make changes to CDC artifacts. However, another Azure AD user will be able to enable or disable CDC on the same database. Similarly, if you create an Azure SQL Database as a SQL user, enabling or disabling CDC as an Azure AD user won't work - [Transactional replication](../managed-instance/replication-transactional-overview.md) - Since SQL authentication is required for connectivity between replication participants, when Azure AD-only authentication is enabled, transactional replication is not supported for SQL Database for scenarios where transactional replication is used to push changes made in an Azure SQL Managed Instance, on-premises SQL Server, or an Azure VM SQL Server instance to a database in Azure SQL Database-- [SQL insights](../../azure-monitor/insights/sql-insights-overview.md)
+- [SQL Insights (preview)](../../azure-monitor/insights/sql-insights-overview.md)
- EXEC AS statement for Azure AD group member accounts ### Limitations for Azure AD-only authentication in Managed Instance
When Azure AD-only authentication is enabled for Managed Instance, the following
- [Transactional replication](../managed-instance/replication-transactional-overview.md) - [SQL Agent Jobs in Managed Instance](../managed-instance/job-automation-managed-instance.md) supports Azure AD-only authentication. However, the Azure AD user who is a member of an Azure AD group that has access to the managed instance cannot own SQL Agent Jobs-- [SQL insights](../../azure-monitor/insights/sql-insights-overview.md)
+- [SQL Insights (preview)](../../azure-monitor/insights/sql-insights-overview.md)
- EXEC AS statement for Azure AD group member accounts For more limitations, see [T-SQL differences between SQL Server & Azure SQL Managed Instance](../managed-instance/transact-sql-tsql-differences-sql-server.md#logins-and-users).
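The `EXEC AS` limitation above concerns impersonating a database principal. As a minimal sketch of the affected statement form (the principal name below is a hypothetical placeholder):

```sql
-- 'user@contoso.com' is a hypothetical Azure AD principal. Per the
-- limitation above, impersonation is not supported when the principal
-- has access only through an Azure AD group membership.
EXECUTE AS USER = 'user@contoso.com';

-- Statements here run under the impersonated security context.
SELECT USER_NAME();  -- name of the current (impersonated) database user

REVERT;  -- switch back to the original security context
```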
azure-sql Doc Changes Updates Release Notes Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/doc-changes-updates-release-notes-whats-new.md
The following table lists the features of Azure SQL Database that are currently
| [Query Store hints](/sql/relational-databases/performance/query-store-hints?view=azuresqldb-current&preserve-view=true) | Use query hints to optimize your query execution via the OPTION clause. | | [Reverse migrate from Hyperscale](manage-hyperscale-database.md#reverse-migrate-from-hyperscale) | Reverse migration to the General Purpose service tier allows customers who have recently migrated an existing database in Azure SQL Database to the Hyperscale service tier to move back in an emergency, should Hyperscale not meet their needs. While reverse migration is initiated by a service tier change, it's essentially a size-of-data move between different architectures. | | [SQL Analytics](../../azure-monitor/insights/azure-sql.md)|Azure SQL Analytics is an advanced cloud monitoring solution for monitoring performance of all of your Azure SQL databases at scale and across multiple subscriptions in a single view. Azure SQL Analytics collects and visualizes key performance metrics with built-in intelligence for performance troubleshooting.|
-| [SQL insights](../../azure-monitor/insights/sql-insights-overview.md) | SQL insights is a comprehensive solution for monitoring any product in the Azure SQL family. SQL insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance.|
+| [SQL Insights (preview)](../../azure-monitor/insights/sql-insights-overview.md) | SQL Insights (preview) is a comprehensive solution for monitoring any product in the Azure SQL family. SQL Insights (preview) uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance.|
| [Zone redundant configuration for Hyperscale databases](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview) | The zone redundant configuration feature utilizes [Azure Availability Zones](../../availability-zones/az-overview.md#availability-zones) to replicate databases across multiple physical locations within an Azure region. By selecting [zone redundancy](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview), you can make your Hyperscale databases resilient to a much larger set of failures, including catastrophic datacenter outages, without any changes to the application logic.| |||
Learn about significant changes to the Azure SQL Database documentation.
| | | | **GA for maintenance window** | The [maintenance window](maintenance-window.md) feature allows you to configure a maintenance schedule for your Azure SQL Database and receive advance notifications of maintenance windows. [Maintenance window advance notifications](../database/advance-notifications.md) are in public preview for databases configured to use a non-default [maintenance window](maintenance-window.md).| | **Hyperscale zone redundant configuration preview** | It's now possible to create new Hyperscale databases with zone redundancy to make your databases resilient to a much larger set of failures. This feature is currently in preview for the Hyperscale service tier. To learn more, see [Hyperscale zone redundancy](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview). |
-| **Hyperscale storage redundancy GA** | Choosing your storage redundancy for your databases in the Hyperscale service tier is now generally available. See [Configure backup storage redundancy](automated-backups-overview.md#configure-backup-storage-redundancy) to learn more.
+| **Hyperscale storage redundancy GA** | Choosing your storage redundancy for your databases in the Hyperscale service tier is now generally available. See [Configure backup storage redundancy](automated-backups-overview.md#configure-backup-storage-redundancy) to learn more. |
+ ### February 2022
Learn about significant changes to the Azure SQL Database documentation.
| **New Hyperscale articles** | We have reorganized some existing content into new articles and added new content for Hyperscale. Learn about [Hyperscale distributed functions architecture](hyperscale-architecture.md), [how to manage a Hyperscale database](manage-hyperscale-database.md), and how to [create a Hyperscale database](hyperscale-database-create-quickstart.md). | | **Free Azure SQL Database** | Try Azure SQL Database for free using the Azure free account. To learn more, review [Try SQL Database for free](free-sql-db-free-account-how-to-deploy.md).| - ### 2021 | Changes | Details |
Learn about significant changes to the Azure SQL Database documentation.
| **Maintenance window** | The maintenance window feature allows you to configure a maintenance schedule for your Azure SQL Database, currently in preview. To learn more, see [maintenance window](maintenance-window.md).| | **SQL insights** | SQL insights is a comprehensive solution for monitoring any product in the Azure SQL family. SQL insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance. To learn more, see [SQL insights](../../azure-monitor/insights/sql-insights-overview.md). | - ## Contribute to content To contribute to the Azure SQL documentation, see the [Docs contributor guide](/contribute/).
azure-sql Hyperscale Performance Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/hyperscale-performance-diagnostics.md
Data IO against remote page servers is not reported in resource utilization view
## Additional resources - For vCore resource limits for a Hyperscale single database see [Hyperscale service tier vCore Limits](resource-limits-vcore-single-databases.md#hyperscaleprovisioned-computegen5)-- For monitoring Azure SQL Databases, enable [Azure Monitor SQL insights](../../azure-monitor/insights/sql-insights-overview.md)
+- For monitoring Azure SQL Databases, enable [Azure Monitor SQL Insights (preview)](../../azure-monitor/insights/sql-insights-overview.md)
- For Azure SQL Database performance tuning, see [Query performance in Azure SQL Database](performance-guidance.md) - For performance tuning using Query Store, see [Performance monitoring using Query store](/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store/) - For DMV monitoring scripts, see [Monitoring performance Azure SQL Database using dynamic management views](monitoring-with-dmvs.md)
azure-sql Intelligent Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/intelligent-insights-overview.md
ms.devlang: --++ Previously updated : 10/18/2021 Last updated : 01/31/2022 # Intelligent Insights using AI to monitor and troubleshoot database performance (preview) [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)] Intelligent Insights in Azure SQL Database and Azure SQL Managed Instance lets you know what is happening with your database performance.
-Intelligent Insights uses built-in intelligence to continuously monitor database usage through artificial intelligence and detect disruptive events that cause poor performance. Once detected, a detailed analysis is performed that generates an Intelligent Insights resource log (called SQLInsights) with an intelligent assessment of the issue. This assessment consists of a root cause analysis of the database performance issue and, where possible, recommendations for performance improvements.
+Intelligent Insights uses built-in intelligence to continuously monitor database usage through artificial intelligence and detect disruptive events that cause poor performance. Once detected, a detailed analysis is performed that generates an Intelligent Insights resource log called SQLInsights (unrelated to [Azure Monitor SQL Insights (preview)](../../azure-sql/database/monitoring-sql-database-azure-monitor.md)) with an [intelligent assessment of the issues](intelligent-insights-troubleshoot-performance.md). This assessment consists of a root cause analysis of the database performance issue and, where possible, recommendations for performance improvements.
-## What can Intelligent Insights do for you
+## What can Intelligent Insights do for you?
Intelligent Insights is a unique capability of Azure built-in intelligence that provides the following value:
After a performance degradation issue is detected from multiple observed metrics
The metrics used to measure and detect database performance issues are based on query duration, timeout requests, excessive wait times, and errored requests. For more information on metrics, see [Detection metrics](#detection-metrics).
-Identified database performance degradations are recorded in the SQLInsights log with intelligent entries that consist of the following properties:
+Identified database performance degradations are recorded in the Intelligent Insights SQLInsights log with intelligent entries that consist of the following properties:
| Property | Details | | :- | - |
azure-sql Intelligent Insights Use Diagnostics Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/intelligent-insights-use-diagnostics-log.md
ms.devlang: --++ Previously updated : 06/12/2020 Last updated : 01/31/2022 # Use the Intelligent Insights performance diagnostics log of Azure SQL Database and Azure SQL Managed Instance performance issues
This page provides information on how to use the performance diagnostics log gen
## Log header
-The diagnostics log uses JSON standard format to output Intelligent Insights findings. The exact category property for accessing an Intelligent Insights log is the fixed value "SQLInsights".
+The diagnostics log uses JSON standard format to output Intelligent Insights findings. The exact category property for accessing an Intelligent Insights log is the fixed value "SQLInsights", unrelated to [Monitoring Azure SQL Database with Azure Monitor SQL Insights (preview)](../../azure-sql/database/monitoring-sql-database-azure-monitor.md).
The header of the log is common and consists of the time stamp (TimeGenerated) that shows when an entry was created. It also includes a resource ID (ResourceId) that refers to the particular database the entry relates to. The category (Category), level (Level), and operation name (OperationName) are fixed properties whose values do not change. They indicate that the log entry is informational and that it comes from Intelligent Insights (SQLInsights).
azure-sql Metrics Diagnostic Telemetry Logging Streaming Export Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/metrics-diagnostic-telemetry-logging-streaming-export-configure.md
-+ Last updated 3/10/2022
You will also learn about the destinations to which you can stream this diagnost
## Diagnostic telemetry for export
-Most important among the diagnostic telemetry that you can export is the Intelligent Insights (SQLInsights) log. [Intelligent Insights](intelligent-insights-overview.md) uses built-in intelligence to continuously monitor database usage through artificial intelligence and detect disruptive events that cause poor performance. Once detected, a detailed analysis is performed that generates an Intelligent Insights log with an intelligent assessment of the issue. This assessment consists of a root cause analysis of the database performance issue and, where possible, recommendations for performance improvements. You need to configure the streaming export of this log to view its contents.
+Most important among the diagnostic telemetry that you can export is the Intelligent Insights (SQLInsights) log (unrelated to [Azure Monitor SQL Insights (preview)](../../azure-sql/database/monitoring-sql-database-azure-monitor.md)). [Intelligent Insights](intelligent-insights-overview.md) uses built-in intelligence to continuously monitor database usage through artificial intelligence and detect disruptive events that cause poor performance. Once detected, a detailed analysis is performed that generates an Intelligent Insights log with an intelligent assessment of the issue. This assessment consists of a root cause analysis of the database performance issue and, where possible, recommendations for performance improvements. You need to configure the streaming export of this log to view its contents.
In addition to streaming the export of the Intelligent Insights log, you can also export a variety of performance metrics and additional database logs. The following table describes the performance metrics and resources logs that you can configure for streaming export to one of several destinations. This diagnostic telemetry can be configured for single databases, elastic pools and pooled databases, and managed instances and instance databases.
azure-sql Monitor Tune Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/monitor-tune-overview.md
ms.devlang: --++ Previously updated : 03/17/2021 Last updated : 04/14/2022 # Monitoring and performance tuning in Azure SQL Database and Azure SQL Managed Instance [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
-To monitor the performance of a database in Azure SQL Database and Azure SQL Managed Instance, start by monitoring the CPU and IO resources used by your workload relative to the level of database performance you chose in selecting a particular service tier and performance level. To accomplish this, Azure SQL Database and Azure SQL Managed Instance emit resource metrics that can be viewed in the Azure portal or by using one of these SQL Server management tools: [Azure Data Studio](/sql/azure-data-studio/what-is) or [SQL Server Management Studio](/sql/ssms/sql-server-management-studio-ssms) (SSMS).
+To monitor the performance of a database in Azure SQL Database and Azure SQL Managed Instance, start by monitoring the CPU and IO resources used by your workload relative to the level of database performance you chose in selecting a particular service tier and performance level. To accomplish this, Azure SQL Database and Azure SQL Managed Instance emit resource metrics that can be viewed in the Azure portal or by using one of these SQL Server management tools:
+ - [Azure Data Studio](/sql/azure-data-studio/what-is), based on [Visual Studio Code](https://code.visualstudio.com/).
+ - [SQL Server Management Studio](/sql/ssms/sql-server-management-studio-ssms) (SSMS), based on [Microsoft Visual Studio](https://visualstudio.microsoft.com/downloads/).
-Azure SQL Database provides a number of Database Advisors to provide intelligent performance tuning recommendations and automatic tuning options to improve performance. Additionally, Query Performance Insight shows you details about the queries responsible for the most CPU and IO usage for single and pooled databases.
+| Monitoring solution | SQL Database | SQL Managed Instance | Requires agent on a customer-owned VM |
+|:--|:--|:--|:--|
+| [Query Performance Insight](query-performance-insight-use.md) | **Yes** | No | No |
+| [Monitor using DMVs](monitoring-with-dmvs.md) | **Yes** | **Yes** | No |
+| [Monitor using query store](/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store) | **Yes** | **Yes** | No |
+| [SQL Insights (preview)](../../azure-monitor/insights/sql-insights-overview.md) in [Azure Monitor](../../azure-monitor/essentials/monitor-azure-resource.md) | **Yes** | **Yes** | **Yes** |
+| [Azure SQL Analytics (preview)](../../azure-monitor/insights/azure-sql.md) using [Azure Monitor Logs](../../azure-monitor/logs/data-platform-logs.md) \* | **Yes** | **Yes** | No |
-Azure SQL Database and Azure SQL Managed Instance provide advanced monitoring and tuning capabilities backed by artificial intelligence to assist you in troubleshooting and maximizing the performance of your databases and solutions. You can choose to configure the [streaming export](metrics-diagnostic-telemetry-logging-streaming-export-configure.md) of these [Intelligent Insights](intelligent-insights-overview.md) and other database resource logs and metrics to one of several destinations for consumption and analysis, particularly using [SQL Analytics](../../azure-monitor/insights/azure-sql.md). Azure SQL Analytics is an advanced cloud monitoring solution for monitoring performance of all of your databases at scale and across multiple subscriptions in a single view. For a list of the logs and metrics that you can export, see [diagnostic telemetry for export](metrics-diagnostic-telemetry-logging-streaming-export-configure.md#diagnostic-telemetry-for-export)
+\* For solutions requiring low latency monitoring, Azure SQL Analytics (preview) is not recommended.
-SQL Server has its own monitoring and diagnostic capabilities that SQL Database and SQL Managed Instance leverage, such as [query store](/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store) and [dynamic management views (DMVs)](/sql/relational-databases/system-dynamic-management-views/system-dynamic-management-views). See [Monitoring using DMVs](monitoring-with-dmvs.md) for scripts to monitor for a variety of performance issues.
+## Database advisors in the Azure portal
-## Monitoring and tuning capabilities in the Azure portal
+Azure SQL Database provides a number of Database Advisors to provide intelligent performance tuning recommendations and automatic tuning options to improve performance.
+
+Additionally, the [Query Performance Insight](query-performance-insight-use.md) page shows you details about the queries responsible for the most CPU and IO usage for single and pooled databases.
+
+ - Query Performance Insight is available in the Azure portal in the Overview pane of your Azure SQL Database under "Intelligent Performance". Use the automatically collected information to identify queries and begin optimizing your workload performance.
+ - You can also configure [automatic tuning](automatic-tuning-overview.md) to implement these recommendations automatically, such as forcing a query execution plan to prevent regression, or creating and dropping nonclustered indexes based on workload patterns. Automatic tuning is also available in the Azure portal in the Overview pane of your Azure SQL Database under "Intelligent Performance".
+
+Azure SQL Database and Azure SQL Managed Instance provide advanced monitoring and tuning capabilities backed by artificial intelligence to assist you in troubleshooting and maximizing the performance of your databases and solutions. You can choose to configure the [streaming export](metrics-diagnostic-telemetry-logging-streaming-export-configure.md#diagnostic-telemetry-for-export) of these [Intelligent Insights](intelligent-insights-overview.md) and other database resource logs and metrics to one of several destinations for consumption and analysis.
+
+Outside of the Azure portal, the database engine has its own monitoring and diagnostic capabilities that Azure SQL Database and SQL Managed Instance leverage, such as [query store](/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store) and [dynamic management views (DMVs)](/sql/relational-databases/system-dynamic-management-views/system-dynamic-management-views). See [Monitoring using DMVs](monitoring-with-dmvs.md) for scripts to monitor for a variety of performance issues in Azure SQL Database and Azure SQL Managed Instance.
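As one illustration of the DMV approach referenced above, a query like the following (a sketch using documented DMVs) surfaces currently executing requests along with their wait information and statement text:

```sql
-- Currently executing requests with wait info, joined to
-- sys.dm_exec_sql_text to recover the statement text.
SELECT r.session_id,
       r.status,
       r.wait_type,
       r.wait_time,
       r.cpu_time,
       SUBSTRING(t.text, (r.statement_start_offset / 2) + 1, 200) AS statement_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID   -- exclude this monitoring session
ORDER BY r.cpu_time DESC;
```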
+
+### Azure SQL Insights (preview) and Azure SQL Analytics (preview)
+
+Both offerings use different pipelines to present incoming Azure SQL Database metrics to a variety of endpoints.
+
+- [Azure SQL Insights (preview)](../../azure-monitor/insights/sql-insights-overview.md) is a project inside Azure Monitor that can provide advanced insights into Azure SQL database activity. It is deployed via a customer-managed VM using Telegraf as a collection agent that connects to SQL sources, collects data, and moves data into Log Analytics.
+
+- [Azure SQL Analytics (preview)](../../azure-monitor/insights/azure-sql.md) also requires Log Analytics to provide advanced insights into Azure SQL database activity.
+
+- Azure diagnostic telemetry is a separate, streaming source of data for Azure SQL Database and Azure SQL Managed Instance. Not to be confused with the Azure SQL Insights (preview) product, SQLInsights is a log inside Intelligent Insights, and is one of several packages of telemetry emitted by Azure diagnostic settings. Diagnostic settings are a feature that contains Resource Log categories (formerly known as Diagnostic Logs). For more information, see [Diagnostic telemetry for export](metrics-diagnostic-telemetry-logging-streaming-export-configure.md?tabs=azure-portal#diagnostic-telemetry-for-export).
+ - Azure SQL Analytics (preview) consumes the resource logs coming from the diagnostic telemetry (configurable under **Diagnostic Settings** in the Azure portal), while Azure SQL Insights (preview) uses a different pipeline to collect Azure SQL telemetry.
+
+### Monitoring and diagnostic telemetry
+
+The following diagram details all the database engine, platform metrics, resource logs, and Azure activity logs generated by Azure SQL products, how they are processed, and how they can be surfaced for analysis.
+## Monitor and tune Azure SQL in the Azure portal
In the Azure portal, Azure SQL Database and Azure SQL Managed Instance provide monitoring of resource metrics. Azure SQL Database provides database advisors, and Query Performance Insight provides query tuning recommendations and query performance analysis. In the Azure portal, you can enable automatic tuning for [logical SQL servers](logical-servers.md) and their single and pooled databases. > [!NOTE] > Databases with extremely low usage may show in the portal with less than actual usage. Due to the way telemetry is emitted when converting a double value to the nearest integer certain usage amounts less than 0.5 will be rounded to 0 which causes a loss in granularity of the emitted telemetry. For details, see [Low database and elastic pool metrics rounding to zero](#low-database-and-elastic-pool-metrics-rounding-to-zero).
-### Monitor with SQL insights
-
-[Azure Monitor SQL insights](../../azure-monitor/insights/sql-insights-overview.md) is a tool for monitoring Azure SQL managed instances, Azure SQL databases, and SQL Server instances in Azure SQL VMs. This service uses a remote agent to capture data from dynamic management views (DMVs) and routes the data to Azure Log Analytics, where it can be monitored and analyzed. You can view this data from [Azure Monitor](../../azure-monitor/overview.md) in provided views, or access the Log data directly to run queries and analyze trends. To start using Azure Monitor SQL insights, see [Enable SQL insights](../../azure-monitor/insights/sql-insights-enable.md).
- ### Azure SQL Database and Azure SQL Managed Instance resource monitoring You can quickly monitor a variety of resource metrics in the Azure portal in the **Metrics** view. These metrics enable you to see if a database is reaching 100% of processor, memory, or IO resources. High DTU or processor percentage, as well as high IO percentage, indicates that your workload might need more CPU or IO resources. It might also indicate queries that need to be optimized.
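The same utilization the portal's **Metrics** view surfaces can also be sampled from within the database. For example, in Azure SQL Database, `sys.dm_db_resource_stats` retains roughly an hour of utilization snapshots, one row per 15-second interval:

```sql
-- Recent CPU, data IO, log write, and memory utilization, expressed as a
-- percentage of the limits of the database's service tier.
SELECT end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent,
       avg_memory_usage_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
```

Sustained values near 100% in any column suggest the workload is hitting the resource limits of the chosen service tier, or that queries need optimization.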
Affected elastic pool metrics:
## Generate intelligent assessments of performance issues
-[Intelligent Insights](intelligent-insights-overview.md) for Azure SQL Database and Azure SQL Managed Instance uses built-in intelligence to continuously monitor database usage through artificial intelligence and detect disruptive events that cause poor performance. Intelligent Insights automatically detects performance issues with databases based on query execution wait times, errors, or time-outs. Once detected, a detailed analysis is performed that generates a resource log (called SQLInsights) with an [intelligent assessment of the issues](intelligent-insights-troubleshoot-performance.md). This assessment consists of a root cause analysis of the database performance issue and, where possible, recommendations for performance improvements.
+[Intelligent Insights](intelligent-insights-overview.md) for Azure SQL Database and Azure SQL Managed Instance uses built-in intelligence to continuously monitor database usage through artificial intelligence and detect disruptive events that cause poor performance. Intelligent Insights automatically detects performance issues with databases based on query execution wait times, errors, or time-outs. Once detected, a detailed analysis is performed by Intelligent Insights that generates a resource log called SQLInsights (unrelated to the [Azure Monitor SQL Insights (preview)](../../azure-sql/database/monitoring-sql-database-azure-monitor.md)). SQLInsights is an [intelligent assessment of the issues](intelligent-insights-troubleshoot-performance.md). This assessment consists of a root cause analysis of the database performance issue and, where possible, recommendations for performance improvements.
Intelligent Insights is a unique capability of Azure built-in intelligence that provides the following value:
Intelligent Insights is a unique capability of Azure built-in intelligence that
## Enable the streaming export of metrics and resource logs
-You can enable and configure the [streaming export of diagnostic telemetry](metrics-diagnostic-telemetry-logging-streaming-export-configure.md) to one of several destinations, including the Intelligent Insights resource log. Use [SQL Analytics](../../azure-monitor/insights/azure-sql.md) and other capabilities to consume this additional diagnostic telemetry to identify and resolve performance problems.
+You can enable and configure the [streaming export of diagnostic telemetry](metrics-diagnostic-telemetry-logging-streaming-export-configure.md#diagnostic-telemetry-for-export) to one of several destinations, including the Intelligent Insights resource log.
You configure diagnostic settings to stream categories of metrics and resource logs for single databases, pooled databases, elastic pools, managed instances, and instance databases to one of the following Azure resources. ### Log Analytics workspace in Azure Monitor
-You can stream metrics and resource logs to a [Log Analytics workspace in Azure Monitor](../../azure-monitor/essentials/resource-logs.md#send-to-log-analytics-workspace). Data streamed here can be consumed by [SQL Analytics](../../azure-monitor/insights/azure-sql.md), which is a cloud only monitoring solution that provides intelligent monitoring of your databases that includes performance reports, alerts, and mitigation recommendations. Data streamed to a Log Analytics workspace can be analyzed with other monitoring data collected and also enables you to leverage other Azure Monitor features such as alerts and visualizations.
+You can stream metrics and resource logs to a [Log Analytics workspace in Azure Monitor](../../azure-monitor/essentials/resource-logs.md#send-to-log-analytics-workspace). Data streamed here can be consumed by [SQL Analytics (preview)](../../azure-monitor/insights/azure-sql.md), which is a cloud only monitoring solution that provides intelligent monitoring of your databases that includes performance reports, alerts, and mitigation recommendations. Data streamed to a Log Analytics workspace can be analyzed with other monitoring data collected and also enables you to leverage other Azure Monitor features such as alerts and visualizations.
+
+> [!NOTE]
+> Azure SQL Analytics (preview) is an integration with Azure Monitor, where many monitoring solutions are no longer in active development. [Monitor your SQL deployments with SQL Insights (preview)](../../azure-monitor/insights/sql-insights-overview.md).
### Azure Event Hubs
You can stream metrics and resource logs to [Azure Event Hubs](../../azure-monit
Stream metrics and resource logs to [Azure Storage](../../azure-monitor/essentials/resource-logs.md#send-to-azure-storage). Use Azure storage to archive vast amounts of diagnostic telemetry for a fraction of the cost of the previous two streaming options.
-## Use extended events
+## Use Extended Events
-Additionally, you can use [extended events](/sql/relational-databases/extended-events/extended-events) in SQL Server for advanced monitoring and troubleshooting. The extended events architecture enables users to collect as much or as little data as is necessary to troubleshoot or identify a performance problem. For information about using extended events in Azure SQL Database, see [Extended events in Azure SQL Database](xevent-db-diff-from-svr.md).
+Additionally, you can use [Extended Events](/sql/relational-databases/extended-events/extended-events) for advanced monitoring and troubleshooting in SQL Server, Azure SQL Database, and Azure SQL Managed Instance. Extended Events is a "tracing" tool and event architecture, superior to SQL Trace, that enables users to collect as much or as little data as is necessary to troubleshoot or identify a performance problem, while minimizing the impact on ongoing application performance. Extended Events replaces the deprecated SQL Trace and SQL Server Profiler features. For information about using extended events in Azure SQL Database, see [Extended events in Azure SQL Database](xevent-db-diff-from-svr.md). In Azure SQL Database and SQL Managed Instance, use an [Event File target hosted in Azure Blob Storage](xevent-code-event-file.md).
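A database-scoped event session with an event_file target can be sketched as follows. The storage URL below is a hypothetical placeholder, and writing to it requires a database scoped credential for the container, as described in the linked Event File article:

```sql
-- Sketch: capture completed batches slower than 1 second to an
-- event_file target in Azure Blob Storage (URL is a placeholder).
CREATE EVENT SESSION [capture_slow_batches] ON DATABASE
ADD EVENT sqlserver.sql_batch_completed
    (ACTION (sqlserver.sql_text)
     WHERE duration > 1000000)   -- duration is in microseconds
ADD TARGET package0.event_file
    (SET filename = 'https://mystorage.blob.core.windows.net/xevents/capture_slow_batches.xel');

ALTER EVENT SESSION [capture_slow_batches] ON DATABASE STATE = START;
```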
## Next steps - For more information about intelligent performance recommendations for single and pooled databases, see [Database advisor performance recommendations](database-advisor-implement-performance-recommendations.md).-- For more information about automatically monitoring database performance with automated diagnostics and root cause analysis of performance issues, see [Azure SQL Intelligent Insights](intelligent-insights-overview.md).
+- For more information about automatically monitoring database performance with automated diagnostics and root cause analysis of performance issues, see [Azure SQL Intelligent Insights](intelligent-insights-overview.md).
+- [Monitor your SQL deployments with SQL Insights (preview)](../../azure-monitor/insights/sql-insights-overview.md)
azure-sql Monitoring Sql Database Azure Monitor Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/monitoring-sql-database-azure-monitor-reference.md
+
+ Title: Monitoring Azure SQL Database with Azure Monitor reference
+description: Important reference material needed when you monitor Azure SQL Database with Azure Monitor
+++++++ Last updated : 03/14/2022++
+# Monitoring Azure SQL Database data reference
+
+This article contains reference for monitoring Azure SQL Database with Azure Monitor. See [Monitoring Azure SQL Database](monitoring-sql-database-azure-monitor.md) for details on collecting and analyzing monitoring data for Azure SQL Database with Azure Monitor SQL Insights (preview).
+
+## Metrics
+
+For more on using Azure Monitor SQL Insights (preview) for all products in the [Azure SQL family](../../azure-sql/index.yml), see [Monitor your SQL deployments with SQL Insights (preview)](../../azure-monitor/insights/sql-insights-overview.md).
+
+For data specific to Azure SQL Database, see [Data for Azure SQL Database](../../azure-monitor/insights/sql-insights-overview.md#data-for-azure-sql-database).
+
+For a complete list of metrics, see:
+- [Microsoft.Sql/servers/databases](../../azure-monitor/essentials/metrics-supported.md#microsoftsqlserversdatabases)
+- [Microsoft.Sql/managedInstances](../../azure-monitor/essentials/metrics-supported.md#microsoftsqlmanagedinstances)
+- [Microsoft.Sql/servers/elasticPools](../../azure-monitor/essentials/metrics-supported.md#microsoftsqlserverselasticpools)
+
+## Resource logs
+
+This section lists the types of resource logs you can collect for Azure SQL Database.
+
+For reference, see a list of [all resource logs category types supported in Azure Monitor](../../azure-monitor/essentials/resource-logs-schema.md).
+
+For a reference of resource log types collected for Azure SQL Database, see [Streaming export of Azure SQL Database Diagnostic telemetry for export](metrics-diagnostic-telemetry-logging-streaming-export-configure.md#diagnostic-telemetry-for-export).
+
+## Azure Monitor Logs tables
+
+This section lists the Azure Monitor Logs tables relevant to Azure SQL Database, which are available for query by Log Analytics using Kusto Query Language (KQL).
+
+Tables for all resource types are referenced here, for example, [Azure Monitor tables for SQL Databases](/azure/azure-monitor/reference/tables/tables-resourcetype.md#sql-databases).
+
+|Table | Notes |
+|-|--|
+| [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity.md) | Entries from the Azure Activity log that provide insight into any subscription-level or management-group-level events that have occurred in Azure. |
+| [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics.md) | Azure Diagnostics reveals diagnostic data of specific resources and features for numerous Azure products including SQL databases, SQL elastic pools, and SQL managed instances. For more information, see [Diagnostics metrics]( metrics-diagnostic-telemetry-logging-streaming-export-configure.md?tabs=azure-portal#basic-metrics).|
+| [AzureMetrics](/azure/azure-monitor/reference/tables/azuremetrics.md) | Metric data emitted by Azure services that measure their health and performance. Activity from Azure products including SQL databases, SQL elastic pools, and SQL managed instances.|
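For example, a quick Log Analytics query over the `AzureActivity` table (a sketch; adjust the filter to your own resources) can summarize recent management operations against SQL resources:

```Kusto
AzureActivity
| where TimeGenerated > ago(1d)
| where ResourceProvider =~ "Microsoft.Sql"
| summarize Operations = count() by OperationNameValue, ActivityStatusValue
```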
+
+## Activity log
+
+The Activity log contains records of management operations performed on your Azure SQL Database resources. Maintenance operations related to Azure SQL Database may also appear in the Activity log.
+
+For more information on the schema of Activity Log entries, see [Activity Log schema](/azure/azure-monitor/essentials/activity-log-schema).
+
+## Next steps
+
+- See [Monitoring Azure SQL Database with Azure Monitor](monitoring-sql-database-azure-monitor.md) for a description of monitoring Azure SQL Database.
+- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/insights/monitor-azure-resources) for details on monitoring Azure resources.
azure-sql Monitoring Sql Database Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/monitoring-sql-database-azure-monitor.md
+
+ Title: Monitoring Azure SQL Database with Azure Monitor
+description: Start here to learn how to monitor Azure SQL Database with Azure Monitor
+ Last updated : 12/07/2021
+# Monitor Azure SQL Database with Azure Monitor
+
+When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation.
+
+This article describes the monitoring data generated by Azure SQL Database. Azure SQL Database can be monitored by [Azure Monitor](../../azure-monitor/overview.md). If you are unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](../../azure-monitor/essentials/monitor-azure-resource.md).
+
+## Monitoring overview page in Azure portal
+
+View your Azure Monitor metrics for all connected resources by going to the Azure Monitor page directly in the Azure portal. Or, on the **Overview** page of an Azure SQL database, select **Metrics** under the **Monitoring** heading to reach Azure Monitor.
+
+## Azure Monitor SQL Insights (preview)
+
+Some services in Azure have a focused, pre-built monitoring dashboard in the Azure portal that can be enabled to provide a starting point for monitoring your service. These special dashboards are called "insights" and are not enabled by default. For more on using Azure Monitor SQL Insights for all products in the [Azure SQL family](../../azure-sql/index.yml), see [Monitor your SQL deployments with SQL Insights (preview)](../../azure-monitor/insights/sql-insights-overview.md).
+
+After creating a monitoring profile, you can configure your Azure Monitor SQL Insights for SQL-specific metrics for Azure SQL Database, SQL Managed Instance, and Azure VMs running SQL Server.
+
+> [!NOTE]
+> Azure SQL Analytics (preview) is an integration with Azure Monitor, where many monitoring solutions are no longer in active development. For more monitoring options, see [Monitoring and performance tuning in Azure SQL Database and Azure SQL Managed Instance](../../azure-sql/database/monitor-tune-overview.md).
+
+## Monitoring data
+
+Azure SQL Database collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](../../azure-monitor/essentials/monitor-azure-resource.md).
+
+See [Monitoring Azure SQL Database with Azure Monitor reference](monitoring-sql-database-azure-monitor-reference.md) for detailed information on the metrics and logs created by Azure SQL Database.
+
+## Collection and routing
+
+Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
+
+Resource Logs are not collected and stored until you create a diagnostic setting and route them to one or more locations. Resource logs were previously referred to as diagnostic logs.
+
+Diagnostic settings available include:
+
+- **log**: SQLInsights, AutomaticTuning, QueryStoreRuntimeStatistics, QueryStoreWaitStatistics, Errors, DatabaseWaitStatistics, Timeouts, Blocks, Deadlocks
+- **metric**: All Azure Monitor metrics in the **Basic** and **InstanceAndAppAdvanced** categories
+- **destination details**: Send to Log Analytics workspace, Archive to a storage account, Stream to an event hub, Send to partner solution
+ - For more information on these options, see [Create diagnostic settings in Azure portal](../../azure-monitor/essentials/diagnostic-settings.md#create-in-azure-portal).
+
+For more information on the resource logs and diagnostics available, see [Diagnostic telemetry for export](metrics-diagnostic-telemetry-logging-streaming-export-configure.md?tabs=azure-portal#diagnostic-telemetry-for-export).
+
+See [Create diagnostic setting to collect platform logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for Azure SQL Database are listed in [Azure SQL Database monitoring data reference](monitoring-sql-database-azure-monitor-reference.md#resource-logs).
+
+## Analyzing metrics
+
+You can analyze metrics for Azure SQL Database with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](../../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool.
+
+For a list of the platform metrics collected for Azure SQL Database, see [Monitoring Azure SQL Database data reference metrics](monitoring-sql-database-azure-monitor-reference.md#metrics).
+
+For reference, you can see a list of [all resource metrics supported in Azure Monitor](../../azure-monitor/essentials/metrics-supported.md).
+
+## Analyzing logs
+
+Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties. This data is optionally collected via Diagnostic settings.
+
+All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../../azure-monitor/essentials/resource-logs-schema.md).
+
+The [Activity log](../../azure-monitor/essentials/activity-log.md) is a type of platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
+
+For a list of the types of resource logs collected for Azure SQL Database, see [Monitoring Azure SQL Database data reference](monitoring-sql-database-azure-monitor-reference.md#resource-logs).
+
+For a list of the tables used by Azure Monitor Logs and queryable by Log Analytics, see [Monitoring Azure SQL Database data reference](monitoring-sql-database-azure-monitor-reference.md#azure-monitor-logs-tables).
+
+### Sample Kusto queries
+
+> [!IMPORTANT]
+> When you select **Logs** from the Monitoring menu of an Azure SQL database, Log Analytics is opened with the query scope set to the current database. This means that log queries will only include data from that resource. If you want to run a query that includes data from other databases or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](../../azure-monitor/logs/scope.md) for details.
+
+Following are queries that you can use to help you monitor your database. You may see different options available depending on your purchase model.
+
+Example A: **Log_write_percent** from the past hour
+
+```Kusto
+AzureMetrics
+| where ResourceProvider == "MICROSOFT.SQL"
+| where TimeGenerated >= ago(60m)
+| where MetricName in ('log_write_percent')
+| parse _ResourceId with * "/microsoft.sql/servers/" Resource
+| summarize Log_Maximum_last60mins = max(Maximum), Log_Minimum_last60mins = min(Minimum), Log_Average_last60mins = avg(Average) by Resource, MetricName
+```
+
+Example B: **SQL Server wait types** from the past 15 minutes
+
+```Kusto
+AzureDiagnostics
+| where ResourceProvider == "MICROSOFT.SQL"
+| where TimeGenerated >= ago(15m)
+| parse _ResourceId with * "/microsoft.sql/servers/" LogicalServerName "/databases/" DatabaseName
+| summarize Total_count_15mins = sum(delta_waiting_tasks_count_d) by LogicalServerName, DatabaseName, wait_type_s
+```
+
+Example C: **SQL Server deadlocks** from the past 60 minutes
+
+```Kusto
+AzureMetrics
+| where ResourceProvider == "MICROSOFT.SQL"
+| where TimeGenerated >= ago(60m)
+| where MetricName in ('deadlock')
+| parse _ResourceId with * "/microsoft.sql/servers/" Resource
+| summarize Deadlock_max_60Mins = max(Maximum) by Resource, MetricName
+```
+
+Example D: **Avg CPU usage** from the past hour
+
+```Kusto
+AzureMetrics
+| where ResourceProvider == "MICROSOFT.SQL"
+| where TimeGenerated >= ago(60m)
+| where MetricName in ('cpu_percent')
+| parse _ResourceId with * "/microsoft.sql/servers/" Resource
+| summarize CPU_Maximum_last60mins = max(Maximum), CPU_Minimum_last60mins = min(Minimum), CPU_Average_last60mins = avg(Average) by Resource, MetricName
+```
+
+## Alerts
+
+Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data, helping you identify and address issues in your system before your customers notice them. Because the underlying platform metrics are always collected, metric alerts are available as soon as the resource exists. You can set alerts on [metrics](../../azure-monitor/alerts/alerts-metric-overview.md), [logs](../../azure-monitor/alerts/alerts-unified-log.md), and the [activity log](../../azure-monitor/alerts/activity-log-alerts.md).
+
+If you are creating or running an application in Azure, [Azure Monitor Application Insights](../../azure-monitor/overview.md#application-insights) may offer additional types of alerts.
+
+The following table lists common and recommended alert rules for Azure SQL Database. You may see different options available depending on your purchase model.
+
+| Signal name | Operator | Aggregation type | Threshold value | Description |
+|:|:|:|:|:|
+| DTU Percentage | Greater than | Average | 80 | Whenever the average DTU percentage is greater than 80% |
+| Log IO percentage | Greater than | Average | 80 | Whenever the average log io percentage is greater than 80% |
+| Deadlocks\* | Greater than | Count | 1 | Whenever the count of deadlocks is greater than 1. |
+| CPU percentage | Greater than | Average | 80 | Whenever the average cpu percentage is greater than 80% |
+
+\* Alerting on deadlocks may be unnecessary and noisy in some applications where deadlocks are expected and properly handled.
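Before creating an alert rule such as the DTU row above, you can prototype its condition in Log Analytics (a sketch that assumes the **Basic** metric category is routed to your workspace; the DTU metric applies to the DTU purchase model only):

```Kusto
AzureMetrics
| where ResourceProvider == "MICROSOFT.SQL"
| where MetricName == "dtu_consumption_percent"
| where TimeGenerated >= ago(5m)
| summarize DTU_Average_last5mins = avg(Average) by Resource = _ResourceId
| where DTU_Average_last5mins > 80
```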
+
+## Next steps
+
+- See [Monitoring Azure SQL Database data reference](monitoring-sql-database-azure-monitor-reference.md) for a reference of the metrics, logs, and other important values created by Azure SQL Database.
+- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/insights/monitor-azure-resource) for details on monitoring Azure resources.
azure-sql Monitoring With Dmvs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/monitoring-with-dmvs.md
Previously updated : 03/15/2021 Last updated : 04/11/2022 # Monitoring Microsoft Azure SQL Database and Azure SQL Managed Instance performance using dynamic management views [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
Microsoft Azure SQL Database and Azure SQL Managed Instance partially support th
For detailed information on dynamic management views, see [Dynamic Management Views and Functions (Transact-SQL)](/sql/relational-databases/system-dynamic-management-views/system-dynamic-management-views).
-## Monitor with SQL insights
+## Monitor with SQL Insights (preview)
+
+[Azure Monitor SQL Insights (preview)](../../azure-monitor/insights/sql-insights-overview.md) is a tool for monitoring Azure SQL managed instances, databases in Azure SQL Database, and SQL Server instances in Azure SQL VMs. This service uses a remote agent to capture data from dynamic management views (DMVs) and routes the data to Azure Log Analytics, where it can be monitored and analyzed. You can view this data from [Azure Monitor](../../azure-monitor/overview.md) in provided views, or access the Log data directly to run queries and analyze trends. To start using Azure Monitor SQL Insights (preview), see [Enable SQL Insights (preview)](../../azure-monitor/insights/sql-insights-enable.md).
-[Azure Monitor SQL insights](../../azure-monitor/insights/sql-insights-overview.md) is a tool for monitoring managed instances, databases in Azure SQL Database, and SQL Server instances in Azure SQL VMs. This service uses a remote agent to capture data from dynamic management views (DMVs) and routes the data to Azure Log Analytics, where it can be monitored and analyzed. You can view this data from [Azure Monitor](../../azure-monitor/overview.md) in provided views, or access the Log data directly to run queries and analyze trends. To start using Azure Monitor SQL insights, see [Enable SQL insights](../../azure-monitor/insights/sql-insights-enable.md).
## Permissions
azure-sql Doc Changes Updates Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/doc-changes-updates-known-issues.md
If an instance participates in an [auto-failover group](../database/auto-failove
### Procedure sp_send_dbmail may transiently fail when @query parameter is used
-Procedure `sp_send_dbmail` may transiently fail when `@query` parameter is used. When this issue occurs, every second execution of procedure sp_send_dbmail fails with error `Msg 22050, Level 16, State 1` and message `Failed to initialize sqlcmd library with error number -2147467259`. To be able to see this error properly, the procedure should be called with default value 0 for the parameter `@exclude_query_output`, otherwise the error will not be propagated.
+Procedure `sp_send_dbmail` may transiently fail when the `@query` parameter is used. When this issue occurs, every second execution of procedure `sp_send_dbmail` fails with error `Msg 22050, Level 16, State 1` and message `Failed to initialize sqlcmd library with error number -2147467259`. To see this error, call the procedure with the default value 0 for the parameter `@exclude_query_output`; otherwise, the error will not be propagated.
+
+This problem is caused by a known bug related to how `sp_send_dbmail` uses impersonation and connection pooling.
+
+To work around this issue, wrap the code that sends email in retry logic that relies on the output parameter `@mailitem_id`. If the execution fails, the parameter value will be NULL, indicating that `sp_send_dbmail` should be called one more time to successfully send the email. Here is an example of this retry logic.

```sql
azure-sql Doc Changes Updates Release Notes Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/doc-changes-updates-release-notes-whats-new.md
The following table lists the features of Azure SQL Managed Instance that are cu
| [Premium-series hardware](resource-limits.md#service-tier-characteristics) | Deploy your SQL Managed Instance to the new premium-series hardware to take advantage of the latest Intel Ice Lake CPUs. |
| [Query Store hints](/sql/relational-databases/performance/query-store-hints?view=azuresqldb-mi-current&preserve-view=true) | Use query hints to optimize your query execution via the OPTION clause. |
| [Service Broker cross-instance message exchange](/sql/database-engine/configure-windows/sql-server-service-broker) | Support for cross-instance message exchange using Service Broker on Azure SQL Managed Instance. |
-| [SQL insights](../../azure-monitor/insights/sql-insights-overview.md) | SQL insights is a comprehensive solution for monitoring any product in the Azure SQL family. SQL insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance. |
+| [SQL Insights (preview)](../../azure-monitor/insights/sql-insights-overview.md) | SQL Insights (preview) is a comprehensive solution for monitoring any product in the Azure SQL family. SQL Insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance. |
| [Transactional Replication](replication-transactional-overview.md) | Replicate the changes from your tables into other databases in SQL Managed Instance, SQL Database, or SQL Server. Or update your tables when some rows are changed in other instances of SQL Managed Instance or SQL Server. For information, see [Configure replication in Azure SQL Managed Instance](replication-between-two-instances-configure-tutorial.md). |
| [Threat detection](threat-detection-configure.md) | Threat detection notifies you of security threats detected to your database. |
| [Windows Auth for Azure Active Directory principals](winauth-azuread-overview.md) | Kerberos authentication for Azure Active Directory (Azure AD) enables Windows Authentication access to Azure SQL Managed Instance. |
Learn about significant changes to the Azure SQL Managed Instance documentation.
| **Machine Learning Services GA** | The Machine Learning Services for Azure SQL Managed Instance are now generally available (GA). To learn more, see [Machine Learning Services for SQL Managed Instance](machine-learning-services-overview.md). |
| **Maintenance window** | The maintenance window feature allows you to configure a maintenance schedule for your Azure SQL Managed Instance. To learn more, see [maintenance window](../database/maintenance-window.md). |
| **Service Broker message exchange** | The Service Broker component of Azure SQL Managed Instance allows you to compose your applications from independent, self-contained services, by providing native support for reliable and secure message exchange between the databases attached to the service. Currently in preview. To learn more, see [Service Broker](/sql/database-engine/configure-windows/sql-server-service-broker). |
-| **SQL insights** | SQL insights is a comprehensive solution for monitoring any product in the Azure SQL family. SQL insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance. To learn more, see [SQL insights](../../azure-monitor/insights/sql-insights-overview.md). |
-
+| **SQL Insights (preview)** | SQL Insights (preview) is a comprehensive solution for monitoring any product in the Azure SQL family. SQL Insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance. To learn more, see [Azure Monitor SQL Insights (preview)](../../azure-monitor/insights/sql-insights-overview.md). |
### 2020
The following changes were added to SQL Managed Instance and the documentation i
| **Major performance improvements** | Introducing improvements to SQL Managed Instance performance, including improved transaction log write throughput, improved data and log IOPS for business critical instances, and improved TempDB performance. See the [improved performance](https://techcommunity.microsoft.com/t5/azure-sql/announcing-major-performance-improvements-for-azure-sql-database/ba-p/1701256) tech community blog to learn more. |
| **Enhanced management experience** | Using the new [OPERATIONS API](/rest/api/sql/2021-02-01-preview/managed-instance-operations), it's now possible to check the progress of long-running instance operations. To learn more, see [Management operations](management-operations-overview.md?tabs=azure-portal). |
| **Machine learning support** | Machine Learning Services with support for R and Python languages now include preview support on Azure SQL Managed Instance (Preview). To learn more, see [Machine learning with SQL Managed Instance](machine-learning-services-overview.md). |
-| **User-initiated failover** | User-initiated failover is now generally available, providing you with the capability to manually initiate an automatic failover using PowerShell, CLI commands, and API calls, improving application resiliency. To learn more, see, [testing resiliency](../database/high-availability-sla.md#testing-application-fault-resiliency).
--
+| **User-initiated failover** | User-initiated failover is now generally available, providing you with the capability to manually initiate an automatic failover using PowerShell, CLI commands, and API calls, improving application resiliency. To learn more, see, [testing resiliency](../database/high-availability-sla.md#testing-application-fault-resiliency). |
## Known issues
azure-sql Job Automation Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/job-automation-managed-instance.md
Previously updated : 02/23/2022 Last updated : 04/19/2022 # Automate management tasks using SQL Agent jobs in Azure SQL Managed Instance [!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
GO
RECONFIGURE
```
-As an example exercise, set up the email account that will be used to send the email notifications. Assign the account to the email profile called `AzureManagedInstance_dbmail_profile`. To send e-mail using SQL Agent jobs in SQL Managed Instance, there should be a profile that must be called `AzureManagedInstance_dbmail_profile`. Otherwise, SQL Managed Instance will be unable to send emails via SQL Agent. See the following sample:
+As an example exercise, set up the email account that will be used to send email notifications, and assign the account to the email profile called `AzureManagedInstance_dbmail_profile`. To send email using SQL Agent jobs in SQL Managed Instance, there must be a profile named `AzureManagedInstance_dbmail_profile`; otherwise, SQL Managed Instance will be unable to send emails via SQL Agent.
+
+> [!NOTE]
+> For the mail server, we recommend you use authenticated SMTP relay services to send email. These relay services typically connect through TCP ports 25 or 587 for connections over TLS, or port 465 for SSL connections, however Database Mail can be configured to use any port. These ports require a new outbound rule in your managed instance's network security group. These services are used to maintain IP and domain reputation to minimize the possibility that external domains reject your messages or put them to the SPAM folder. Consider an authenticated SMTP relay service already in your on-premises servers. In Azure, [SendGrid](https://sendgrid.com/partners/azure/) is one such SMTP relay service, but there are others.
+
+Use the following sample script to create a Database Mail account and profile, then associate them together:
```sql
-- Create a Database Mail account
EXEC msdb.dbo.sp_add_operator
Confirm the email's success or failure via the [Database Mail Log](/sql/relational-databases/database-mail/database-mail-log-and-audits) in SSMS.
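You can also check recent outcomes directly with T-SQL; this quick sketch uses the `msdb.dbo.sysmail_allitems` view:

```sql
-- Inspect the most recent Database Mail items and their send status.
SELECT TOP (10) mailitem_id, recipients, subject, sent_status, sent_date
FROM msdb.dbo.sysmail_allitems
ORDER BY mailitem_id DESC;
```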
-You can then [modify any SQL Agent job](/sql/relational-databases/system-stored-procedures/sp-update-job-transact-sql) and assign operators that will be notified via email if the job completes, fails, or succeeds using SSMS or the following Transact-SQL script:
+You can then [modify any SQL Agent job](/sql/relational-databases/system-stored-procedures/sp-update-job-transact-sql) and assign operators that will be notified via email if the job completes, fails, or succeeds using SSMS or the following T-SQL script:
```sql
EXEC msdb.dbo.sp_update_job
    @job_name=N'Load data using SSIS',
For more information, see [View SQL Agent job history](/sql/ssms/agent/view-the-
### SQL Agent fixed database role membership
-If users linked to non-sysadmin logins are added to any of the three SQL Agent fixed database roles in the msdb system database, there exists an issue in which explicit EXECUTE permissions need to be granted to three system stored procedures in the master database. If this issue is encountered, the error message "The EXECUTE permission was denied on the object <object_name> (Microsoft SQL Server, Error: 229)" will be shown.
+If users linked to non-sysadmin logins are added to any of the three SQL Agent fixed database roles in the `msdb` system database, there exists an issue in which explicit EXECUTE permissions need to be granted to three system stored procedures in the master database. If this issue is encountered, the error message "The EXECUTE permission was denied on the object <object_name> (Microsoft SQL Server, Error: 229)" will be shown.
-Once you add users to a SQL Agent fixed database role (SQLAgentUserRole, SQLAgentReaderRole, or SQLAgentOperatorRole) in msdb, for each of the user's logins added to these roles, execute the below T-SQL script to explicitly grant EXECUTE permissions to the system stored procedures listed. This example assumes that the user name and login name are the same:
+Once you add users to a SQL Agent fixed database role (SQLAgentUserRole, SQLAgentReaderRole, or SQLAgentOperatorRole) in `msdb`, for each of the user's logins added to these roles, execute the below T-SQL script to explicitly grant EXECUTE permissions to the system stored procedures listed. This example assumes that the user name and login name are the same:
```sql
USE [master]
GRANT EXECUTE ON master.dbo.xp_sqlagent_notify TO [login_name];
- [What's new in Azure SQL Managed Instance?](doc-changes-updates-release-notes-whats-new.md) - [Azure SQL Managed Instance T-SQL differences from SQL Server](../../azure-sql/managed-instance/transact-sql-tsql-differences-sql-server.md#sql-server-agent) - [Features comparison: Azure SQL Database and Azure SQL Managed Instance](../../azure-sql/database/features-comparison.md)+
+## Next steps
+
+- [Configure Database Mail](/sql/relational-databases/database-mail/configure-database-mail)
+- [Troubleshoot outbound SMTP connectivity problems in Azure](/azure/virtual-network/troubleshoot-outbound-smtp-connectivity)
azure-sql Transact Sql Tsql Differences Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/transact-sql-tsql-differences-sql-server.md
Previously updated : 10/21/2021 Last updated : 04/19/2022
Temporary known issues that are discovered in SQL Managed Instance and will be r
### Backup
-SQL Managed Instance has automatic backups, so users can create full database `COPY_ONLY` backups. Differential, log, and file snapshot backups aren't supported.
+Azure SQL Managed Instance has automatic backups, so users can create full database `COPY_ONLY` backups. Differential, log, and file snapshot backups aren't supported.
- With a SQL Managed Instance, you can back up an instance database only to an Azure Blob storage account:
  - Only `BACKUP TO URL` is supported.
Limitations:
- With a SQL Managed Instance, you can back up an instance database to a backup with up to 32 stripes, which is enough for databases up to 4 TB if backup compression is used.
- You can't execute `BACKUP DATABASE ... WITH COPY_ONLY` on a database that's encrypted with service-managed Transparent Data Encryption (TDE). Service-managed TDE forces backups to be encrypted with an internal TDE key. The key can't be exported, so you can't restore the backup. Use automatic backups and point-in-time restore, or use [customer-managed (BYOK) TDE](../database/transparent-data-encryption-tde-overview.md#customer-managed-transparent-data-encryptionbring-your-own-key) instead. You also can disable encryption on the database.
-- Native backups taken on a Managed Instance cannot be restored to a SQL Server. This is because Managed Instance has higher internal database version compared to any version of SQL Server.
+- Native backups taken on a SQL Managed Instance cannot be restored to a SQL Server. This is because SQL Managed Instance has higher internal database version compared to any version of SQL Server.
- To back up or restore a database to/from Azure storage, it is necessary to create a shared access signature (SAS), a URI that grants restricted access rights to Azure Storage resources. [Learn more on this](restore-sample-database-quickstart.md#restore-from-a-backup-file-using-t-sql). Using access keys for these scenarios is not supported.
- The maximum backup stripe size by using the `BACKUP` command in SQL Managed Instance is 195 GB, which is the maximum blob size. Increase the number of stripes in the backup command to reduce individual stripe size and stay within this limit.
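As an illustrative sketch of the `BACKUP TO URL` form described above (the database name and storage URLs are placeholders; a SAS-based credential for the container is assumed to exist, and the database must not use service-managed TDE):

```sql
-- COPY_ONLY backup of a managed instance database to Azure Blob Storage,
-- striped across two blobs to stay under the 195-GB-per-blob limit.
BACKUP DATABASE [MyDatabase]
TO URL = 'https://<storageaccount>.blob.core.windows.net/<container>/MyDatabase_1.bak',
   URL = 'https://<storageaccount>.blob.core.windows.net/<container>/MyDatabase_2.bak'
WITH COPY_ONLY, COMPRESSION;
```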
SQL Managed Instance can't access files, so cryptographic providers can't be cre
- Setting an Azure AD login mapped to an Azure AD group as the database owner isn't supported. A member of the Azure AD group can be a database owner, even if the login hasn't been created in the database. - Impersonation of Azure AD server-level principals by using other Azure AD principals is supported, such as the [EXECUTE AS](/sql/t-sql/statements/execute-as-transact-sql) clause. EXECUTE AS limitations are:
- - EXECUTE AS USER isn't supported for Azure AD users when the name differs from the login name. An example is when the user is created through the syntax CREATE USER [myAadUser] FROM LOGIN [john@contoso.com] and impersonation is attempted through EXEC AS USER = _myAadUser_. When you create a **USER** from an Azure AD server principal (login), specify the user_name as the same login_name from **LOGIN**.
+ - EXECUTE AS USER isn't supported for Azure AD users when the name differs from the login name. An example is when the user is created through the syntax `CREATE USER [myAadUser] FROM LOGIN [john@contoso.com]` and impersonation is attempted through `EXEC AS USER = myAadUser`. When you create a **USER** from an Azure AD server principal (login), specify the user_name as the same login_name from **LOGIN**.
- Only the SQL Server-level principals (logins) that are part of the `sysadmin` role can execute the following operations that target Azure AD principals: - EXECUTE AS USER
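The name-matching rule for EXECUTE AS USER described above can be sketched as follows; the Azure AD principal name is hypothetical:

```sql
-- Create the database user with the same user_name as the login_name of the
-- Azure AD server principal, so EXECUTE AS USER resolves correctly.
CREATE USER [john@contoso.com] FROM LOGIN [john@contoso.com];

EXECUTE AS USER = 'john@contoso.com';
SELECT USER_NAME();   -- statements here run in the impersonated context
REVERT;
```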
SQL Managed Instance can't access files, so cryptographic providers can't be cre
- Export a database from SQL Managed Instance and import to SQL Database within the same Azure AD domain. - Export a database from SQL Database and import to SQL Managed Instance within the same Azure AD domain. - Export a database from SQL Managed Instance and import to SQL Server (version 2012 or later).
- - In this configuration, all Azure AD users are created as SQL Server database principals (users) without logins. The type of users is listed as `SQL` and is visible as `SQL_USER` in sys.database_principals). Their permissions and roles remain in the SQL Server database metadata and can be used for impersonation. However, they cannot be used to access and log in to the SQL Server using their credentials.
+ - In this configuration, all Azure AD users are created as SQL Server database principals (users) without logins. The type of users is listed as `SQL` and is visible as `SQL_USER` in `sys.database_principals`. Their permissions and roles remain in the SQL Server database metadata and can be used for impersonation. However, they cannot be used to access and sign in to the SQL Server using their credentials.
- Only the server-level principal login, which is created by the SQL Managed Instance provisioning process, members of the server roles, such as `securityadmin` or `sysadmin`, or other logins with ALTER ANY LOGIN permission at the server level can create Azure AD server principals (logins) in the master database for SQL Managed Instance. - If the login is a SQL principal, only logins that are part of the `sysadmin` role can use the create command to create logins for an Azure AD account. - The Azure AD login must be a member of an Azure AD within the same directory that's used for Azure SQL Managed Instance. - Azure AD server principals (logins) are visible in Object Explorer starting with SQL Server Management Studio 18.0 preview 5.-- A server principal with *sysadmin* access level is automatically created for the Azure AD admin account once itΓÇÖs enabled on an instance.
+- A server principal with *sysadmin* access level is automatically created for the Azure AD admin account once it's enabled on an instance.
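As a sketch, creating Azure AD server principals (logins) in the master database might look like the following; the principal names are hypothetical:

```sql
-- Run in the master database as a principal with ALTER ANY LOGIN permission
-- (for example, a member of securityadmin or sysadmin).
CREATE LOGIN [john@contoso.com] FROM EXTERNAL PROVIDER;   -- Azure AD user
CREATE LOGIN [DbAdminGroup]    FROM EXTERNAL PROVIDER;    -- Azure AD group
```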
- During authentication, the following sequence is applied to resolve the authenticating principal:
- 1. If the Azure AD account exists as directly mapped to the Azure AD server principal (login), which is present in sys.server_principals as type "E," grant access and apply permissions of the Azure AD server principal (login).
- 1. If the Azure AD account is a member of an Azure AD group that's mapped to the Azure AD server principal (login), which is present in sys.server_principals as type "X," grant access and apply permissions of the Azure AD group login.
- 1. If the Azure AD account exists as directly mapped to an Azure AD user in a database, which is present in sys.database_principals as type "E," grant access and apply permissions of the Azure AD database user.
- 1. If the Azure AD account is a member of an Azure AD group that's mapped to an Azure AD user in a database, which is present in sys.database_principals as type "X," grant access and apply permissions of the Azure AD group user.
+ 1. If the Azure AD account exists as directly mapped to the Azure AD server principal (login), which is present in `sys.server_principals` as type "E," grant access and apply permissions of the Azure AD server principal (login).
+ 1. If the Azure AD account is a member of an Azure AD group that's mapped to the Azure AD server principal (login), which is present in `sys.server_principals` as type "X," grant access and apply permissions of the Azure AD group login.
+ 1. If the Azure AD account exists as directly mapped to an Azure AD user in a database, which is present in `sys.database_principals` as type "E," grant access and apply permissions of the Azure AD database user.
+ 1. If the Azure AD account is a member of an Azure AD group that's mapped to an Azure AD user in a database, which is present in `sys.database_principals` as type "X," grant access and apply permissions of the Azure AD group user.
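The principal types referenced in the resolution sequence above can be inspected with catalog queries, for example:

```sql
-- Type 'E' = directly mapped Azure AD principal, type 'X' = Azure AD group.
SELECT name, type, type_desc
FROM sys.server_principals
WHERE type IN ('E', 'X');

SELECT name, type, type_desc
FROM sys.database_principals
WHERE type IN ('E', 'X');
```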
### Service key and service master key
For more information, see [ALTER DATABASE](/sql/t-sql/statements/alter-database-
- Alerts aren't yet supported. - Proxies aren't supported. - EventLog isn't supported.-- User must be directly mapped to Azure AD server principal (login) to create, modify, or execute SQL Agent jobs. Users that are not directly mapped, for example, users that belong to an Azure AD group that has the rights to create, modify or execute SQL Agent jobs, will not effectively be able to perform those actions. This is due to Managed Instance impersonation and [EXECUTE AS limitations](#logins-and-users).
+- A user must be directly mapped to an Azure AD server principal (login) to create, modify, or execute SQL Agent jobs. Users that are not directly mapped (for example, users that belong to an Azure AD group that has the rights to create, modify, or execute SQL Agent jobs) will not be able to perform those actions. This is due to SQL Managed Instance impersonation and [EXECUTE AS limitations](#logins-and-users).
- The Multi Server Administration feature for master/target (MSX/TSX) jobs is not supported. For information about SQL Server Agent, see [SQL Server Agent](/sql/ssms/agent/sql-server-agent).
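As a sketch of the direct-mapping requirement for Agent job ownership described above (the job name and Azure AD login are placeholders):

```sql
-- The owner must be a login that is directly mapped to an Azure AD server
-- principal; membership in an Azure AD group alone is not enough.
EXEC msdb.dbo.sp_add_job
    @job_name = N'NightlyEtl',
    @owner_login_name = N'john@contoso.com';
```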
Partial support for [distributed transactions](../database/elastic-transactions-
* all transaction participants are Azure SQL Managed Instances that are part of the [Server trust group](./server-trust-group-overview.md). * transactions are initiated either from .NET (TransactionScope class) or Transact-SQL.
-Azure SQL Managed Instance currently does not support other scenarios which are regularly supported by MSDTC on-premises or in Azure Virtual Machines.
+Azure SQL Managed Instance currently does not support other scenarios that are regularly supported by MSDTC on-premises or in Azure Virtual Machines.
### Extended Events
For more information, see [FILESTREAM](/sql/relational-databases/blob/filestream
[Linked servers](/sql/relational-databases/linked-servers/linked-servers-database-engine) in SQL Managed Instance support a limited number of targets: - Supported targets are SQL Managed Instance, SQL Database, Azure Synapse SQL [serverless](https://devblogs.microsoft.com/azure-sql/linked-server-to-synapse-sql-to-implement-polybase-like-scenarios-in-managed-instance/) and dedicated pools, and SQL Server instances. -- Distributed writable transactions are possible only among Managed Instances. For more information, see [Distributed Transactions](../database/elastic-transactions-overview.md). However, MS DTC is not supported.
+- Distributed writable transactions are possible only among SQL Managed Instances. For more information, see [Distributed Transactions](../database/elastic-transactions-overview.md). However, MS DTC is not supported.
- Targets that aren't supported are files, Analysis Services, and other RDBMS. Try to use native CSV import from Azure Blob Storage using `BULK INSERT` or `OPENROWSET` as an alternative for file import, or load files using a [serverless SQL pool in Azure Synapse Analytics](https://devblogs.microsoft.com/azure-sql/linked-server-to-synapse-sql-to-implement-polybase-like-scenarios-in-managed-instance/). Operations: -- [Cross-instance](../database/elastic-transactions-overview.md) write transactions are supported only for Managed Instances.
+- [Cross-instance](../database/elastic-transactions-overview.md) write transactions are supported only for SQL Managed Instances.
- `sp_dropserver` is supported for dropping a linked server. See [sp_dropserver](/sql/relational-databases/system-stored-procedures/sp-dropserver-transact-sql). - The `OPENROWSET` function can be used to execute queries only on SQL Server instances. They can be either managed, on-premises, or in virtual machines. See [OPENROWSET](/sql/t-sql/functions/openrowset-transact-sql). - The `OPENDATASOURCE` function can be used to execute queries only on SQL Server instances. They can be either managed, on-premises, or in virtual machines. Only the `SQLNCLI`, `SQLNCLI11`, and `SQLOLEDB` values are supported as a provider. An example is `SELECT * FROM OPENDATASOURCE('SQLNCLI', '...').AdventureWorks2012.HumanResources.Employee`. See [OPENDATASOURCE](/sql/t-sql/functions/opendatasource-transact-sql). - Linked servers cannot be used to read files (Excel, CSV) from network shares. Try to use [BULK INSERT](/sql/t-sql/statements/bulk-insert-transact-sql#e-importing-data-from-a-csv-file) or [OPENROWSET](/sql/t-sql/functions/openrowset-transact-sql#g-accessing-data-from-a-csv-file-with-a-format-file) to read CSV files from Azure Blob Storage, or a [linked server that references a serverless SQL pool in Synapse Analytics](https://devblogs.microsoft.com/azure-sql/linked-server-to-synapse-sql-to-implement-polybase-like-scenarios-in-managed-instance/). Track this request on the [SQL Managed Instance feedback item](https://feedback.azure.com/d365community/idea/db80cf6e-3425-ec11-b6e6-000d3a4f0f84).
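As an alternative to reading files through a linked server, a CSV import from Azure Blob Storage might be sketched as follows; the storage account, credential, password, and table names are assumptions:

```sql
-- A database master key is required before creating a scoped credential
-- (placeholder password shown).
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongP@ssw0rd!>';

-- Database-scoped credential and external data source over Blob Storage
-- (account, container, and SAS token are placeholders).
CREATE DATABASE SCOPED CREDENTIAL BlobSasCredential
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'sv=2021-08-06&ss=b&sig=...';

CREATE EXTERNAL DATA SOURCE MyBlobStorage
WITH (TYPE = BLOB_STORAGE,
      LOCATION = 'https://myaccount.blob.core.windows.net/data',
      CREDENTIAL = BlobSasCredential);

-- Import the CSV file instead of reading it through a linked server.
BULK INSERT dbo.Orders
FROM 'orders.csv'
WITH (DATA_SOURCE = 'MyBlobStorage', FORMAT = 'CSV', FIRSTROW = 2);
```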
-Linked servers on Azure SQL Managed Instance support SQL authentication and [AAD authentication](/sql/relational-databases/linked-servers/create-linked-servers-sql-server-database-engine#linked-servers-with-azure-sql-managed-instance).
+Linked servers on Azure SQL Managed Instance support SQL authentication and [Azure AD authentication](/sql/relational-databases/linked-servers/create-linked-servers-sql-server-database-engine#linked-servers-with-azure-sql-managed-instance).
### PolyBase
Service broker is enabled by default and cannot be disabled. The following ALTER
- `Ole Automation Procedures` - `sp_execute_external_scripts` isn't supported. See [sp_execute_external_scripts](/sql/relational-databases/system-stored-procedures/sp-execute-external-script-transact-sql#examples). - `xp_cmdshell` isn't supported. See [xp_cmdshell](/sql/relational-databases/system-stored-procedures/xp-cmdshell-transact-sql).-- `Extended stored procedures` aren't supported, and this includes `sp_addextendedproc` and `sp_dropextendedproc`. This functionality won't be supported because it's on a deprecation path for SQL Server. For more details, see [Extended Stored Procedures](/sql/relational-databases/extended-stored-procedures-programming/database-engine-extended-stored-procedures-programming).
+- `Extended stored procedures` aren't supported, and this includes `sp_addextendedproc` and `sp_dropextendedproc`. This functionality won't be supported because it's on a deprecation path for SQL Server. For more information, see [Extended Stored Procedures](/sql/relational-databases/extended-stored-procedures-programming/database-engine-extended-stored-procedures-programming).
- `sp_attach_db`, `sp_attach_single_file_db`, and `sp_detach_db` aren't supported. See [sp_attach_db](/sql/relational-databases/system-stored-procedures/sp-attach-db-transact-sql), [sp_attach_single_file_db](/sql/relational-databases/system-stored-procedures/sp-attach-single-file-db-transact-sql), and [sp_detach_db](/sql/relational-databases/system-stored-procedures/sp-detach-db-transact-sql).

### System functions and variables
The following variables, functions, and views return different results:
- `SERVERPROPERTY('EngineEdition')` returns the value 8. This property uniquely identifies a SQL Managed Instance. See [SERVERPROPERTY](/sql/t-sql/functions/serverproperty-transact-sql). - `SERVERPROPERTY('InstanceName')` returns NULL because the concept of instance as it exists for SQL Server doesn't apply to SQL Managed Instance. See [SERVERPROPERTY('InstanceName')](/sql/t-sql/functions/serverproperty-transact-sql).-- `@@SERVERNAME` returns a full DNS "connectable" name, for example, my-managed-instance.wcus17662feb9ce98.database.windows.net. See [@@SERVERNAME](/sql/t-sql/functions/servername-transact-sql).
+- `@@SERVERNAME` returns a full DNS "connectable" name, for example, `my-managed-instance.wcus17662feb9ce98.database.windows.net`. See [@@SERVERNAME](/sql/t-sql/functions/servername-transact-sql).
- `SYS.SERVERS` returns a full DNS "connectable" name, such as `myinstance.domain.database.windows.net` for the properties "name" and "data_source." See [SYS.SERVERS](/sql/relational-databases/system-catalog-views/sys-servers-transact-sql). - `@@SERVICENAME` returns NULL because the concept of service as it exists for SQL Server doesn't apply to SQL Managed Instance. See [@@SERVICENAME](/sql/t-sql/functions/servicename-transact-sql).-- `SUSER_ID` is supported. It returns NULL if the Azure AD login isn't in sys.syslogins. See [SUSER_ID](/sql/t-sql/functions/suser-id-transact-sql).
+- `SUSER_ID` is supported. It returns NULL if the Azure AD login isn't in `sys.syslogins`. See [SUSER_ID](/sql/t-sql/functions/suser-id-transact-sql).
- `SUSER_SID` isn't supported. The wrong data is returned, which is a temporary known issue. See [SUSER_SID](/sql/t-sql/functions/suser-sid-transact-sql).

## <a name="Environment"></a>Environment constraints
The following variables, functions, and views return different results:
### VNET

- A VNet can be deployed by using the Resource Manager deployment model.
- The classic deployment model for VNets is not supported.
- After a SQL Managed Instance is created, moving the SQL Managed Instance or VNet to another resource group or subscription is not supported.
-- For SQL Managed Instances hosted in virtual clusters that are created before 9/22/2020 [global peering](../../virtual-network/virtual-networks-faq.md#what-are-the-constraints-related-to-global-vnet-peering-and-load-balancers) is not supported. You can connect to these resources via ExpressRoute or VNet-to-VNet through VNet Gateways.
+- For SQL Managed Instances hosted in virtual clusters that are created before September 22, 2020, [global peering](../../virtual-network/virtual-networks-faq.md#what-are-the-constraints-related-to-global-vnet-peering-and-load-balancers) is not supported. You can connect to these resources via ExpressRoute or VNet-to-VNet through VNet Gateways.
### Failover groups

System databases are not replicated to the secondary instance in a failover group. Therefore, scenarios that depend on objects from the system databases will be impossible on the secondary instance unless the objects are manually created on the secondary.

### TEMPDB

-- The maximum file size of `tempdb` can't be greater than 24 GB per core on a General Purpose tier. The maximum `tempdb` size on a Business Critical tier is limited by the SQL Managed Instance storage size. `Tempdb` log file size is limited to 120 GB on General Purpose tier. Some queries might return an error if they need more than 24 GB per core in `tempdb` or if they produce more than 120 GB of log data.
+- The maximum file size of the `tempdb` system database can't be greater than 24 GB per core on the General Purpose tier. The maximum `tempdb` size on the Business Critical tier is limited by the SQL Managed Instance storage size. The `tempdb` log file size is limited to 120 GB on the General Purpose tier. Some queries might return an error if they need more than 24 GB per core in `tempdb` or if they produce more than 120 GB of log data.
- `Tempdb` is always split into 12 data files: 1 primary, also called master, data file and 11 non-primary data files. The file structure cannot be changed and new files cannot be added to `tempdb`.
- [Memory-optimized `tempdb` metadata](/sql/relational-databases/databases/tempdb-database?view=sql-server-ver15&preserve-view=true#memory-optimized-tempdb-metadata), a new SQL Server 2019 in-memory database feature, is not supported.
- Objects created in the model database cannot be auto-created in `tempdb` after a restart or a failover because `tempdb` does not get its initial object list from the model database. You must create objects in `tempdb` manually after each restart or a failover.

### MSDB
-The following MSDB schemas in SQL Managed Instance must be owned by their respective predefined roles:
+The following schemas in the `msdb` system database in SQL Managed Instance must be owned by their respective predefined roles:
- General roles - TargetServersRole
azure-sql Sql Server To Managed Instance Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-guide.md
For more migration information, see the [migration overview](sql-server-to-manag
To migrate your SQL Server to Azure SQL Managed Instance, make sure you have: - Chosen a [migration method](sql-server-to-managed-instance-overview.md#compare-migration-options) and the corresponding tools for your method.-- Install the [Azure SQL Migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension).
+- Installed the [Azure SQL migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension).
- Installed the [Data Migration Assistant (DMA)](https://www.microsoft.com/download/details.aspx?id=53595) on a machine that can connect to your source SQL Server. - Created a target [Azure SQL Managed Instance](../../managed-instance/instance-create-quickstart.md) - Configured connectivity and proper permissions to access both source and target.
Proceed to the following steps to assess and migrate databases to Azure SQL Mana
Determine whether SQL Managed Instance is compatible with the database requirements of your application. SQL Managed Instance is designed to provide easy lift and shift migration for most existing applications that use SQL Server. However, you may sometimes require features or capabilities that aren't yet supported and the cost of implementing a workaround is too high.
-The [Azure SQL Migration extension for Azure Data Studio](../../../dms/migration-using-azure-data-studio.md) provides a seamless wizard based experience to assess, get Azure recommendations and migrate your SQL Server databases on-premises to SQL Server on Azure Virtual Machines. Besides, highlighting any migration blockers or warnings, the extension also includes an option for Azure recommendations to collect your databases' performance data [to recommend a right-sized Azure SQL Managed Instance](../../../dms/ads-sku-recommend.md) to meet the performance needs of your workload (with the least price).
+The [Azure SQL migration extension for Azure Data Studio](../../../dms/migration-using-azure-data-studio.md) provides a seamless wizard-based experience to assess, get Azure recommendations for, and migrate your on-premises SQL Server databases to Azure SQL Managed Instance. Besides highlighting any migration blockers or warnings, the extension also includes an option for Azure recommendations to collect your databases' performance data [to recommend a right-sized Azure SQL Managed Instance](../../../dms/ads-sku-recommend.md) that meets the performance needs of your workload at the lowest cost.
You can also use the Data Migration Assistant (version 4.1 and later) to assess databases to get:
Data Migration Assistant also supports consolidation of the assessment reports f
### Deploy to an optimally sized managed instance
-You can use the [Azure SQL Migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension) to get right-sized Azure SQL Managed Instance recommendation. The extension collects performance data from your source SQL Server instance to provide right-sized Azure recommendation that meets your workload's performance needs with minimal cost. To learn more, see [Get right-sized Azure recommendation for your on-premises SQL Server database(s)](../../../dms/ads-sku-recommend.md)
+You can use the [Azure SQL migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension) to get a right-sized Azure SQL Managed Instance recommendation. The extension collects performance data from your source SQL Server instance to provide a right-sized Azure recommendation that meets your workload's performance needs with minimal cost. To learn more, see [Get right-sized Azure recommendation for your on-premises SQL Server database(s)](../../../dms/ads-sku-recommend.md).
Based on the information in the discover and assess phase, create an appropriately sized target SQL Managed Instance. You can do so by using the [Azure portal](../../managed-instance/instance-create-quickstart.md), [PowerShell](../../managed-instance/scripts/create-configure-managed-instance-powershell.md), or an [Azure Resource Manager (ARM) Template](../../managed-instance/create-template-quickstart.md).
SQL Managed Instance is a managed service that allows you to delegate some of th
This article covers two of the recommended migration options: -- Azure SQL Migration extension for Azure Data Studio - migration with near-zero downtime.
+- Azure SQL migration extension for Azure Data Studio - migration with near-zero downtime.
- Native `RESTORE DATABASE FROM URL` - uses native backups from SQL Server and requires some downtime. This guide describes the two most popular options - Azure Database Migration Service (DMS) and native backup and restore. For other migration tools, see [Compare migration options](sql-server-to-managed-instance-overview.md#compare-migration-options).
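As a sketch, the native restore option mentioned above might look like the following on the target managed instance, assuming a SAS credential for the backup container already exists; the database and storage names are placeholders:

```sql
-- Restore a native .bak file from Azure Blob Storage. On SQL Managed
-- Instance, file paths are managed automatically, so no WITH MOVE is used.
RESTORE DATABASE [MyDatabase]
FROM URL = 'https://myaccount.blob.core.windows.net/backups/mydb.bak';
```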
-### Migrate using the Azure SQL Migration extension for Azure Data Studio (minimal downtime)
+### Migrate using the Azure SQL migration extension for Azure Data Studio (minimal downtime)
To perform a minimal downtime migration using Azure Data Studio, follow the high-level steps below. For a detailed step-by-step tutorial, see [Migrate SQL Server to an Azure SQL Managed Instance online using Azure Data Studio](../../../dms/tutorial-sql-server-managed-instance-online-ads.md):
-1. Download and install [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio) and the [Azure SQL Migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension).
+1. Download and install [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio) and the [Azure SQL migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension).
1. Launch the Migrate to Azure SQL wizard in the extension in Azure Data Studio. 1. Select databases for assessment and view migration readiness or issues (if any). Additionally, collect performance data and get a right-sized Azure recommendation. 1. Select your Azure account and your target Azure SQL Managed Instance from your subscription.
azure-sql Sql Server To Managed Instance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-overview.md
One of the key benefits of migrating your SQL Server databases to SQL Managed In
## Choose an appropriate target
-You can use the [Azure SQL Migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension) to get right-sized Azure SQL Managed Instance recommendation. The extension collects performance data from your source SQL Server instance to provide right-sized Azure recommendation that meets your workload's performance needs with minimal cost. To learn more, see [Get right-sized Azure recommendation for your on-premises SQL Server database(s)](../../../dms/ads-sku-recommend.md)
+You can use the [Azure SQL migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension) to get a right-sized Azure SQL Managed Instance recommendation. The extension collects performance data from your source SQL Server instance to provide a right-sized Azure recommendation that meets your workload's performance needs with minimal cost. To learn more, see [Get right-sized Azure recommendation for your on-premises SQL Server database(s)](../../../dms/ads-sku-recommend.md).
The following general guidelines can help you choose the right service tier and characteristics of SQL Managed Instance to help match your [performance baseline](sql-server-to-managed-instance-performance-baseline.md):
We recommend the following migration tools:
|Technology | Description| |||
-|[Azure SQL Migration extension for Azure Data Studio](../../../dms/migration-using-azure-data-studio.md) | The Azure SQL Migration extension for Azure Data Studio provides both the SQL Server assessment and migration capabilities in Azure Data Studio. It supports migrations in either online (for migrations that require minimal downtime) or offline (for migrations where downtime persists through the duration of the migration) modes. |
+| [Azure SQL migration extension for Azure Data Studio](../../../dms/migration-using-azure-data-studio.md) | The Azure SQL migration extension for Azure Data Studio provides both the SQL Server assessment and migration capabilities in Azure Data Studio. It supports migrations in either online (for migrations that require minimal downtime) or offline (for migrations where downtime persists through the duration of the migration) modes. |
| [Azure Migrate](../../../migrate/how-to-create-azure-sql-assessment.md) | This Azure service helps you discover and assess your SQL data estate at scale on VMware. It provides Azure SQL deployment recommendations, target sizing, and monthly estimates. |
-|[Azure Database Migration Service](../../../dms/tutorial-sql-server-to-managed-instance.md) | This Azure service supports migration in the offline mode for applications that can afford downtime during the migration process. Unlike the continuous migration in online mode, offline mode migration runs a one-time restore of a full database backup from the source to the target. |
+|[Azure Database Migration Service](../../../dms/tutorial-sql-server-to-managed-instance.md) | This Azure service supports migration in the offline mode for applications that can afford downtime during the migration process. Unlike the continuous migration in online mode, offline mode migration runs a one-time restore of a full database backup from the source to the target. |
|[Native backup and restore](../../managed-instance/restore-sample-database-quickstart.md) | SQL Managed Instance supports restore of native SQL Server database backups (.bak files). It's the easiest migration option for customers who can provide full database backups to Azure Storage.| |[Log Replay Service](../../managed-instance/log-replay-service-migrate.md) | This cloud service is enabled for SQL Managed Instance based on SQL Server log-shipping technology. It's a migration option for customers who can provide full, differential, and log database backups to Azure Storage. Log Replay Service is used to restore backup files from Azure Blob Storage to SQL Managed Instance.| |[Managed Instance link](../../managed-instance/managed-instance-link-feature-overview.md) | This feature enables online migration to Managed Instance using Always On technology. It's a migration option for customers who require the database on Managed Instance to be accessible in R/O mode while migration is in progress, who need to keep the migration running for prolonged periods of time (weeks or months at a time), who require true online replication to the Business Critical service tier, and for customers who require the most performant minimum downtime migration. |
The following table compares the recommended migration options:
|Migration option |When to use |Considerations | ||||
-|[Azure SQL Migration extension for Azure Data Studio](../../../dms/migration-using-azure-data-studio.md) | - Migrate single databases or multiple databases at scale.<br/>- Can run in both online (minimal downtime) and offline (acceptable downtime) modes.<br/><br/>Supported sources:<br/>- SQL Server (2005 to 2019) on-premises or Azure VM<br/>- AWS EC2<br/>- AWS RDS<br/>- GCP Compute SQL Server VM | - Easy to setup and get started.<br/>- Requires setup of self-hosted integration runtime to access on-premises SQL Server and backups.<br/>- Includes both assessment and migration capabilities. |
+|[Azure SQL migration extension for Azure Data Studio](../../../dms/migration-using-azure-data-studio.md) | - Migrate single databases or multiple databases at scale.<br/>- Can run in both online (minimal downtime) and offline (acceptable downtime) modes.<br/><br/>Supported sources:<br/>- SQL Server (2005 to 2019) on-premises or Azure VM<br/>- AWS EC2<br/>- AWS RDS<br/>- GCP Compute SQL Server VM | - Easy to set up and get started.<br/>- Requires setup of a self-hosted integration runtime to access on-premises SQL Server and backups.<br/>- Includes both assessment and migration capabilities. |
|[Azure Database Migration Service](../../../dms/tutorial-sql-server-to-managed-instance.md) | - Migrate single databases or multiple databases at scale.<br/>- Can accommodate downtime during the migration process.<br/><br/>Supported sources:<br/>- SQL Server (2005 to 2019) on-premises or Azure VM<br/>- AWS EC2<br/>- AWS RDS<br/>- GCP Compute SQL Server VM | - Migrations at scale can be automated via [PowerShell](../../../dms/howto-sql-server-to-azure-sql-managed-instance-powershell-offline.md).<br/>- Time to complete migration depends on database size and is affected by backup and restore time.<br/>- Sufficient downtime might be required. | |[Native backup and restore](../../managed-instance/restore-sample-database-quickstart.md) | - Migrate individual line-of-business application databases.<br/>- Quick and easy migration without a separate migration service or tool.<br/><br/>Supported sources:<br/>- SQL Server (2005 to 2019) on-premises or Azure VM<br/>- AWS EC2<br/>- AWS RDS<br/>- GCP Compute SQL Server VM | - Database backup uses multiple threads to optimize data transfer to Azure Blob Storage, but partner bandwidth and database size can affect transfer rate.<br/>- Downtime should accommodate the time required to perform a full backup and restore (which is a size of data operation).| |[Log Replay Service](../../managed-instance/log-replay-service-migrate.md) | - Migrate individual line-of-business application databases.<br/>- More control is needed for database migrations.<br/><br/>Supported sources:<br/>- SQL Server (2008 to 2019) on-premises or Azure VM<br/>- AWS EC2<br/>- AWS RDS<br/>- GCP Compute SQL Server VM | - The migration entails making full database backups on SQL Server and copying backup files to Azure Blob Storage. 
Log Replay Service is used to restore backup files from Azure Blob Storage to SQL Managed Instance.<br/>- Databases being restored during the migration process will be in a restoring mode and can't be used for read or write workloads until the process is complete.|
azure-sql Sql Server To Sql On Azure Vm Individual Databases Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-individual-databases-guide.md
For information about extra migration strategies, see the [SQL Server VM migrati
Migrating to SQL Server on Azure Virtual Machines requires the following resources: -- [Azure SQL Migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension).
+- [Azure SQL migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension).
- An [Azure Migrate project](../../../migrate/create-manage-projects.md) (only required for SQL Server discovery in your data estate). - A prepared target [SQL Server on Azure Virtual Machines](../../virtual-machines/windows/create-sql-vm-portal.md) instance that's the same or greater version than the SQL Server source. - [Connectivity between Azure and on-premises](/azure/architecture/reference-architectures/hybrid-networking).
For more discovery tools, see the [services and tools](../../../dms/dms-tools-ma
When migrating from SQL Server on-premises to SQL Server on Azure Virtual Machines, it is unlikely that you'll have any compatibility or feature parity issues if the source and target SQL Server versions are the same. If you're *not* upgrading the version of SQL Server, skip this step and move to the [Migrate](#migrate) section.
-Before migration, it's still a good practice to run an assessment of your SQL Server databases to identify migration blockers (if any) and the [Azure SQL Migration extension for Azure Data Studio](../../../dms/migration-using-azure-data-studio.md) does that before migration.
+Before migration, it's still a good practice to run an assessment of your SQL Server databases to identify migration blockers (if any) and the [Azure SQL migration extension for Azure Data Studio](../../../dms/migration-using-azure-data-studio.md) does that before migration.
[!INCLUDE [assess-estate-with-azure-migrate](../../../../includes/azure-migrate-to-assess-sql-data-estate.md)] #### Assess user databases
-The [Azure SQL Migration extension for Azure Data Studio](../../../dms/migration-using-azure-data-studio.md) provides a seamless wizard based experience to assess, get Azure recommendations and migrate your SQL Server databases on-premises to SQL Server on Azure Virtual Machines. Besides, highlighting any migration blockers or warnings, the extension also includes an option for Azure recommendations to collect your databases' performance data to recommend a right-sized SQL Server on Azure Virtual Machines to meet the performance needs of your workload (with the least price).
+The [Azure SQL migration extension for Azure Data Studio](../../../dms/migration-using-azure-data-studio.md) provides a seamless wizard-based experience to assess, get Azure recommendations, and migrate your on-premises SQL Server databases to SQL Server on Azure Virtual Machines. Besides highlighting any migration blockers or warnings, the extension also includes an option for Azure recommendations to collect your databases' performance data to recommend a right-sized SQL Server on Azure Virtual Machines that meets the performance needs of your workload at the lowest cost.
To learn more about Azure recommendations, see [Get right-sized Azure recommendation for your on-premises SQL Server database(s)](../../../dms/ads-sku-recommend.md). > [!IMPORTANT]
->To assess databases using the Azure SQL Migration extension, ensure that the logins used to connect the source SQL Server are members of the sysadmin server role or have CONTROL SERVER permission.
+>To assess databases using the Azure SQL migration extension, ensure that the logins used to connect the source SQL Server are members of the sysadmin server role or have CONTROL SERVER permission.
For a version upgrade, use [Data Migration Assistant](/sql/dma/dma-overview) to assess on-premises SQL Server instances if you are upgrading to an instance of SQL Server on Azure Virtual Machines with a higher version to understand the gaps between the source and target versions.
After you've completed the pre-migration steps, you're ready to migrate the user
The following sections provide steps for performing either a migration by using backup and restore or a minimal downtime migration by using backup and restore along with log shipping.
-### Migrate using the Azure SQL Migration extension for Azure Data Studio (minimal downtime)
+### Migrate using the Azure SQL migration extension for Azure Data Studio (minimal downtime)
To perform a minimal downtime migration using Azure Data Studio, follow the high-level steps below. For a detailed step-by-step tutorial, see [Migrate SQL Server to SQL Server on Azure Virtual Machine online using Azure Data Studio](../../../dms/tutorial-sql-server-to-virtual-machine-online-ads.md):
-1. Download and install [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio) and the [Azure SQL Migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension).
+1. Download and install [Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio) and the [Azure SQL migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension).
1. Launch the Migrate to Azure SQL wizard in the extension in Azure Data Studio. 1. Select databases for assessment and view migration readiness or issues (if any). Additionally, collect performance data and get right-sized Azure recommendation. 1. Select your Azure account and your target SQL Server on Azure Machine from your subscription.
azure-sql Sql Server To Sql On Azure Vm Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-migration-overview.md
Save on costs by bringing your own license with the [Azure Hybrid Benefit licens
Azure Virtual Machines run in many different regions of Azure and also offer a variety of [machine sizes](../../../virtual-machines/sizes.md) and [Storage options](../../../virtual-machines/disks-types.md). When determining the correct size of VM and Storage for your SQL Server workload, refer to the [Performance Guidelines for SQL Server on Azure Virtual Machines](../../virtual-machines/windows/performance-guidelines-best-practices-checklist.md#vm-size).
-You can use the [Azure SQL Migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension) to get right-sized SQL Server on Azure Virtual Machines recommendation. The extension collects performance data from your source SQL Server instance to provide right-sized Azure recommendation that meets your workload's performance needs with minimal cost. To learn more, see [Get right-sized Azure recommendation for your on-premises SQL Server database(s)](../../../dms/ads-sku-recommend.md)
+You can use the [Azure SQL migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension) to get a right-sized SQL Server on Azure Virtual Machines recommendation. The extension collects performance data from your source SQL Server instance to provide a right-sized Azure recommendation that meets your workload's performance needs with minimal cost. To learn more, see [Get right-sized Azure recommendation for your on-premises SQL Server database(s)](../../../dms/ads-sku-recommend.md).
To determine the VM size and storage requirements for all your workloads in your data estate, it is recommended that these are sized through a Performance-Based [Azure Migrate Assessment](../../../migrate/concepts-assessment-calculation.md#types-of-assessments). If this is not an available option, see the following article on creating your own [baseline for performance](https://azure.microsoft.com/services/virtual-machines/sql-server/).
The following table describes differences in the two migration strategies:
| **Migration strategy** | **Description** | **When to use** | | | | | | **Lift & shift** | Use the lift and shift migration strategy to move the entire physical or virtual SQL Server from its current location onto an instance of SQL Server on Azure VM without any changes to the operating system, or SQL Server version. To complete a lift and shift migration, see [Azure Migrate](../../../migrate/migrate-services-overview.md). <br /><br /> The source server remains online and services requests while the source and destination server synchronize data allowing for an almost seamless migration. | Use for single to very large-scale migrations, even applicable to scenarios such as data center exit. <br /><br /> Minimal to no code changes required to user SQL databases or applications, allowing for faster overall migrations. <br /><br />No additional steps required for migrating the Business Intelligence services such as [SSIS](/sql/integration-services/sql-server-integration-services), [SSRS](/sql/reporting-services/create-deploy-and-manage-mobile-and-paginated-reports), and [SSAS](/analysis-services/analysis-services-overview). |
-|**Migrate** | Use a migration strategy when you want to upgrade the target SQL Server and/or operating system version. <br /> <br /> Select an Azure VM from Azure Marketplace or a prepared SQL Server image that matches the source SQL Server version. <br/> <br/> Use the [Azure SQL Migration extension for Azure Data Studio](../../../dms/migration-using-azure-data-studio.md) to assess, get recommendations for right-sized Azure configuration (VM series, compute and storage) and migrate SQL Server database(s) to SQL Server on Azure virtual machines with minimal downtime. | Use when there is a requirement or desire to migrate to SQL Server on Azure Virtual Machines, or if there is a requirement to upgrade legacy SQL Server and/or OS versions that are no longer in support. <br /> <br /> May require some application or user database changes to support the SQL Server upgrade. <br /><br />There may be additional considerations for migrating [Business Intelligence](#business-intelligence) services if in the scope of migration. |
+|**Migrate** | Use a migration strategy when you want to upgrade the target SQL Server and/or operating system version. <br /> <br /> Select an Azure VM from Azure Marketplace or a prepared SQL Server image that matches the source SQL Server version. <br/> <br/> Use the [Azure SQL migration extension for Azure Data Studio](../../../dms/migration-using-azure-data-studio.md) to assess, get recommendations for right-sized Azure configuration (VM series, compute and storage) and migrate SQL Server database(s) to SQL Server on Azure virtual machines with minimal downtime. | Use when there is a requirement or desire to migrate to SQL Server on Azure Virtual Machines, or if there is a requirement to upgrade legacy SQL Server and/or OS versions that are no longer in support. <br /> <br /> May require some application or user database changes to support the SQL Server upgrade. <br /><br />There may be additional considerations for migrating [Business Intelligence](#business-intelligence) services if in the scope of migration. |
## Lift and shift
The following table details all available methods to migrate your SQL Server dat
|**Method** | **Minimum source version** | **Minimum target version** | **Source backup size constraint** | **Notes** | | | | | | |
-| **[Azure SQL Migration extension for Azure Data Studio](../../../dms/migration-using-azure-data-studio.md)** | SQL Server 2008 | SQL Server 2008 | [Azure VM storage limit](../../../index.yml) | This is an easy to use wizard based extension in Azure Data Studio for migrating SQL Server database(s) to SQL Server on Azure virtual machines. Use compression to minimize backup size for transfer. <br /><br /> The Azure SQL Migration extension for Azure Data Studio provides assessment, Azure recommendation and migration capabilities in a simple user interface and supports minimal downtime migrations. |
+| **[Azure SQL migration extension for Azure Data Studio](../../../dms/migration-using-azure-data-studio.md)** | SQL Server 2008 | SQL Server 2008 | [Azure VM storage limit](../../../index.yml) | This is an easy-to-use, wizard-based extension in Azure Data Studio for migrating SQL Server database(s) to SQL Server on Azure virtual machines. Use compression to minimize backup size for transfer. <br /><br /> The Azure SQL migration extension for Azure Data Studio provides assessment, Azure recommendation and migration capabilities in a simple user interface and supports minimal downtime migrations. |
| **[Distributed availability group](sql-server-distributed-availability-group-migrate-prerequisites.md)** | SQL Server 2016| SQL Server 2016 | [Azure VM storage limit](../../../index.yml) | A [distributed availability group](/sql/database-engine/availability-groups/windows/distributed-availability-groups) is a special type of availability group that spans two separate availability groups. The availability groups that participate in a distributed availability group do not need to be in the same location and include cross-domain support. <br /><br /> This method minimizes downtime, use when you have an availability group configured on-premises. <br /><br /> **Automation & scripting**: [T-SQL](/sql/t-sql/statements/alter-availability-group-transact-sql) | | **[Backup to a file](sql-server-to-sql-on-azure-vm-individual-databases-guide.md#migrate)** | SQL Server 2008 SP4 | SQL Server 2008 SP4| [Azure VM storage limit](../../../index.yml) | This is a simple and well-tested technique for moving databases across machines. Use compression to minimize backup size for transfer. <br /><br /> **Automation & scripting**: [Transact-SQL (T-SQL)](/sql/t-sql/statements/backup-transact-sql) and [AzCopy to Blob storage](../../../storage/common/storage-use-azcopy-v10.md) | | **[Backup to URL](/sql/relational-databases/backup-restore/sql-server-backup-to-url)** | SQL Server 2012 SP1 CU2 | SQL Server 2012 SP1 CU2| 12.8 TB for SQL Server 2016, otherwise 1 TB | An alternative way to move the backup file to the VM using Azure storage. Use compression to minimize backup size for transfer. <br /><br /> **Automation & scripting**: [T-SQL or maintenance plan](/sql/relational-databases/backup-restore/sql-server-backup-to-url) |
The following is a list of key points to consider when reviewing migration metho
- For optimum data transfer performance, migrate databases and files onto an instance of SQL Server on Azure VM using a compressed backup file. For larger databases, in addition to compression, [split the backup file into smaller files](/sql/relational-databases/backup-restore/back-up-files-and-filegroups-sql-server) for increased performance during backup and transfer. - If migrating from SQL Server 2014 or higher, consider [encrypting the backups](/sql/relational-databases/backup-restore/backup-encryption) to protect data during network transfer.-- To minimize downtime during database migration, use the Azure SQL Migration extension in Azure Data Studio or Always On availability group option.
+- To minimize downtime during database migration, use the Azure SQL migration extension in Azure Data Studio or the Always On availability group option.
- For limited to no network options, use offline migration methods such as backup and restore, or [disk transfer services](../../../storage/common/storage-solution-large-dataset-low-network.md) available in Azure. - To also change the version of SQL Server on a SQL Server on Azure VM, see [change SQL Server edition](../../virtual-machines/windows/change-sql-server-edition.md).
cdn Cdn Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-overview.md
For a complete list of features that each Azure CDN product supports, see [Compa
- To get started with CDN, see [Create an Azure CDN profile and endpoint](cdn-create-new-endpoint.md). - Manage your CDN endpoints through the [Microsoft Azure portal](https://portal.azure.com) or with [PowerShell](cdn-manage-powershell.md). - Learn how to automate Azure CDN with [.NET](cdn-app-dev-net.md) or [Node.js](cdn-app-dev-node.md).
+- [Learn module: Introduction to Azure Content Delivery Network (CDN)](/learn/modules/intro-to-azure-content-delivery-network).
cognitive-services How To Custom Voice Create Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice-create-voice.md
Data with these errors will not be used for training. Imported data with errors
| Audio | Low sampling rate| The sampling rate of the .wav files can't be lower than 16 KHz.| | Audio | Too long audio| Audio duration is longer than 30 seconds. Split the long audio into multiple files. It's a good idea to make utterances shorter than 15 seconds.| | Audio | No valid audio| No valid audio is found in this dataset. Check your audio data and upload again.|
+| Mismatch | Low scored utterance| Sentence-level pronunciation score is lower than 70. Review the script and the audio content to make sure they match.|
**Auto-fixed**
cognitive-services How To Develop Custom Commands Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-develop-custom-commands-application.md
Then, configure and host a web endpoint that returns a JSON file that lists the
"lights" : [ "bulb", "bulbs",
- "light"
+ "light",
"light bulb" ], "tv" : [
cognitive-services How To Use Custom Entity Pattern Matching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-use-custom-entity-pattern-matching.md
In this guide, you use the Speech SDK to develop a console application that deri
> - Add custom entities via the Speech SDK API > - Use asynchronous, event-driven continuous recognition
-## When should you use this?
+## When to use pattern matching
-Use this sample code if:
+Use this sample code if:
+* You're only interested in matching strictly what the user said. These patterns match more aggressively than LUIS.
+* You don't have access to a [LUIS](../LUIS/index.yml) app, but still want intents.
+* You can't or don't want to create a [LUIS](../LUIS/index.yml) app but you still want some voice-commanding capability.
-- You are only interested in matching very strictly what the user said. These patterns match more aggressively than LUIS.-- You do not have access to a [LUIS](../LUIS/index.yml) app, but still want intents. This can be helpful since it is embedded within the SDK.-- You cannot or do not want to create a LUIS app but you still want some voice-commanding capability.-
-If you do not have access to a [LUIS](../LUIS/index.yml) app, but still want intents, this can be helpful since it is embedded within the SDK.
-
-For supported locales see [here](./language-support.md?tabs=IntentRecognitionPatternMatcher).
+For more information, see the [pattern matching overview](./pattern-matching-overview.md).
## Prerequisites
Be sure you have the following items before you begin this guide:
- A [Cognitive Services Azure resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices) or a [Unified Speech resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices) - [Visual Studio 2019](https://visualstudio.microsoft.com/downloads/) (any edition).
-## Pattern Matching Model overview
-- ::: zone pivot="programming-language-csharp" [!INCLUDE [csharp](includes/how-to/intent-recognition/csharp/pattern-matching.md)] ::: zone-end
cognitive-services How To Use Simple Language Pattern Matching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-use-simple-language-pattern-matching.md
Previously updated : 11/15/2021 Last updated : 04/19/2022 zone_pivot_groups: programming-languages-set-nine
In this guide, you use the Speech SDK to develop a C++ console application that
> - Recognize speech from a microphone > - Use asynchronous, event-driven continuous recognition
-## When should you use this?
+## When to use pattern matching
Use this sample code if:
-* You are only interested in matching very strictly what the user said. These patterns match more aggressively than LUIS.
-* You do not have access to a [LUIS](../LUIS/index.yml) app, but still want intents. This can be helpful since it is embedded within the SDK.
-* You cannot or do not want to create a [LUIS](../LUIS/index.yml) app but you still want some voice-commanding capability.
+* You're only interested in matching strictly what the user said. These patterns match more aggressively than LUIS.
+* You don't have access to a [LUIS](../LUIS/index.yml) app, but still want intents.
+* You can't or don't want to create a [LUIS](../LUIS/index.yml) app but you still want some voice-commanding capability.
-If you do not have access to a [LUIS](../LUIS/index.yml) app, but still want intents, this can be helpful since it is embedded within the SDK.
-
-For supported locales see [here](./language-support.md?tabs=IntentRecognitionPatternMatcher).
+For more information, see the [pattern matching overview](./pattern-matching-overview.md).
## Prerequisites
Be sure you have the following items before you begin this guide:
The simple patterns are a feature of the Speech SDK and need a Cognitive Services resource or a Unified Speech resource.
-A pattern is a phrase that includes an Entity somewhere within it. An Entity is defined by wrapping a word in curly brackets. For example:
+A pattern is a phrase that includes an Entity somewhere within it. An Entity is defined by wrapping a word in curly brackets. This example defines an Entity with the ID "floorName", which is case-sensitive:
```
Take me to the {floorName}
```
-This defines an Entity with the ID "floorName" which is case-sensitive.
- All other special characters and punctuation will be ignored. Intents will be added using calls to the IntentRecognizer->AddIntent() API.
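To make the pattern idea concrete, here is a hypothetical sketch in plain Java regex — not the Speech SDK implementation — of how a pattern containing an Entity like `{floorName}` could extract the spoken value (the class and method names are illustrative):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PatternSketch {
    // Illustrative only: convert a pattern such as "Take me to the {floorName}"
    // into a regex whose single capture group extracts the entity value.
    // The entity ID lookup is case-sensitive, mirroring the behavior described above.
    static String extractEntity(String pattern, String utterance, String entityId) {
        // Quote the literal text, then re-open the regex where the entity sits.
        String regex = Pattern.quote(pattern)
                .replace("{" + entityId + "}", "\\E(.+)\\Q");
        Matcher m = Pattern.compile(regex).matcher(utterance);
        return m.matches() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        // Prints the captured entity value: lobby
        System.out.println(extractEntity(
                "Take me to the {floorName}", "Take me to the lobby", "floorName"));
    }
}
```

In the real SDK the pattern and entity handling is done for you by `IntentRecognizer->AddIntent()`; this sketch only shows the matching concept.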
::: zone pivot="programming-language-cpp" [!INCLUDE [cpp](includes/how-to/intent-recognition/cpp/simple-pattern-matching.md)]+
+## Next steps
+
+* Improve your pattern matching by using [custom entities](how-to-use-custom-entity-pattern-matching.md).
+* Look through our [GitHub samples](https://github.com/Azure-Samples/cognitive-services-speech-sdk).
cognitive-services Pattern Matching Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/pattern-matching-overview.md
keywords: intent recognition pattern matching
# What is pattern matching? +
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/releasenotes.md
See below for information about changes to Speech services and resources.
## What's new?
+* Speech SDK 1.21.0 and Speech CLI 1.21.0 were released in April 2022. See details below.
* Custom speech-to-text container v3.1.0 released in March 2022, with support to get display models.
-* STT Service January 2022, added 18 new locales.
-* Speech SDK 1.20.0 released January 2022. Updates include extended programming language support for DialogServiceConnector, Unity on Linux, enhancements to IntentRecognizer, added support for Python 3.10, and a fix to remove a 10-second delay while stopping a speech recognizer (when using a PushAudioInputStream, and no new audio is pushed in after StopContinuousRecognition is called).
-* Speech CLI 1.20.0 released January 2022. Updates include microphone input for Speaker recognition and expanded support for Intent recognition.
-* TTS Service January 2022, added 10 new languages and variants for Neural text-to-speech and new voices in preview for en-GB, fr-FR and de-DE.
+* TTS Service March 2022, public preview of Cheerful and Sad styles with fr-FR-DeniseNeural.
+* TTS Service February 2022, public preview of Custom Neural Voice Lite, extended CNV language support to 49 locales.
## Release notes
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
The following table has descriptions of each supported role.
## Adjust speaking languages
-All neural voices are multilingual. By default, they are fluent in their own language and English without using `<lang xml:lang>` element. For example, if the input text in English is "I'm excited to try text to speech" and you use the `es-ES-ElviraNeural` voice, the text is spoken in English with a Spanish accent. With most neural voices, setting a specific speaking language with `<lang xml:lang>` element at the sentence or word level is currently not supported.
+By default, all neural voices are fluent in their own language and English without using the `<lang xml:lang>` element. For example, if the input text in English is "I'm excited to try text to speech" and you use the `es-ES-ElviraNeural` voice, the text is spoken in English with a Spanish accent. With most neural voices, setting a specific speaking language with `<lang xml:lang>` element at the sentence or word level is currently not supported.
You can adjust the speaking language for the `en-US-JennyMultilingualNeural` neural voice at the sentence level and word level by using the `<lang xml:lang>` element. The `en-US-JennyMultilingualNeural` neural voice is multilingual in 14 languages (For example: English, Spanish, and Chinese). The supported languages are provided in a table following the `<lang>` syntax and attribute definitions.
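As a minimal sketch (the spoken sentences here are illustrative), switching the `en-US-JennyMultilingualNeural` voice between Spanish and English at the sentence level might look like:

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
  <voice name="en-US-JennyMultilingualNeural">
    <lang xml:lang="es-ES">Hola, me llamo Jenny.</lang>
    <lang xml:lang="en-US">Hi, my name is Jenny.</lang>
  </voice>
</speak>
```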
For more information, see [`addBookmarkReachedEventHandler`](/objectivec/cogniti
+## Supported MathML elements
+
+The Mathematical Markup Language (MathML) is an XML-compliant markup language that lets developers specify how input text is converted into synthesized speech by using text-to-speech.
+
+> [!NOTE]
+> The MathML elements (tags) are currently supported by all neural voices in the `en-US` and `en-AU` locales.
+
+**Example**
+
+This SSML snippet demonstrates how the MathML elements are used to output synthesized speech. The text-to-speech output for this example is "a squared plus b squared equals c squared".
+
+```xml
+<math xmlns="http://www.w3.org/1998/Math/MathML"><msup><mi>a</mi><mn>2</mn></msup><mo>+</mo><msup><mi>b</mi><mn>2</mn></msup><mo>=</mo><msup><mi>c</mi><mn>2</mn></msup></math>
+```
+The `xmlns` attribute in `<math xmlns="http://www.w3.org/1998/Math/MathML">` is optional.
+
+All elements from the [MathML 2.0](https://www.w3.org/TR/MathML2/) and [MathML 3.0](https://www.w3.org/TR/MathML3/) specifications are supported, except the MathML 3.0 [Elementary Math](https://www.w3.org/TR/MathML3/chapter3.html#presm.elementary) elements. The `semantics`, `annotation`, and `annotation-xml` elements don't output speech, so they are ignored.
+
+> [!NOTE]
+> If an element is not recognized, it will be ignored, and the child elements within it will still be processed.
+
+The MathML entities are not supported by XML syntax, so you must use their corresponding [Unicode characters](https://www.w3.org/2003/entities/2007/htmlmathml.json) to represent the entities. For example, the entity `&copy;` should be represented by its Unicode characters `&#x00A9;`; otherwise an error will occur.
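The entity-to-Unicode substitution above can be sketched as a simple pre-processing step. This is an illustrative snippet (the class name and the three-entry entity map are assumptions for the example, not part of the Speech SDK):

```java
import java.util.Map;

public class EntityFix {
    // Minimal illustrative map: a few named MathML/HTML entities and their
    // numeric character references. A real pre-processor would use the full
    // W3C entity table.
    static final Map<String, String> ENTITIES = Map.of(
            "&copy;", "&#x00A9;",
            "&alpha;", "&#x03B1;",
            "&times;", "&#x00D7;");

    // Replace each known named entity with its numeric reference so the
    // resulting markup is valid XML.
    static String toNumericRefs(String markup) {
        String out = markup;
        for (Map.Entry<String, String> e : ENTITIES.entrySet()) {
            out = out.replace(e.getKey(), e.getValue());
        }
        return out;
    }

    public static void main(String[] args) {
        // Prints: Copyright &#x00A9; 2022
        System.out.println(toNumericRefs("Copyright &copy; 2022"));
    }
}
```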
+ ## Next steps [Language support: Voices, locales, languages](language-support.md)
communication-services Call Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-flows.md
For Alice it will be the NAT of the coffee shop and for Bob it will be the NAT o
### Case 3: VoIP where neither a direct nor NAT connection is possible
-If one or both client devices are behind a symmetric NAT, a separate cloud service to relay the media between the two SDKs is required. This service is called TURN (Traversal Using Relays around NAT) and is also provided by the Communication Services. The Communication Services Calling SDK automatically uses TURN services based on detected network conditions.
+If one or both client devices are behind a symmetric NAT, a separate cloud service to relay the media between the two SDKs is required. This service is called TURN (Traversal Using Relays around NAT) and is also provided by the Communication Services. The Communication Services Calling SDK automatically uses TURN services based on detected network conditions. TURN charges are included in the price of the call.
:::image type="content" source="./media/call-flows/about-voice-case-3.png" alt-text="Diagram showing a VOIP call which utilizes a TURN connection.":::
The default real-time protocol (RTP) for group calls is User Datagram Protocol (
:::image type="content" source="./media/call-flows/about-voice-group-calls.png" alt-text="Diagram showing UDP media process flow within Communication Services.":::
-If the SDK can't use UDP for media due to firewall restrictions, an attempt will be made to use the Transmission Control Protocol (TCP). Note that the Media Processor component requires UDP, so when this happens, the Communication Services TURN service will be added to the group call to translate TCP to UDP. TURN charges will be incurred in this case unless TURN capabilities are manually disabled.
+If the SDK can't use UDP for media due to firewall restrictions, an attempt will be made to use the Transmission Control Protocol (TCP). Note that the Media Processor component requires UDP, so when this happens, the Communication Services TURN service will be added to the group call to translate TCP to UDP. TURN charges are included in the price of the call.
:::image type="content" source="./media/call-flows/about-voice-group-calls-2.png" alt-text="Diagram showing TCP media process flow within Communication Services.":::
communication-services Get Started Raw Media Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-raw-media-access.md
Title: Quickstart - Add RAW media access to your app (Android) description: In this quickstart, you'll learn how to add raw media access calling capabilities to your app using Azure Communication Services.-+ - Previously updated : 04/19/2022+ Last updated : 11/18/2021
In this quickstart, you'll learn how to implement raw media access using the Azure Communication Services Calling SDK for Android.
+## Outbound virtual video device
+ The Azure Communication Services Calling SDK offers APIs allowing apps to generate their own video frames to send to remote participants. This quick start builds upon [QuickStart: Add 1:1 video calling to your app](./get-started-with-video-calling.md?pivots=platform-android) for Android.
-## Virtual Video Stream Overview
+## Overview
+
+Once an outbound virtual video device is created, use DeviceManager to make a new virtual video device that behaves just like any other webcam connected to your computer or mobile phone.
Since the app will be generating the video frames, the app must inform the Azure Communication Services Calling SDK about the video formats the app is capable of generating. This is required to allow the Azure Communication Services Calling SDK to pick the best video format configuration given the network conditions at any given time. The app must register a delegate to get notified when it should start or stop producing video frames. The delegate event will inform the app which video format is more appropriate for the current network conditions.
-The following is an overview of the steps required to create a virtual video stream.
+The following is an overview of the steps required to create an outbound virtual video device.
+
+1. Create a `VirtualDeviceIdentification` with basic identification information for the new outbound virtual video device.
+
+ ```java
+ VirtualDeviceIdentification deviceId = new VirtualDeviceIdentification();
+ deviceId.setId("QuickStartVirtualVideoDevice");
+ deviceId.setName("My First Virtual Video Device");
+ ```
-1. Create an array of `VideoFormat` with the video formats supported by the app. It is fine to have only one video format supported, but at least one of the provided video formats must be of the `VideoFrameKind::VideoSoftware` type. When multiple formats are provided, the order of the format in the list does not influence or prioritize which one will be used. The selected format is based on external factors like network bandwidth.
+2. Create an array of `VideoFormat` with the video formats supported by the app. It is fine to have only one video format supported, but at least one of the provided video formats must be of the `MediaFrameKind::VideoSoftware` type. When multiple formats are provided, the order of the format in the list does not influence or prioritize which one will be used. The selected format is based on external factors like network bandwidth.
```java ArrayList<VideoFormat> videoFormats = new ArrayList<VideoFormat>();
The following is an overview of the steps required to create a virtual video str
format.setWidth(1280);
format.setHeight(720);
format.setPixelFormat(PixelFormat.RGBA);
- format.setMediaFrameKind(VideoFrameKind.VIDEO_SOFTWARE);
+ format.setMediaFrameKind(MediaFrameKind.VIDEO_SOFTWARE);
format.setFramesPerSecond(30);
format.setStride1(1280 * 4); // It is times 4 because RGBA is a 32-bit format.
videoFormats.add(format);
```
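The stride arithmetic above is easy to get wrong, so as a sanity check, here is a standalone sketch (not ACS SDK code; the helper names are mine) that computes the per-row stride and total buffer capacity for an RGBA frame:

```java
// Standalone sketch verifying the stride math used above (not ACS SDK code).
public class FrameSizeDemo {
    // RGBA is a 32-bit format, so each pixel occupies 4 bytes.
    static int strideForRgba(int width) {
        return width * 4;
    }

    // Total bytes one frame needs: stride1 * height.
    static int bufferCapacity(int width, int height) {
        return strideForRgba(width) * height;
    }

    public static void main(String[] args) {
        System.out.println(strideForRgba(1280));       // 5120
        System.out.println(bufferCapacity(1280, 720)); // 3686400
    }
}
```

This matches the `setStride1(1280 * 4)` call above: each 1280-pixel row occupies 5120 bytes, so a 720-row frame needs a 3,686,400-byte buffer.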
-2. Create `OutgoingVirtualVideoStreamOptions` and set `VideoFormats` with the previously created object.
+3. Create `OutboundVirtualVideoDeviceOptions` and set `DeviceIdentification` and `VideoFormats` with the previously created objects.
```java
- OutgoingVirtualVideoStreamOptions options = new OutgoingVirtualVideoStreamOptions();
- options.setVideoFormats(videoFormats);
- ```
-
-3. Subscribe to `OutgoingVirtualVideoStreamOptions::addOnOutgoingVideoStreamStateChangedListener` delegate. This delegate will inform the state of the current stream, its important that you do not send frames if the state is no equal to `OutgoingVideoStreamState.STARTED`.
+ OutboundVirtualVideoDeviceOptions m_options = new OutboundVirtualVideoDeviceOptions();
- ```java
- private OutgoingVideoStreamState outgoingVideoStreamState;
+ // ...
- options.addOnOutgoingVideoStreamStateChangedListener(event -> {
-
- outgoingVideoStreamState = event.getOutgoingVideoStreamState();
- });
+ m_options.setDeviceIdentification(deviceId);
+ m_options.setVideoFormats(videoFormats);
```
-4. Make sure the `OutgoingVirtualVideoStreamOptions::addOnVideoFrameSenderChangedListener` delegate is defined. This delegate will inform its listener about events requiring the app to start or stop producing video frames. In this quick start, `mediaFrameSender` is used as trigger to let the app know when it's time to start generating frames. Feel free to use any mechanism in your app as a trigger.
+4. Make sure the `OutboundVirtualVideoDeviceOptions::OnFlowChanged` delegate is defined. This delegate informs its listener about events requiring the app to start or stop producing video frames. In this quick start, `m_mediaFrameSender` is used as a trigger to let the app know when it's time to start generating frames. Feel free to use any mechanism in your app as a trigger.
```java
- private VideoFrameSender mediaFrameSender;
+ private MediaFrameSender m_mediaFrameSender;
- options.addOnVideoFrameSenderChangedListener(event -> {
+ // ...
- mediaFrameSender = event.getMediaFrameSender();
+ m_options.addOnFlowChangedListener(virtualDeviceFlowControlArgs -> {
+ if (virtualDeviceFlowControlArgs.getMediaFrameSender().getRunningState() == VirtualDeviceRunningState.STARTED) {
+ // Tell the app's frame generator to start producing frames.
+ m_mediaFrameSender = virtualDeviceFlowControlArgs.getMediaFrameSender();
+ } else {
+ // Tell the app's frame generator to stop producing frames.
+ m_mediaFrameSender = null;
+ }
}); ```
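The listener above hands the sender to the frame-producing thread through a plain field. Because the listener and the frame loop run on different threads, one way to make that handoff safely visible across threads is an `AtomicReference`. The sketch below is illustrative only; `FakeSender` is a hypothetical stand-in for the SDK's `MediaFrameSender`:

```java
import java.util.concurrent.atomic.AtomicReference;

// Illustrative sketch of the start/stop handoff pattern, not ACS SDK code.
// FakeSender is a hypothetical stand-in for the SDK's MediaFrameSender.
public class FlowControlSketch {
    static class FakeSender { }

    // AtomicReference makes the listener thread's update visible to the frame loop thread.
    private final AtomicReference<FakeSender> sender = new AtomicReference<>();

    void onFlowStarted(FakeSender s) { sender.set(s); }  // start producing frames
    void onFlowStopped() { sender.set(null); }           // stop producing frames

    boolean shouldProduceFrames() { return sender.get() != null; }
}
```

The frame loop can then poll `shouldProduceFrames()` instead of reading a plain field whose updates might not be visible across threads.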
-5. Create an instance of `VirtualVideoStream` using the `OutgoingVirtualVideoStreamOptions` we created previously
+5. Use `DeviceManager` to create the outbound virtual video device with the options created previously.
```java
- private VirtualVideoStream virtualVideoStream;
+ private OutboundVirtualVideoDevice m_outboundVirtualVideoDevice;
+
+ // ...
- virtualVideoStream = new VirtualVideoStream(options);
+ m_outboundVirtualVideoDevice = m_deviceManager.createOutboundVirtualVideoDevice(m_options).get();
```
-7. Once outgoingVideoStreamState is equal to `OutgoingVideoStreamState.STARTED` create and instance of `FrameGenerator` class this will start a non-UI thread and will send frames, call `FrameGenerator.SetVideoFrameSender` each time we get an updated `VideoFrameSender` on the previous delegate, cast the `VideoFrameSender` to the appropriate type defined by the `VideoFrameKind` property of `VideoFormat`. For example, cast it to `SoftwareBasedVideoFrameSender` and then call the `send` method according to the number of planes defined by the MediaFormat.
-After that, create the ByteBuffer backing the video frame if needed. Then, update the content of the video frame. Finally, send the video frame to other participants with the `sendFrame` API.
+6. Tell the device manager to use the recently created virtual camera on calls.
```java
- public class FrameGenerator {
-
- private VideoFrameSender videoFrameSender;
- private Thread frameIteratorThread;
- private final Random random;
- private volatile boolean stopFrameIterator = false;
-
- public FrameGenerator() {
+ private LocalVideoStream m_localVideoStream;
- random = new Random();
- }
-
- public void FrameIterator() {
-
- ByteBuffer plane = null;
- while (!stopFrameIterator && videoFrameSender != null) {
+ // ...
- plane = GenerateFrame(plane);
- }
+ for (VideoDeviceInfo videoDeviceInfo : m_deviceManager.getCameras())
+ {
+ String deviceId = videoDeviceInfo.getId();
+ if (deviceId.equalsIgnoreCase("QuickStartVirtualVideoDevice")) // Same id used in step 1.
+ {
+ m_localVideoStream = new LocalVideoStream(videoDeviceInfo, getApplicationContext());
}
+ }
+ ```
- private ByteBuffer GenerateFrame(ByteBuffer plane)
- {
- try {
+7. In a non-UI thread or loop in the app, cast the `MediaFrameSender` to the appropriate type defined by the `MediaFrameKind` property of `VideoFormat`. For example, cast it to `SoftwareBasedVideoFrame` and then call the `send` method according to the number of planes defined by the MediaFormat.
+After that, create the ByteBuffer backing the video frame if needed. Then, update the content of the video frame. Finally, send the video frame to other participants with the `sendFrame` API.
- SoftwareBasedVideoFrameSender sender = (SoftwareBasedVideoFrameSender) videoFrameSender;
+ ```java
+ java.nio.ByteBuffer plane1 = null;
+ Random rand = new Random();
+ byte greyValue = 0;
+
+ // ...
+
+ while (m_outboundVirtualVideoDevice != null) {
+ while (m_mediaFrameSender != null) {
+ if (m_mediaFrameSender.getMediaFrameKind() == MediaFrameKind.VIDEO_SOFTWARE) {
+ SoftwareBasedVideoFrame sender = (SoftwareBasedVideoFrame) m_mediaFrameSender;
VideoFormat videoFormat = sender.getVideoFormat();
- long timeStamp = sender.getTimestamp();
- if (plane == null || videoFormat.getStride1() * videoFormat.getHeight() != plane.capacity()) {
+ // Gets the timestamp for when the video frame has been created.
+ // This allows better synchronization with audio.
+ int timeStamp = sender.getTimestamp();
- plane = ByteBuffer.allocateDirect(videoFormat.getStride1() * videoFormat.getHeight());
- plane.order(ByteOrder.nativeOrder());
+ // Adjusts frame dimensions to the video format that network conditions can manage.
+ if (plane1 == null || videoFormat.getStride1() * videoFormat.getHeight() != plane1.capacity()) {
+ plane1 = ByteBuffer.allocateDirect(videoFormat.getStride1() * videoFormat.getHeight());
+ plane1.order(ByteOrder.nativeOrder());
}
- int bandsCount = random.nextInt(15) + 1;
+ // Generates random gray scaled bands as video frame.
+ int bandsCount = rand.nextInt(15) + 1;
int bandBegin = 0;
int bandThickness = videoFormat.getHeight() * videoFormat.getStride1() / bandsCount;
for (int i = 0; i < bandsCount; ++i) {
- byte greyValue = (byte) random.nextInt(254);
- java.util.Arrays.fill(plane.array(), bandBegin, bandBegin + bandThickness, greyValue);
+ greyValue = (byte)rand.nextInt(254);
+ // Direct buffers have no accessible backing array, so fill byte by byte.
+ for (int j = bandBegin; j < bandBegin + bandThickness; ++j) {
+ plane1.put(j, greyValue);
+ }
bandBegin += bandThickness; }
- FrameConfirmation fr = sender.sendFrame(plane, timeStamp).get();
+ // Sends video frame to the other participants in the call.
+ FrameConfirmation fr = sender.sendFrame(plane1, timeStamp).get();
+ // Waits before generating the next video frame.
+ // Video format defines how many frames per second app must generate.
Thread.sleep((long) (1000.0f / videoFormat.getFramesPerSecond())); }
- catch (InterruptedException ex) {
-
- ex.printStackTrace();
- }
- catch (ExecutionException ex2)
- {
- ex2.getMessage();
- }
-
- return plane;
}
- private void StartFrameIterator()
- {
- frameIteratorThread = new Thread(this::FrameIterator);
- frameIteratorThread.start();
- }
-
- public void StopFrameIterator()
- {
- try
- {
- if (frameIteratorThread != null)
- {
- stopFrameIterator = true;
- frameIteratorThread.join();
- frameIteratorThread = null;
- stopFrameIterator = false;
- }
- }
- catch (InterruptedException ex)
- {
- ex.getMessage();
- }
- }
-
- @Override
- public void SetVideoFrameSender(VideoFrameSender videoFramSender) {
-
- StopFrameIterator();
- this.videoFrameSender = videoFramSender;
- StartFrameIterator();
- }
+ // Virtual camera hasn't been created yet.
+ // Let's wait a little bit before checking again.
+ // This is for demo only purposes.
+ // Feel free to use a better synchronization mechanism.
+ Thread.sleep(100);
} ```
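The pixel-generation part of the loop above can be pulled out into a standalone helper so the logic can be exercised without the SDK. A minimal sketch, with the SDK calls removed and method names of my own choosing:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.Random;

// Standalone sketch of the gray-band generator from the loop above;
// the ACS SDK calls are removed so the pixel logic can run anywhere.
public class BandFrameGenerator {
    // Fills a direct buffer of stride * height bytes with bandsCount
    // horizontal bands, each painted with a random gray value.
    static ByteBuffer generate(int stride, int height, int bandsCount, Random rand) {
        ByteBuffer plane = ByteBuffer.allocateDirect(stride * height);
        plane.order(ByteOrder.nativeOrder());
        int bandBegin = 0;
        int bandThickness = stride * height / bandsCount;
        for (int i = 0; i < bandsCount; ++i) {
            byte greyValue = (byte) rand.nextInt(254);
            // Direct buffers have no accessible backing array, so fill byte by byte.
            for (int j = bandBegin; j < bandBegin + bandThickness; ++j) {
                plane.put(j, greyValue);
            }
            bandBegin += bandThickness;
        }
        return plane;
    }
}
```

A buffer produced this way can be handed to `sendFrame` exactly as in the loop above; separating the generation step also makes it simple to unit test the banding logic in isolation.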
-## Screen Share Video Stream Overview
-
-Repeat steps `1 to 4` from the previous VirtualVideoStream tutorial.
-
-Since the Android system generates the frames, you have to implement your own foreground service to capture the frames and send them through using our API
-
-The following is an overview of the steps required to create a screen share video stream.
-
-1. Add this permission to your `Manifest.xml` file inside your Android project
-
- ```xml
- <uses-permission android:name="android.permission.FOREGROUND_SERVICE" />
- ```
-
-2. Create an instance of `ScreenShareVideoStream` using the `OutgoingVirtualVideoStreamOptions` we created previously
-
- ```java
- private ScreenShareVideoStream screenShareVideoStream;
-
- screenShareVideoStream = new ScreenShareVideoStream(options);
- ```
-
-3. Request needed permissions for screen capture on Android, once this method is called Android will call automatically `onActivityResult` containing the request code we have sent and the result of the operation, expect `Activity.RESULT_OK` if the permission has been provided by the user if so attach the screenShareVideoStream to the call and start your own foreground service to capture the frames.
-
- ```java
- public void GetScreenSharePermissions() {
-
- try {
-
- MediaProjectionManager mediaProjectionManager = (MediaProjectionManager) getSystemService(Context.MEDIA_PROJECTION_SERVICE);
- startActivityForResult(mediaProjectionManager.createScreenCaptureIntent(), Constants.SCREEN_SHARE_REQUEST_INTENT_REQ_CODE);
- } catch (Exception e) {
-
- String error = "Could not start screen share due to failure to startActivityForResult for mediaProjectionManager screenCaptureIntent";
- }
- }
-
- @Override
- protected void onActivityResult(int requestCode, int resultCode, Intent data) {
-
- super.onActivityResult(requestCode, resultCode, data);
-
- if (requestCode == Constants.SCREEN_SHARE_REQUEST_INTENT_REQ_CODE) {
-
- if (resultCode == Activity.RESULT_OK && data != null) {
-
- // Attach the screenShareVideoStream to the call
- // Start your foreground service
- } else {
-
- String error = "user cancelled, did not give permission to capture screen";
- }
- }
- }
- ```
-
-4. Once you receive a frame on your foreground service send it through using the `VideoFrameSender` provided
-
- ````java
- public void onImageAvailable(ImageReader reader) {
-
- Image image = reader.acquireLatestImage();
- if (image != null) {
-
- final Image.Plane[] planes = image.getPlanes();
- if (planes.length > 0) {
-
- Image.Plane plane = planes[0];
- final ByteBuffer buffer = plane.getBuffer();
- try {
-
- SoftwareBasedVideoFrameSender sender = (SoftwareBasedVideoFrameSender) videoFrameSender;
- sender.sendFrame(buffer, sender.getTimestamp()).get();
- } catch (Exception ex) {
-
- Log.d("MainActivity", "MainActivity.onImageAvailable trace, failed to send Frame");
- }
- }
-
- image.close();
- }
- }
- ````
cosmos-db Graph Visualization Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/graph-visualization-partners.md
Users of all skill levels can take advantage of five unique graph layouts to dis
* [Live Demo for Tom Sawyer Databrowser](https://support.tomsawyer.com/demonstrations/graph.database.browser.demo/)
-* [Deploy on Azure](https://www.tomsawyer.com/cs/c/?cta_guid=b85cf3fc-2978-426d-afb3-c1f858f38e73&signature=AAH58kGNc5criGRMHSUptSOwyD0Znf3lFw&pageId=41375082967&placement_guid=d6cb1de7-6d51-4a89-a012-5a167870a715&click=7bc863ee-3c45-4509-9334-ac7674b7e75e&hsutk=4fa7e492076c5cecf5f03faad22b4a19&canon=https%3A%2F%2Fwww.tomsawyer.com%2Fgraph-database-browser&utm_referrer=https%3A%2F%2Fwww.tomsawyer.com%2F&portal_id=8313502&redirect_url=APefjpF0sV6YjeRqi4bQCt0-ubf_cmTi_nSs28RvMy55Vk01NIf6jtTaTj3GUMJ9D9z5DvIwvPnfSw89Wj9JCS_7cNss_HxsDmlT7wmeJh7BUyuPNEGYGnhucgeUZUzWGqrEeWmReCZByeMdklbMuikFnwasX6046Op7hKKiuQJx84RGd4fe1Rvq7mRLaaySZxdvLlpMg13N_4xo_GzrHRl4P2_VGZGPRUgkS3EvsvLzfJzH36u2HHDSG6AuU9ZRNgiJiH2wMLAgGQT-vDzkSTnYRb0ljRFHCq9kPjsbVDw1bTn0G9R5ZmTbdskypc49-Ob_49MdHif1ufRA9BMLU3Ks6t9TCVJ6fo4R5255u5FK2_v3Jk10yd7y_EhLqzrAv2ov-TzxDd6b&__hstc=169273150.4fa7e492076c5cecf5f03faad22b4a19.1608290688565.1626359177649.1626364757376.11&__hssc=169273150.1.1626364757376&__hsfp=3487988390&contentType=standard-page)
- ## Graphistry Graphistry automatically transforms your data into interactive, visual investigation maps built for the needs of analysts. It can quickly surface relationships between events and entities without having to write queries or wrangle data. You can harness your data without worrying about scale. From security, fraud, and IT investigations to 360° views of customers and supply chains, Graphistry turns the potential of your data into human insight and value.
cosmos-db How To Use Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-use-python.md
[!INCLUDE[appliesto-table-api](../includes/appliesto-table-api.md)]
-This quickstart shows how to access the Azure Cosmos DB [Table API](https://docs.microsoft.com/azure/cosmos-db/table/introduction) from a Python application. The Cosmos DB Table API is a schemaless data store allowing applications to store structured NoSQL data in the cloud. Because data is stored in a schemaless design, new properties (columns) are automatically added to the table when an object with a new attribute is added to the table. Python applications can access the Cosmos DB Table API using the [Azure Data Tables SDK for Python](https://pypi.org/project/azure-data-tables/) package.
+This quickstart shows how to access the Azure Cosmos DB [Table API](introduction.md) from a Python application. The Cosmos DB Table API is a schemaless data store allowing applications to store structured NoSQL data in the cloud. Because data is stored in a schemaless design, new properties (columns) are automatically added to the table when an object with a new attribute is added to the table. Python applications can access the Cosmos DB Table API using the [Azure Data Tables SDK for Python](https://pypi.org/project/azure-data-tables/) package.
## Prerequisites The sample application is written in [Python3.6](https://www.python.org/downloads/), though the principles apply to all Python3.6+ applications. You can use [Visual Studio Code](https://code.visualstudio.com/) as an IDE.
-If you don't have an [Azure subscription](https://docs.microsoft.com/azure/guides/developer/azure-developer-guide#understanding-accounts-subscriptions-and-billing), create a [free account](https://azure.microsoft.com/free/dotnet) before you begin.
+If you don't have an [Azure subscription](/azure/guides/developer/azure-developer-guide#understanding-accounts-subscriptions-and-billing), create a [free account](https://azure.microsoft.com/free/dotnet) before you begin.
## Sample application
Log in to the [Azure portal](https://portal.azure.com/) and follow these steps t
### [Azure CLI](#tab/azure-cli)
-Cosmos DB accounts are created using the [az cosmosdb create](https://docs.microsoft.com/cli/azure/cosmosdb#az-cosmosdb-create) command. You must include the `--capabilities EnableTable` option to enable table storage within your Cosmos DB. As all Azure resources must be contained in a resource group, the following code snippet also creates a resource group for the Cosmos DB account.
+Cosmos DB accounts are created using the [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) command. You must include the `--capabilities EnableTable` option to enable table storage within your Cosmos DB. As all Azure resources must be contained in a resource group, the following code snippet also creates a resource group for the Cosmos DB account.
Cosmos DB account names must be between 3 and 44 characters in length and may contain only lowercase letters, numbers, and the hyphen (-) character. Cosmos DB account names must also be unique across Azure.
-Azure CLI commands can be run in the [Azure Cloud Shell](https://shell.azure.com/) or on a workstation with the [Azure CLI installed](https://docs.microsoft.com/cli/azure/install-azure-cli).
+Azure CLI commands can be run in the [Azure Cloud Shell](https://shell.azure.com/) or on a workstation with the [Azure CLI installed](/cli/azure/install-azure-cli).
It typically takes several minutes for the Cosmos DB account creation process to complete.
az cosmosdb create \
### [Azure PowerShell](#tab/azure-powershell)
-Azure Cosmos DB accounts are created using the [New-AzCosmosDBAccount](https://docs.microsoft.com/powershell/module/az.cosmosdb/new-azcosmosdbaccount) cmdlet. You must include the `-ApiKind "Table"` option to enable table storage within your Cosmos DB. As all Azure resources must be contained in a resource group, the following code snippet also creates a resource group for the Azure Cosmos DB account.
+Azure Cosmos DB accounts are created using the [New-AzCosmosDBAccount](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) cmdlet. You must include the `-ApiKind "Table"` option to enable table storage within your Cosmos DB. As all Azure resources must be contained in a resource group, the following code snippet also creates a resource group for the Azure Cosmos DB account.
Azure Cosmos DB account names must be between 3 and 44 characters in length and may contain only lowercase letters, numbers, and the hyphen (-) character. Azure Cosmos DB account names must also be unique across Azure.
-Azure PowerShell commands can be run in the [Azure Cloud Shell](https://shell.azure.com) or on a workstation with [Azure PowerShell installed](https://docs.microsoft.com/powershell/azure/install-az-ps).
+Azure PowerShell commands can be run in the [Azure Cloud Shell](https://shell.azure.com) or on a workstation with [Azure PowerShell installed](/powershell/azure/install-az-ps).
It typically takes several minutes for the Cosmos DB account creation process to complete.
In the [Azure portal](https://portal.azure.com/), complete the following steps t
### [Azure CLI](#tab/azure-cli)
-Tables in Cosmos DB are created using the [az cosmosdb table create](https://docs.microsoft.com/cli/azure/cosmosdb/table#az-cosmosdb-table-create) command.
+Tables in Cosmos DB are created using the [az cosmosdb table create](/cli/azure/cosmosdb/table#az-cosmosdb-table-create) command.
```azurecli COSMOS_TABLE_NAME='WeatherData'
az cosmosdb table create \
### [Azure PowerShell](#tab/azure-powershell)
-Tables in Cosmos DB are created using the [New-AzCosmosDBTable](https://docs.microsoft.com/powershell/module/az.cosmosdb/new-azcosmosdbtable) cmdlet.
+Tables in Cosmos DB are created using the [New-AzCosmosDBTable](/powershell/module/az.cosmosdb/new-azcosmosdbtable) cmdlet.
```azurepowershell $cosmosTableName = 'WeatherData'
To access your table(s) in Cosmos DB, your app will need the table connection st
### [Azure CLI](#tab/azure-cli)
-To get the primary connection string using Azure CLI, use the [az cosmosdb keys list](https://docs.microsoft.com/cli/azure/cosmosdb/keys#az-cosmosdb-keys-list) command with the option `--type connection-strings`. This command uses a [JMESPath query](https://jmespath.org/) to display only the primary table connection string.
+To get the primary connection string using Azure CLI, use the [az cosmosdb keys list](/cli/azure/cosmosdb/keys#az-cosmosdb-keys-list) command with the option `--type connection-strings`. This command uses a [JMESPath query](https://jmespath.org/) to display only the primary table connection string.
```azurecli # This gets the primary connection string
az cosmosdb keys list \
### [Azure PowerShell](#tab/azure-powershell)
-To get the primary connection string using Azure PowerShell, use the [Get-AzCosmosDBAccountKey](https://docs.microsoft.com/powershell/module/az.cosmosdb/get-azcosmosdbaccountkey) cmdlet.
+To get the primary connection string using Azure PowerShell, use the [Get-AzCosmosDBAccountKey](/powershell/module/az.cosmosdb/get-azcosmosdbaccountkey) cmdlet.
```azurepowershell # This gets the primary connection string
A resource group can be deleted using the [Azure portal](https://portal.azure.co
### [Azure CLI](#tab/azure-cli)
-To delete a resource group using the Azure CLI, use the [az group delete](https://docs.microsoft.com/cli/azure/group#az-group-delete) command with the name of the resource group to be deleted. Deleting a resource group will also remove all Azure resources contained in the resource group.
+To delete a resource group using the Azure CLI, use the [az group delete](/cli/azure/group#az-group-delete) command with the name of the resource group to be deleted. Deleting a resource group will also remove all Azure resources contained in the resource group.
```azurecli az group delete --name $RESOURCE_GROUP_NAME
az group delete --name $RESOURCE_GROUP_NAME
### [Azure PowerShell](#tab/azure-powershell)
-To delete a resource group using Azure PowerShell, use the [Remove-AzResourceGroup](https://docs.microsoft.com/powershell/module/az.resources/remove-azresourcegroup) command with the name of the resource group to be deleted. Deleting a resource group will also remove all Azure resources contained in the resource group.
+To delete a resource group using Azure PowerShell, use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) command with the name of the resource group to be deleted. Deleting a resource group will also remove all Azure resources contained in the resource group.
```azurepowershell Remove-AzResourceGroup -Name $resourceGroupName
Remove-AzResourceGroup -Name $resourceGroupName
In this quickstart, you've learned how to create an Azure Cosmos DB account, create a table using the Data Explorer, and run an app. Now you can query your data using the Table API. > [!div class="nextstepaction"]
-> [Import table data to the Table API](https://docs.microsoft.com/azure/cosmos-db/table/table-import)
+> [Import table data to the Table API](table-import.md)
cost-management-billing Mca Setup Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mca-setup-account.md
tags: billing
Previously updated : 10/07/2021 Last updated : 04/20/2022
Before you start the setup, we recommend you do the following actions:
To complete the setup, you need the following access: -- Owner of the billing profile that was created when the Microsoft Customer Agreement was signed. To learn more about billing profiles, see [understand billing profiles](../understand/mca-overview.md#billing-profiles).
+- Owner of the billing account that was created when the Microsoft Customer Agreement was signed. To learn more about billing accounts, see [Your billing account](../understand/mca-overview.md#your-billing-account).
&mdash; And &mdash; - Enterprise administrator on the enrollment that is renewed.
You can use the following options to start the migration experience for your EA
`https://portal.azure.com/#blade/Microsoft_Azure_SubscriptionManagement/TransitionEnrollment`
-If you have both the enterprise administrator and billing account owner roles or billing profile role, you see the following page in the Azure portal. You can continue setting up your EA enrollments and Microsoft Customer Agreement billing account for transition.
+If you have both the enterprise administrator and billing account owner roles, you see the following page in the Azure portal. You can continue setting up your EA enrollments and Microsoft Customer Agreement billing account for transition.
:::image type="content" source="./media/mca-setup-account/setup-billing-account-page.png" alt-text="Screenshot showing the Set up your billing account page" lightbox="./media/mca-setup-account/setup-billing-account-page.png" :::
-If you don't have the enterprise administrator role for the enterprise agreement or the billing profile owner role for the Microsoft Customer Agreement, then use the following information to get the access that you need to complete setup.
+If you don't have the enterprise administrator role for the enterprise agreement or the billing account owner role for the Microsoft Customer Agreement, then use the following information to get the access that you need to complete setup.
#### If you're not an enterprise administrator on the enrollment
-You see the following page in the Azure portal if you have a billing account or billing profile owner role but you're not an enterprise administrator.
+You see the following page in the Azure portal if you have a billing account owner role but you're not an enterprise administrator.
:::image type="content" source="./media/mca-setup-account/setup-billing-account-page-not-ea-administrator.png" alt-text="Screenshot showing the Set up your billing account page - Prepare your Enterprise Agreement enrollments for transition." lightbox="./media/mca-setup-account/setup-billing-account-page-not-ea-administrator.png" ::: You have two options: - Ask the enterprise administrator of the enrollment to give you the enterprise administrator role. For more information, see [Create another enterprise administrator](ea-portal-administration.md#create-another-enterprise-administrator).-- You can give an enterprise administrator the billing account owner or billing profile owner role. For more information, see [Manage billing roles in the Azure portal](understand-mca-roles.md#manage-billing-roles-in-the-azure-portal).
+- You can give an enterprise administrator the billing account owner role. For more information, see [Manage billing roles in the Azure portal](understand-mca-roles.md#manage-billing-roles-in-the-azure-portal).
If you're given the enterprise administrator role, copy the link on the Set up your billing account page. Open it in your web browser to continue setting up your Microsoft Customer Agreement. Otherwise, send it to the enterprise administrator.
-#### If you're not an owner of the billing profile
+#### If you're not an owner of the billing account
If you're an enterprise administrator but you don't have a billing account, you'll see the following error in the Azure portal that prevents the transition.
-If you believe that you have billing profile owner access to the correct Microsoft Customer Agreement and you see the following message, make sure that you are in the correct tenant for your organization. You might need to change directories.
+If you believe that you have billing account owner access to the correct Microsoft Customer Agreement and you see the following message, make sure that you are in the correct tenant for your organization. You might need to change directories.
:::image type="content" source="./media/mca-setup-account/setup-billing-account-page-not-billing-account-profile-owner.png" alt-text="Screenshot showing the Set up your billing account page - Microsoft Customer Agreement billing account." lightbox="./media/mca-setup-account/setup-billing-account-page-not-billing-account-profile-owner.png" ::: You have two options: -- Ask an existing billing account owner to give you the billing account owner or billing profile owner role. For more information, see [Manage billing roles in the Azure portal](understand-mca-roles.md#manage-billing-roles-in-the-azure-portal)
+- Ask an existing billing account owner to give you the billing account owner role. For more information, see [Manage billing roles in the Azure portal](understand-mca-roles.md#manage-billing-roles-in-the-azure-portal)
- Give the enterprise administrator role to an existing billing account owner. For more information, see [Create another enterprise administrator](ea-portal-administration.md#create-another-enterprise-administrator).
-If you're given the billing account owner or billing profile owner role, copy the link on the Set up your billing account page. Open it in your web browser to continue setting up your Microsoft Customer Agreement. Otherwise, send the link to the billing account owner.
+If you're given the billing account owner role, copy the link on the Set up your billing account page. Open it in your web browser to continue setting up your Microsoft Customer Agreement. Otherwise, send the link to the billing account owner.
#### Prepare enrollment for transition
-After you have owner access to both your EA enrollment and billing profile, you prepare them for transition.
+After you have owner access to both your EA enrollment and billing account, you prepare them for transition.
Open the migration that you were presented previously, or open the link that you were sent in email. The link is `https://portal.azure.com/#blade/Microsoft_Azure_SubscriptionManagement/TransitionEnrollment`.
The following image shows an example of the Prepare your enterprise agreement e
:::image type="content" source="./media/mca-setup-account/setup-billing-account-prepare-enrollment-transition.png" alt-text="Screenshot showing the Set up your billing account page - Prepare your Enterprise Agreement enrollments for transition ready for selections." lightbox="./media/mca-setup-account/setup-billing-account-prepare-enrollment-transition.png" :::
-Next, select the source enrollment to transition. Then select the billing account and billing profile. If validation passes without any problems similar to the following screen, select **Continue** to proceed.
+Next, select the source enrollment to transition. Then select the billing account. If validation passes without any problems similar to the following screen, select **Continue** to proceed.
:::image type="content" source="./media/mca-setup-account/setup-billing-account-prepare-enrollment-transition-continue.png" alt-text="Screenshot showing the Set up your billing account page - Prepare your Enterprise Agreement enrollments for transition with validated choices." lightbox="./media/mca-setup-account/setup-billing-account-prepare-enrollment-transition-continue.png" :::
If your enrollment still has credits, you'll see the following error that preven
`Select another enrollment. This enrollment still has credits and can't be transitioned to a billing account.`
-If you don't have owner permission to the billing profile, you'll see the following error that prevents the transition. You must the have billing profile owner role before before you can transition your enrollment.
-
-`Select another Billing Profile. You do not have owner permission to this profile.`
- If your new billing profile doesn't have the new plan enabled, you'll see the following error. You must enable the plan before you can transition your enrollment. `Select another Billing Profile. The current selection does not have Azure Plan and Azure dev test plan enabled on it.`
data-factory Ci Cd Github Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/ci-cd-github-troubleshoot-guide.md
Azure Resource Manager restricts template size to be 4-MB. Limit the size of you
For small to medium solutions, a single template is easier to understand and maintain. You can see all the resources and values in a single file. For advanced scenarios, linked templates enable you to break down the solution into targeted components. Follow best practice at [Using Linked and Nested Templates](../azure-resource-manager/templates/linked-templates.md?tabs=azure-powershell).
-### DevOps API limit of 20 MB causes ADF trigger twice instead of once
+### DevOps API limit of 20 MB causes ADF trigger twice or more instead of once
#### Issue
-While publishing ADF resources, the azure pipeline triggers twice instead of once.
+While publishing ADF resources, the Azure pipeline triggers twice or more instead of once.
#### Cause
-DevOps has limitation of 20-MB REST api load for arm templates, linked template and global parameters. Large ADF resources are reorganized to get around GitHub API rate limits. That may rarely cause ADF DevOps APIs hit 20-MB limit.
+Azure DevOps imposes a 20-MB REST API payload limit on ARM templates, linked templates, and global parameters. When the ARM template exceeds this size, ADF internally splits the template file into multiple files with linked templates. As a side effect, this split can cause a customer's triggers to run more than once.
#### Resolution
-Use ADF **Automated publish** (preferred) or **manual trigger** method to trigger once instead of twice.
+Use ADF **Automated publish** (preferred) or **manual trigger** method to trigger once instead of twice or more.
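You can estimate ahead of time whether a generated ARM template is at risk of hitting the 20-MB limit by comparing its serialized size against that threshold. A minimal sketch (the helper names are illustrative, not part of ADF or Azure DevOps):

```python
import json
import os

# Hypothetical helpers, not part of ADF or Azure DevOps: check whether an
# exported ARM template payload would exceed the 20-MB REST API limit
# described above (the condition that makes ADF split the template).
DEVOPS_API_LIMIT_BYTES = 20 * 1024 * 1024  # 20 MB

def payload_size(template: dict) -> int:
    """Approximate serialized size of a template payload, in bytes."""
    return len(json.dumps(template).encode("utf-8"))

def exceeds_devops_limit(template_path: str) -> bool:
    """Return True if the on-disk ARM template file is larger than 20 MB."""
    return os.path.getsize(template_path) > DEVOPS_API_LIMIT_BYTES

small = {"$schema": "...", "contentVersion": "1.0.0.0", "resources": []}
print(payload_size(small) < DEVOPS_API_LIMIT_BYTES)  # True
```

Running such a check in CI before publishing gives early warning that the template will be split and that triggers may fire more than once.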
### Cannot connect to GIT Enterprise
The following section isn't valid because the package.json folder isn't valid.
```
It should have DataFactory included in customCommand like *'run build validate $(Build.Repository.LocalPath)/DataFactory/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testResourceGroup/providers/Microsoft.DataFactory/factories/yourFactoryName'*. Make sure the generated YAML file for the higher stage has the required JSON artifacts.
-### Git Repository or Microsoft Purview Connection Disconnected
+### Git Repository or Microsoft Purview connection disconnected
#### Issue
When deploying a service instance, the Git repository or Microsoft Purview connection is disconnected.
data-factory Connector Sharepoint Online List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sharepoint-online-list.md
Previously updated : 09/09/2021 Last updated : 04/18/2022

# Copy data from SharePoint Online List by using Azure Data Factory or Azure Synapse Analytics
The SharePoint List Online connector uses service principal authentication to co
2. Grant SharePoint Online site permission to your registered application:
- > [!NOTE]
- > This operation requires SharePoint Online site owner permission. You can find the owner by going to the site home page -> click the "X members" in the right corner -> check who has the "Owner" role.
1. Open SharePoint Online site link e.g. `https://[your_site_url]/_layouts/15/appinv.aspx` (replace the site URL).
    2. Search the application ID you registered, fill the empty fields, and click "Create".
       - App Domain: `localhost.com`
       - Redirect URL: `https://www.localhost.com`
- - Permission Request XML:
+ - Permission Request XML
+ For the site owner role, the Permission Request XML is:
+
+ ```xml
+ <AppPermissionRequests>
+ <AppPermissionRequest Scope="http://sharepoint/content/sitecollection/web" Right="Read"/>
+ </AppPermissionRequests>
+ ```
+
+ :::image type="content" source="media/connector-sharepoint-online-list/sharepoint-online-grant-permission-owner.png" alt-text="Grant SharePoint Online site permission to your registered application when you have site owner role.":::
+
+ > [!NOTE]
+ > You can find the site owner by going to the site home page -> select **Settings** in the top right corner -> select **Site permissions** and check who has the site owner role.
+
+ For the site admin role, the Permission Request XML is:
- ```xml
- <AppPermissionRequests AllowAppOnlyPolicy="true">
- <AppPermissionRequest Scope="http://sharepoint/content/sitecollection/web" Right="Read"/>
- </AppPermissionRequests>
- ```
+ ```xml
+ <AppPermissionRequests AllowAppOnlyPolicy="true">
+ <AppPermissionRequest Scope="http://sharepoint/content/sitecollection/web" Right="Read"/>
+ </AppPermissionRequests>
+ ```
- :::image type="content" source="media/connector-sharepoint-online-list/sharepoint-online-grant-permission.png" alt-text="sharepoint grant permission":::
+ :::image type="content" source="media/connector-sharepoint-online-list/sharepoint-online-grant-permission-admin.png" alt-text="Grant SharePoint Online site permission to your registered application when you have site admin role.":::
3. Click "Trust It" for this app.
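The appinv page URL in step 1 above follows a fixed pattern, so it can be built from the site URL. A minimal illustrative sketch (the helper is hypothetical, not part of the connector):

```python
# Hypothetical helper, not part of the connector: builds the appinv.aspx
# link from step 1 above, given your SharePoint Online site URL.
def appinv_url(site_url: str) -> str:
    """Return the app-registration page URL for a SharePoint Online site."""
    return site_url.rstrip("/") + "/_layouts/15/appinv.aspx"

print(appinv_url("https://contoso.sharepoint.com/sites/sales"))
# https://contoso.sharepoint.com/sites/sales/_layouts/15/appinv.aspx
```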
data-factory Transform Data Using Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-script.md
Previously updated : 02/28/2022 Last updated : 04/20/2022

# Transform data by using the Script activity in Azure Data Factory or Synapse Analytics
data-factory Tutorial Data Flow Adventure Works Retail Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-data-flow-adventure-works-retail-template.md
AdventureWorks is a fictional sports equipment retailer that is used to demo Mic
## Prerequisites

* **Azure subscription**. If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin.
-* **Azure Synapse workspace**. [Create an Azure Synapse Workspace](../storage/common/storage-account-create.md) if you don't have one already.
+* **Azure Synapse workspace**. [Create an Azure Synapse Workspace](../synapse-analytics/get-started-create-workspace.md) if you don't have one already.
## Find the template
digital-twins Concepts Data History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-data-history.md
Once twin property values are historized to Azure Data Explorer, you can run joint queries using the [Azure Digital Twins plugin for Azure Data Explorer](concepts-data-explorer-plugin.md) to reason across digital twins, their relationships, and time series data to gain insights into the behavior of modeled environments. You can also use these queries to power operational dashboards, enrich 2D and 3D web applications, and drive immersive augmented/mixed reality experiences to convey the current and historical state of assets, processes, and people modeled in Azure Digital Twins.
+For more of an introduction to data history, including a quick demo, watch the following IoT show video:
+
+<iframe src="https://aka.ms/docs/player?id=2f9a9af4-1556-44ea-ab5f-afcfd6eb9c15" width="1080" height="530"></iframe>
+ ## Required resources and data flow Data history requires the following resources:
digital-twins How To Use Data History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-data-history.md
This article shows how to set up a working data history connection between Azure
It also contains a sample twin graph that you can use to see the historized twin property updates in Azure Data Explorer.
->[!NOTE]
->You can also work with data history using the [2021-06-30-preview](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/digitaltwins/data-plane/Microsoft.DigitalTwins/preview/2021-06-30-preview) version of the rest APIs. That process isn't shown in this article.
+>[!TIP]
+>Although this article uses the Azure portal, you can also work with data history using the [2021-06-30-preview](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/digitaltwins/data-plane/Microsoft.DigitalTwins/preview/2021-06-30-preview) version of the REST APIs.
## Prerequisites
dms Ads Sku Recommend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/ads-sku-recommend.md
Title: Get right-sized Azure recommendation for your on-premises SQL Server database(s)
-description: Learn how to use the Azure SQL Migration extension in Azure Data Studio to get SKU recommendation to migrate SQL Server database(s) to the right-sized Azure SQL Managed Instance or SQL Server on Azure Virtual Machines.
+description: Learn how to use the Azure SQL migration extension in Azure Data Studio to get SKU recommendation to migrate SQL Server database(s) to the right-sized Azure SQL Managed Instance or SQL Server on Azure Virtual Machines.
# Get right-sized Azure recommendation for your on-premises SQL Server database(s)
-The [Azure SQL Migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension) provides a unified experience to assess, get right-sized Azure recommendations and migrate your SQL Server database(s) to Azure.
+The [Azure SQL migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension) provides a unified experience to assess, get right-sized Azure recommendations and migrate your SQL Server database(s) to Azure.
Before migrating your SQL Server databases to Azure, it is important to assess them to identify migration issues, if any, so you can remediate them and confidently migrate them to Azure. Moreover, it is equally important to identify the right-sized configuration in Azure to ensure your database workload performance requirements are met with minimal cost.
-The Azure SQL Migration extension for Azure Data Studio provides both the assessment and SKU recommendation (right-sized Azure recommended configuration) capabilities when you are trying to select the best option to migrate your SQL Server database(s) to Azure SQL Managed Instance or SQL Server on Azure Virtual Machines. The extension provides a user friendly interface to run the assessment and generate recommendations within a short timeframe.
+The Azure SQL migration extension for Azure Data Studio provides both the assessment and SKU recommendation (right-sized Azure recommended configuration) capabilities when you are trying to select the best option to migrate your SQL Server database(s) to Azure SQL Managed Instance or SQL Server on Azure Virtual Machines. The extension provides a user-friendly interface to run the assessment and generate recommendations within a short timeframe.
> [!NOTE]
-> Assessment and Azure recommendation feature in the Azure SQL Migration extension for Azure Data Studio also supports source SQL Server running on Linux.
+> The assessment and Azure recommendation features in the Azure SQL migration extension for Azure Data Studio also support source SQL Server instances running on Linux.
## Performance data collection and SKU recommendation
-With the Azure SQL Migration extension, you can get a right-sized Azure recommendation to migrate your SQL Server databases to Azure SQL Managed Instance or SQL Server on Azure Virtual Machines. The extension collects and analyzes performance data from your SQL Server instance to generate a recommended SKU each for Azure SQL Managed Instance and SQL Server on Azure Virtual Machines that meets your database(s)' performance characteristics with the lowest cost.
+With the Azure SQL migration extension, you can get a right-sized Azure recommendation to migrate your SQL Server databases to Azure SQL Managed Instance or SQL Server on Azure Virtual Machines. The extension collects and analyzes performance data from your SQL Server instance to generate a recommended SKU for each of Azure SQL Managed Instance and SQL Server on Azure Virtual Machines that meets your database(s)' performance characteristics at the lowest cost.
The workflow for data collection and SKU recommendation is illustrated below.
The workflow for data collection and SKU recommendation is illustrated below.
1. **Performance data collection**: To start the performance data collection process in the migration wizard, select **Get Azure recommendation** and choose the option to collect performance data as shown below. Provide the folder where the collected data will be saved and select **Start**.

   :::image type="content" source="media/ads-sku-recommend/collect-performance-data.png" alt-text="Collect performance data for SKU recommendation":::
- When you start the data collection process in the migration wizard, the Azure SQL Migration extension for Azure Data Studio collects data from your SQL Server instance that includes information about the hardware configuration and aggregated SQL Server specific performance data from system Dynamic Management Views (DMVs) such as CPU utilization, memory utilization, storage size, IO, throughput and IO latency.
+ When you start the data collection process in the migration wizard, the Azure SQL migration extension for Azure Data Studio collects data from your SQL Server instance that includes information about the hardware configuration and aggregated SQL Server specific performance data from system Dynamic Management Views (DMVs) such as CPU utilization, memory utilization, storage size, IO, throughput and IO latency.
> [!IMPORTANT]
> - The data collection process runs for 10 minutes to generate the first recommendation. It is important to start the data collection process when your database workload reflects usage close to your production scenarios.</br>
> - After the first recommendation is generated, you can continue to run the data collection process to refine recommendations especially if your usage patterns vary for an extended duration of time.
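The extension's aggregation logic is internal, but the general idea of summarizing sampled counters (for example, CPU utilization from the DMVs mentioned above) into sizing inputs can be sketched as follows (illustrative only; the function name and percentile choice are assumptions, not the extension's actual algorithm):

```python
# Illustrative only: the extension's real sizing logic is internal.
# Summarize a series of sampled performance counters (for example CPU %)
# into the kind of aggregates a sizing step might consume.
def summarize(samples):
    """Aggregate performance samples into average, 95th percentile, and max."""
    ordered = sorted(samples)
    p95_index = max(0, int(round(0.95 * len(ordered))) - 1)
    return {
        "avg": sum(ordered) / len(ordered),
        "p95": ordered[p95_index],  # size for near-peak load, not the mean
        "max": ordered[-1],
    }

cpu_percent = [12.0, 18.5, 22.0, 35.0, 90.0]
print(summarize(cpu_percent)["max"])  # 90.0
```

Sizing against a high percentile rather than the average is why it matters that the collection window reflects production-like load.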
The workflow for data collection and SKU recommendation is illustrated below.
> - return to start the data collection again from the migration wizard;

### Import existing performance data
-Any existing Performance data that you collected previously using the Azure SQL Migration extension or [using the console application in Data Migration Assistant](/sql/dma/dma-sku-recommend-sql-db) can be imported in the migration wizard to view the recommendation.</br>
+Any existing performance data that you collected previously using the Azure SQL migration extension or [using the console application in Data Migration Assistant](/sql/dma/dma-sku-recommend-sql-db) can be imported in the migration wizard to view the recommendation.</br>
Simply provide the folder location where the performance data files are saved and select **Start** to instantly view the recommendation and its details.</br>

:::image type="content" source="media/ads-sku-recommend/import-sku-data.png" alt-text="Import performance data for SKU recommendation":::

## Prerequisites

The following prerequisites are required to get Azure recommendation:

* [Download and install Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio)
-* [Install the Azure SQL Migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from the Azure Data Studio marketplace
+* [Install the Azure SQL migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from the Azure Data Studio marketplace
* Ensure that the logins used to connect the source SQL Server are members of the *sysadmin* server role or have `CONTROL SERVER` permission.

## Next steps

-- For an overview of the architecture to migrate databases, see [Migrate databases with Azure SQL Migration extension for Azure Data Studio](migration-using-azure-data-studio.md).
+- For an overview of the architecture to migrate databases, see [Migrate databases with Azure SQL migration extension for Azure Data Studio](migration-using-azure-data-studio.md).
dms Migration Dms Powershell Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migration-dms-powershell-cli.md
Title: Migrate databases at scale using Azure PowerShell / CLI
-description: Learn how to use Azure PowerShell or CLI to migrate databases at scale using the capabilities of Azure SQL Migration extension in Azure Data Studio with Azure Database Migration Service.
+description: Learn how to use Azure PowerShell or CLI to migrate databases at scale using the capabilities of Azure SQL migration extension in Azure Data Studio with Azure Database Migration Service.
# Migrate databases at scale using automation (Preview)
-The [Azure SQL Migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension) enables you to assess, get Azure recommendations and migrate your SQL Server databases to Azure. Using automation with [Azure PowerShell](/powershell/module/az.datamigration) or [Azure CLI](/cli/azure/datamigration), you can leverage the capabilities of the extension with Azure Database Migration Service to migrate one or more databases at scale (including databases across multiple SQL Server instances).
+The [Azure SQL migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension) enables you to assess, get Azure recommendations and migrate your SQL Server databases to Azure. Using automation with [Azure PowerShell](/powershell/module/az.datamigration) or [Azure CLI](/cli/azure/datamigration), you can leverage the capabilities of the extension with Azure Database Migration Service to migrate one or more databases at scale (including databases across multiple SQL Server instances).
The following sample scripts can be referenced to suit your migration scenario using Azure PowerShell or Azure CLI:
dms Migration Using Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migration-using-azure-data-studio.md
Title: Migrate using Azure Data Studio
-description: Learn how to use the Azure SQL Migration extension in Azure Data Studio to migrate databases with Azure Database Migration Service.
+description: Learn how to use the Azure SQL migration extension in Azure Data Studio to migrate databases with Azure Database Migration Service.
Last updated 02/22/2022
-# Migrate databases with Azure SQL Migration extension for Azure Data Studio
+# Migrate databases with Azure SQL migration extension for Azure Data Studio
-The [Azure SQL Migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension) enables you to assess, get Azure recommendations and migrate your SQL Server databases to Azure.
+The [Azure SQL migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension) enables you to assess, get Azure recommendations and migrate your SQL Server databases to Azure.
-The key benefits of using the Azure SQL Migration extension for Azure Data Studio are:
-1. Assess your SQL Server databases for Azure readiness or to identify any migration blockers before migrating them to Azure. You can assess SQL Server databases running on both Windows and Linux Operating System using the Azure SQL Migration extension.
+The key benefits of using the Azure SQL migration extension for Azure Data Studio are:
+1. Assess your SQL Server databases for Azure readiness or to identify any migration blockers before migrating them to Azure. You can assess SQL Server databases running on both Windows and Linux Operating System using the Azure SQL migration extension.
1. Get right-sized Azure recommendation based on performance data collected from your source SQL Server databases. To learn more, see [Get right-sized Azure recommendation for your on-premises SQL Server database(s)](ads-sku-recommend.md).
1. Perform online (minimal downtime) and offline database migrations using an easy-to-use wizard. To see step-by-step tutorial, see sample [Tutorial: Migrate SQL Server to an Azure SQL Managed Instance online using Azure Data Studio with DMS](tutorial-sql-server-managed-instance-online-ads.md).
1. Monitor all migrations started in Azure Data Studio from the Azure portal. To learn more, see [Monitor database migration progress from the Azure portal](#monitor-database-migration-progress-from-the-azure-portal).
-1. Leverage the capabilities of the Azure SQL Migration extension to assess and migrate databases at scale using automation with Azure PowerShell and Azure CLI. To learn more, see [Migrate databases at scale using automation](migration-dms-powershell-cli.md).
+1. Leverage the capabilities of the Azure SQL migration extension to assess and migrate databases at scale using automation with Azure PowerShell and Azure CLI. To learn more, see [Migrate databases at scale using automation](migration-dms-powershell-cli.md).
-## Architecture of Azure SQL Migration extension for Azure Data Studio
+## Architecture of Azure SQL migration extension for Azure Data Studio
Azure Database Migration Service (DMS) is one of the core components in the overall architecture. DMS provides a reliable migration orchestrator to enable database migrations to Azure SQL.
-Create or reuse an existing DMS using the Azure SQL Migration extension in Azure Data Studio(ADS).
+Create or reuse an existing DMS using the Azure SQL migration extension in Azure Data Studio (ADS).
DMS uses Azure Data Factory's self-hosted integration runtime to access and upload valid backup files from your on-premises network share or your Azure Storage account. The workflow of the migration process is illustrated below.
The workflow of the migration process is illustrated below.
1. **Source SQL Server**: SQL Server instance on-premises, private cloud, or any public cloud virtual machine. All versions of SQL Server 2008 and above are supported.
1. **Target Azure SQL**: Supported Azure SQL targets are Azure SQL Managed Instance or SQL Server on Azure Virtual Machines (registered with SQL IaaS Agent extension in [Full management mode](../azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management.md#management-modes))
1. **Network File Share**: Server Message Block (SMB) network file share where backup files are stored for the database(s) to be migrated. Azure Storage blob containers and Azure Storage file share are also supported.
-1. **Azure Data Studio**: Download and install the [Azure SQL Migration extension in Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension).
+1. **Azure Data Studio**: Download and install the [Azure SQL migration extension in Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension).
1. **Azure DMS**: Azure service that orchestrates migration pipelines to do data movement activities from on-premises to Azure. DMS is associated with Azure Data Factory's (ADF) self-hosted integration runtime (IR) and provides the capability to register and monitor the self-hosted IR.
1. **Self-hosted integration runtime (IR)**: Self-hosted IR should be installed on a machine that can connect to the source SQL Server and the backup files location. DMS provides the authentication keys and registers the self-hosted IR.
1. **Backup files upload to Azure Storage**: DMS uses self-hosted IR to upload valid backup files from the on-premises backup location to your provisioned Azure Storage account. Data movement activities and pipelines are automatically created in the migration workflow to upload the backup files.
The workflow of the migration process is illustrated below.
Azure Database Migration Service prerequisites that are common across all supported migration scenarios include the need to:

* [Download and install Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio)
-* [Install the Azure SQL Migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from the Azure Data Studio marketplace
+* [Install the Azure SQL migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from the Azure Data Studio marketplace
* Have an Azure account that is assigned to one of the built-in roles listed below:
  - Contributor for the target Azure SQL Managed Instance (and Storage Account to upload your database backup files from SMB network share).
  - Reader role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account.
Azure Database Migration Service prerequisites that are common across all suppor
- Configure self-hosted integration runtime to auto-update to automatically apply any new features, bug fixes, and enhancements that are released. To learn more, see [Self-hosted Integration Runtime Auto-update](../data-factory/self-hosted-integration-runtime-auto-update.md).

## Monitor database migration progress from the Azure portal
-When you migrate database(s) using the Azure SQL Migration extension for Azure Data Studio, the migrations are orchestrated by the Azure Database Migration Service that was selected in the wizard. To monitor database migrations from the Azure portal,
+When you migrate database(s) using the Azure SQL migration extension for Azure Data Studio, the migrations are orchestrated by the Azure Database Migration Service that was selected in the wizard. To monitor database migrations from the Azure portal,
- Open the [Azure portal](https://portal.azure.com/)
- Search for your Azure Database Migration Service by the resource name

  :::image type="content" source="media/migration-using-azure-data-studio/search-dms-portal.png" alt-text="Search Azure Database Migration Service resource in portal":::
When you migrate database(s) using the Azure SQL Migration extension for Azure D
- When migrating to SQL Server on Azure Virtual Machines, SQL Server 2014 and below as target versions are not supported currently.
- Migrating to Azure SQL Database isn't supported.
- Azure storage accounts secured by specific firewall rules or configured with a private endpoint are not supported for migrations.
-- You can't use an existing self-hosted integration runtime created from Azure Data Factory for database migrations with DMS. Initially, the self-hosted integration runtime should be created using the Azure SQL Migration extension in Azure Data Studio and can be reused for further database migrations.
+- You can't use an existing self-hosted integration runtime created from Azure Data Factory for database migrations with DMS. Initially, the self-hosted integration runtime should be created using the Azure SQL migration extension in Azure Data Studio and can be reused for further database migrations.
## Pricing
-- Azure Database Migration Service is free to use with the Azure SQL Migration extension in Azure Data Studio. You can migrate multiple SQL Server databases using the Azure Database Migration Service at no charge for using the service or the Azure SQL Migration extension.
+- Azure Database Migration Service is free to use with the Azure SQL migration extension in Azure Data Studio. You can migrate multiple SQL Server databases using the Azure Database Migration Service at no charge for using the service or the Azure SQL migration extension.
- There's no data movement or data ingress cost for migrating your databases from on-premises to Azure. If the source database is moved from another region or an Azure VM, you may incur [bandwidth charges](https://azure.microsoft.com/pricing/details/bandwidth/) based on your bandwidth provider and routing scenario.
- Provide your own machine or on-premises server to install Azure Data Studio.
- A self-hosted integration runtime is needed to access database backups from your on-premises network share.

## Regional Availability
-For the list of Azure regions that support database migrations using the Azure SQL Migration extension for Azure Data studio (powered by Azure DMS), see [Azure products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=database-migration)
+For the list of Azure regions that support database migrations using the Azure SQL migration extension for Azure Data studio (powered by Azure DMS), see [Azure products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=database-migration)
## Next steps
-- For an overview and installation of the Azure SQL Migration extension, see [Azure SQL Migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension).
+- For an overview and installation of the Azure SQL migration extension, see [Azure SQL migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension).
dms Tutorial Sql Server Managed Instance Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-offline-ads.md
Last updated 10/05/2021
# Tutorial: Migrate SQL Server to an Azure SQL Managed Instance offline using Azure Data Studio with DMS
-You can use the Azure SQL Migration extension in Azure Data Studio to migrate the database(s) from a SQL Server instance to Azure SQL Managed Instance. For methods that may require some manual effort, see the article [SQL Server instance migration to Azure SQL Managed Instance](../azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-guide.md).
+You can use the Azure SQL migration extension in Azure Data Studio to migrate the database(s) from a SQL Server instance to Azure SQL Managed Instance. For methods that may require some manual effort, see the article [SQL Server instance migration to Azure SQL Managed Instance](../azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-guide.md).
In this tutorial, you migrate the **AdventureWorks** database from an on-premises instance of SQL Server to Azure SQL Managed Instance by using Azure Data Studio with Azure Database Migration Service (DMS). This tutorial focuses on the offline migration mode that considers an acceptable downtime during the migration process.
This article describes an offline migration from SQL Server to a SQL Managed Ins
To complete this tutorial, you need to:

* [Download and install Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio)
-* [Install the Azure SQL Migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from the Azure Data Studio marketplace
+* [Install the Azure SQL migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from the Azure Data Studio marketplace
* Have an Azure account that is assigned to one of the built-in roles listed below:
  - Contributor for the target Azure SQL Managed Instance (and Storage Account to upload your database backup files from SMB network share).
  - Reader role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account.
dms Tutorial Sql Server Managed Instance Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-online-ads.md
Last updated 10/05/2021
# Tutorial: Migrate SQL Server to an Azure SQL Managed Instance online using Azure Data Studio with DMS
-Use the Azure SQL Migration extension in Azure Data Studio to migrate database(s) from a SQL Server instance to an [Azure SQL Managed Instance](../azure-sql/managed-instance/sql-managed-instance-paas-overview.md) with minimal downtime. For methods that may require some manual effort, see the article [SQL Server instance migration to Azure SQL Managed Instance](../azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-guide.md).
+Use the Azure SQL migration extension in Azure Data Studio to migrate database(s) from a SQL Server instance to an [Azure SQL Managed Instance](../azure-sql/managed-instance/sql-managed-instance-paas-overview.md) with minimal downtime. For methods that may require some manual effort, see the article [SQL Server instance migration to Azure SQL Managed Instance](../azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-guide.md).
In this tutorial, you migrate the **AdventureWorks** database from an on-premises instance of SQL Server to Azure SQL Managed Instance with minimal downtime by using Azure Data Studio with Azure Database Migration Service (DMS). This tutorial focuses on the online migration mode where application downtime is limited to a short cutover at the end of the migration.
This article describes an online database migration from SQL Server to Azure SQL
To complete this tutorial, you need to:

* [Download and install Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio)
-* [Install the Azure SQL Migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from the Azure Data Studio marketplace
+* [Install the Azure SQL migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from the Azure Data Studio marketplace
* Have an Azure account that is assigned to one of the built-in roles listed below:
  - Contributor for the target Azure SQL Managed Instance (and Storage Account to upload your database backup files from SMB network share).
  - Reader role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account.
dms Tutorial Sql Server To Virtual Machine Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-virtual-machine-offline-ads.md
Last updated 10/05/2021
# Tutorial: Migrate SQL Server to SQL Server on Azure Virtual Machine offline using Azure Data Studio with DMS
-Use the Azure SQL Migration extension in Azure Data Studio to migrate the databases from a SQL Server instance to a [SQL Server on Azure Virtual Machine (SQL Server 2016 and above)](../azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md) with minimal downtime. For methods that may require some manual effort, see the article [SQL Server instance migration to SQL Server on Azure Virtual Machine](../azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-migration-overview.md).
+Use the Azure SQL migration extension in Azure Data Studio to migrate the databases from a SQL Server instance to a [SQL Server on Azure Virtual Machine (SQL Server 2016 and above)](../azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md) with minimal downtime. For methods that may require some manual effort, see the article [SQL Server instance migration to SQL Server on Azure Virtual Machine](../azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-migration-overview.md).
In this tutorial, you migrate the **AdventureWorks** database from an on-premises instance of SQL Server to a SQL Server on Azure Virtual Machine with the offline migration method by using Azure Data Studio with Azure Database Migration Service.
This article describes an offline migration from SQL Server to a SQL Server on A
To complete this tutorial, you need to:

* [Download and install Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio)
-* [Install the Azure SQL Migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from the Azure Data Studio marketplace
+* [Install the Azure SQL migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from the Azure Data Studio marketplace
* Have an Azure account that is assigned to one of the built-in roles listed below:
  - Contributor for the target Azure SQL Managed Instance (and Storage Account to upload your database backup files from SMB network share).
  - Reader role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account.
dms Tutorial Sql Server To Virtual Machine Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-virtual-machine-online-ads.md
Last updated 10/05/2021
# Tutorial: Migrate SQL Server to SQL Server on Azure Virtual Machine online using Azure Data Studio with DMS
-Use the Azure SQL Migration extension in Azure Data Studio to migrate the databases from a SQL Server instance to a [SQL Server on Azure Virtual Machine (SQL Server 2016 and above)](../azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md) with minimal downtime. For methods that may require some manual effort, see the article [SQL Server instance migration to SQL Server on Azure Virtual Machine](../azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-migration-overview.md).
+Use the Azure SQL migration extension in Azure Data Studio to migrate the databases from a SQL Server instance to a [SQL Server on Azure Virtual Machine (SQL Server 2016 and above)](../azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md) with minimal downtime. For methods that may require some manual effort, see the article [SQL Server instance migration to SQL Server on Azure Virtual Machine](../azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-migration-overview.md).
In this tutorial, you migrate the **AdventureWorks** database from an on-premises instance of SQL Server to a SQL Server on Azure Virtual Machine with minimal downtime by using Azure Data Studio with Azure Database Migration Service.
This article describes an online migration from SQL Server to a SQL Server on Az
To complete this tutorial, you need to:

* [Download and install Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio)
-* [Install the Azure SQL Migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from the Azure Data Studio marketplace
+* [Install the Azure SQL migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from the Azure Data Studio marketplace
* Have an Azure account that is assigned to one of the built-in roles listed below:
  - Contributor for the target Azure SQL Managed Instance (and Storage Account to upload your database backup files from SMB network share).
  - Reader role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account.
dns Dns Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-overview.md
For more information, see [Overview of Azure DNS alias records](dns-alias.md).
* To learn how to create a zone in Azure DNS, see [Create a DNS zone](./dns-getstarted-portal.md).
* For frequently asked questions about Azure DNS, see the [Azure DNS FAQ](dns-faq.yml).
+* [Learn module: Introduction to Azure DNS](/learn/modules/intro-to-azure-dns).
dns Private Dns Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-overview.md
For pricing information, see [Azure DNS Pricing](https://azure.microsoft.com/pri
* Learn about DNS zones and records by visiting [DNS zones and records overview](dns-zones-records.md).
* Learn about some of the other key [networking capabilities](../networking/fundamentals/networking-overview.md) of Azure.
+* [Learn module: Introduction to Azure DNS](/learn/modules/intro-to-azure-dns).
expressroute Expressroute Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-introduction.md
Subscribe to the RSS feed and view the latest ExpressRoute feature updates on th
## Next steps * Ensure that all prerequisites are met. See [ExpressRoute prerequisites](expressroute-prerequisites.md).
+* [Learn module: Introduction to Azure ExpressRoute](/learn/modules/intro-to-azure-expressroute).
* Learn about [ExpressRoute connectivity models](expressroute-connectivity-models.md). * Find a service provider. See [ExpressRoute partners and peering locations](expressroute-locations.md).
governance Built In Initiatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-initiatives.md
Title: List of built-in policy initiatives
description: List built-in policy initiatives for Azure Policy. Categories include Regulatory Compliance, Guest Configuration, and more.
Previously updated : 03/08/2022
Last updated : 04/19/2022
# Azure Policy built-in initiative definitions
hdinsight Apache Spark Load Data Run Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-load-data-run-query.md
Jupyter Notebook is an interactive notebook environment that supports various pr
1. Edit the URL `https://SPARKCLUSTER.azurehdinsight.net/jupyter` by replacing `SPARKCLUSTER` with the name of your Spark cluster. Then enter the edited URL in a web browser. If prompted, enter the cluster login credentials for the cluster.
-2. From the Jupyter web page, Select **New** > **PySpark** to create a notebook.
+2. From the Jupyter web page: for Spark 2.4 clusters, select **New** > **PySpark** to create a notebook. For Spark 3.1 clusters, select **New** > **PySpark3** instead, because the PySpark kernel is no longer available in Spark 3.1.
:::image type="content" source="./media/apache-spark-load-data-run-query/hdinsight-spark-create-jupyter-interactive-spark-sql-query.png " alt-text="Create a Jupyter Notebook to run interactive Spark SQL query" border="true":::

A new notebook is created and opened with the name Untitled (`Untitled.ipynb`).

> [!NOTE]
- > By using the PySpark kernel to create a notebook, the `spark` session is automatically created for you when you run the first code cell. You do not need to explicitly create the session.
+ > When you use the PySpark or PySpark3 kernel to create a notebook, the `spark` session is automatically created for you when you run the first code cell. You do not need to explicitly create the session.
+ ## Create a dataframe from a CSV file
healthcare-apis Configure Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-import-data.md
Title: Configure import settings in the FHIR service - Azure Health Data Services
-description: This article describes how to configure import settings in the FHIR service
+description: This article describes how to configure import settings in the FHIR service.
Previously updated : 04/16/2022 Last updated : 04/20/2022
-# Configure bulk import settings (Preview)
+# Configure bulk-import settings (Preview)
The FHIR service supports the $import operation, which allows you to import data into the FHIR service account from a storage account.
After you've completed this final step, you're ready to import data using $impor
In this article, you've learned that the FHIR service supports the $import operation and how it allows you to import data into the FHIR service account from a storage account. You also learned about the three steps used in configuring import settings in the FHIR service. For more information about converting data to FHIR, exporting settings to set up a storage account, and moving data to Azure Synapse, see
+>[!div class="nextstepaction"]
+>[Use $import](import-data.md)
+
>[!div class="nextstepaction"]
>[Converting your data to FHIR](convert-data.md)
healthcare-apis Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/import-data.md
Title: Executing the import by invoking $import operation on FHIR service in Azure Health Data Services
-description: This article describes how to import FHIR data using $import
+description: This article describes how to import FHIR data using $import.
Previously updated : 04/16/2022 Last updated : 04/20/2022
-# Bulk import FHIR data (Preview)
+# Bulk-import FHIR data (Preview)
-The Bulk import feature enables importing FHIR data to the FHIR server at high throughput using the $import operation. This feature is suitable for initial data load into the FHIR server.
+The bulk-import feature enables importing FHIR data to the FHIR server at high throughput using the $import operation. This feature is suitable for initial data load into the FHIR server.
+
+> [!NOTE]
+> You must have the **FHIR Data Contributor** role on the FHIR server to use $import.
## Current limitations
The Bulk import feature enables importing FHIR data to the FHIR server at high t
## Using $import operation
-To use $import, you'll need to configure the FHIR server using the instructions in the [Configure bulk import settings](configure-import-data.md) article and set the **initialImportMode** to *true*. Doing so also suspends write operations (POST and PUT) on the FHIR server. You should set the **initialImportMode** to *false* to reenable write operations after you have finished importing your data.
+To use $import, you'll need to configure the FHIR server using the instructions in the [Configure bulk-import settings](configure-import-data.md) article and set the **initialImportMode** to *true*. Doing so also suspends write operations (POST and PUT) on the FHIR server. You should set the **initialImportMode** to *false* to reenable write operations after you have finished importing your data.
The FHIR data to be imported must be stored in resource-specific files in FHIR NDJSON format on the Azure blob store. All the resources in a file must be of the same type. You may have multiple files per resource type.
The FHIR data to be imported must be stored in resource specific files in FHIR N
Make a ```POST``` call to ```<<FHIR service base URL>>/$import``` with the following required headers and body, which contains a FHIR [Parameters](http://hl7.org/fhir/parameters.html) resource.
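The Parameters payload described above can be sketched as follows. This is a minimal illustration: the parameter names (`inputFormat`, `mode`, `input`) and the `Prefer: respond-async` header follow the shape commonly used by the $import operation, and the storage URL is a placeholder — verify everything against the request details in this article before relying on it.

```python
import json

# Hedged sketch of a $import request payload; parameter names and the
# storage URL below are illustrative, not authoritative.
import_body = {
    "resourceType": "Parameters",
    "parameter": [
        {"name": "inputFormat", "valueString": "application/fhir+ndjson"},
        {"name": "mode", "valueString": "InitialLoad"},
        {
            "name": "input",
            "part": [
                {"name": "type", "valueString": "Patient"},
                # Hypothetical NDJSON blob URL for illustration only.
                {"name": "url", "valueUri": "https://example.blob.core.windows.net/fhirimport/Patient.ndjson"},
            ],
        },
    ],
}

# Headers for the asynchronous call (hedged; confirm against the
# request-header section of this article).
headers = {
    "Prefer": "respond-async",
    "Content-Type": "application/fhir+json",
}

payload = json.dumps(import_body)
```

One `input` entry per resource-type file keeps each file homogeneous, matching the requirement that all resources in a file share one type.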
-As `$import` is an async operation, a **callback** link will be returned in the `Content-location` header of the response together with ```202-Accepted``` status code. You can use this callback link to check import status.
+An empty response body with a **callback** link will be returned in the `Content-location` header of the response, together with a ```202 Accepted``` status code. You can use this callback link to check the import status.
#### Request Header
Below are some of the important fields in the response body:
| Field | Description |
| -- | -- |
-|transactionTime|Start time of the bulk import operation.|
+|transactionTime|Start time of the bulk-import operation.|
|output.count|Count of resources that were successfully imported|
|error.count|Count of resources that weren't imported due to some error|
|error.url|URL of the file containing details of the error. Each error.url is unique to an input URL.|
Below are some of the important fields in the response body:
  ]
}
```
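Because $import runs asynchronously, a client typically polls the callback link from the `Content-location` header until it stops returning `202 Accepted`. A minimal sketch of that loop, with the HTTP call injected as a plain function so the logic is illustrative rather than tied to any particular client library:

```python
import time

def wait_for_import(callback_url, fetch, interval_s=1.0, max_polls=10):
    """Poll the $import callback URL until the operation completes.

    `fetch` is a stand-in for an HTTP GET returning (status_code, body);
    swap in your real client. Status handling is hedged: 202 means the
    import is still running, 200 means done, anything else is a failure.
    """
    for _ in range(max_polls):
        status, body = fetch(callback_url)
        if status == 200:
            return body                      # import finished
        if status != 202:
            raise RuntimeError(f"import failed with HTTP {status}")
        time.sleep(interval_s)               # still running; wait and retry
    raise TimeoutError("import did not finish within max_polls")
```

On success, the returned body is the response shown above, including `output.count` and any `error.url` entries to inspect.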
+## Troubleshooting
+
+Below are some error codes you may encounter and the solutions to help you resolve them.
+
+### 200 OK, but there's an error with the URL in the response
+
+**Behavior:** Import operation succeeds and returns ```200 OK```. However, `error.url` entries are present in the response body. Files at the `error.url` locations contain JSON fragments like the example below:
+
+```json
+{
+ "resourceType": "OperationOutcome",
+ "issue": [
+ {
+ "severity": "error",
+ "details": {
+ "text": "Given conditional reference '{0}' does not resolve to a resource."
+ },
+ "diagnostics": "Failed to process resource at line: {1}"
+ }
+ ]
+}
+```
+
+**Cause:** NDJSON files contain resources with conditional references, which are currently not supported by $import.
+
+**Solution:** Replace the conditional references with normal references in the NDJSON files.
+
+### 400 Bad Request
+
+**Behavior:** Import operation failed and ```400 Bad Request``` is returned. The response body has the following content:
+
+```json
+{
+ "resourceType": "OperationOutcome",
+ "id": "13876ec9-3170-4525-87ec-9e165052d70d",
+ "issue": [
+ {
+ "severity": "error",
+ "code": "processing",
+ "diagnostics": "import operation failed for reason: No such host is known. (example.blob.core.windows.net:443)"
+ }
+ ]
+}
+```
+
+**Solution:** Verify that the link to the Azure storage account is correct. Check the network and firewall settings to make sure that the FHIR server is able to access the storage. If your service is in a VNet, ensure that the storage is in the same VNet or in a VNet that's peered with the FHIR service VNet.
+
+### 403 Forbidden
+
+**Behavior:** Import operation failed and ```403 Forbidden``` is returned. The response body has the following content:
+
+```json
+{
+ "resourceType": "OperationOutcome",
+ "id": "bd545acc-af5d-42d5-82c3-280459125033",
+ "issue": [
+ {
+ "severity": "error",
+ "code": "processing",
+ "diagnostics": "import operation failed for reason: Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature."
+ }
+ ]
+}
+```
+
+**Cause:** The FHIR service uses managed identity to authenticate to the source storage account. This error may be caused by a missing or incorrect role assignment.
+
+**Solution:** Assign the _Storage Blob Data Contributor_ role to the FHIR server by following [the RBAC guide](https://docs.microsoft.com/azure/role-based-access-control/role-assignments-portal?tabs=current).
+
+### 500 Internal Server Error
+
+**Behavior:** Import operation failed and ```500 Internal Server Error``` is returned. The response body has the following content:
+
+```json
+{
+ "resourceType": "OperationOutcome",
+ "id": "0d0f007d-9e8e-444e-89ed-7458377d7889",
+ "issue": [
+ {
+ "severity": "error",
+ "code": "processing",
+ "diagnostics": "import operation failed for reason: The database '****' has reached its size quota. Partition or delete data, drop indexes, or consult the documentation for possible resolutions."
+ }
+ ]
+}
+```
+
+**Cause:** You've reached the storage limit of the FHIR service.
+
+**Solution:** Reduce the size of your data or consider Azure API for FHIR, which has a higher storage limit.
## Next steps
healthcare-apis Register Application Cli Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/register-application-cli-rest.md
To complete the application registration process, you'll need to create a servic
```
###Create an AAD service principal
-spid=(az ad sp create --id $clientid --query objectId --output tsv)
+spid=$(az ad sp create --id $clientid --query objectId --output tsv)
###Look up a service principal
spid=$(az ad sp show --id $clientid --query objectId --output tsv)
```
iot-central Concepts Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-architecture.md
In an IoT Central application, you can view and analyze data for individual devi
### Secure your solution
-In an IoT Central application you can manage the following security aspects of your solution:
+In IoT Central, you can configure and manage security in the following areas:
-- [Device authentication](concepts-device-authentication.md): Create, revoke, and update the security keys that your devices use to establish a connection to your application.
-- [App integrations](howto-authorize-rest-api.md#get-an-api-token): Create, revoke, and update the security keys that other applications use to establish secure connections with your application.
-- [Data export](howto-export-data.md#connection-options): Use managed identities to secure the connection to your data export destinations.
-- [User management](howto-manage-users-roles.md): Manage the users that can sign in to the application and the roles that determine what permissions those users have.
-- [Organizations](howto-create-organizations.md): Define a hierarchy to manage which users can see which devices in your IoT Central application.
+- User access to your application.
+- Device access to your application.
+- Programmatic access to your application.
+- Authentication to other services from your application.
+
+To learn more, see the [IoT Central security guide](overview-iot-central-security.md).
## Devices
iot-central Howto Authorize Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-authorize-rest-api.md
This article describes the types of token you can use in the authorization heade
## Token types
-You'll want to use the user bearer token when you're doing some automation/testing/API calls yourself; you'll want to use the SPN bearer token when you're automating/scripting your development environment (i.e. devops). The API token can be used for both cases, but has the risk of expiry and leaks, so we recommend using bearer whenever possible. Does that make sense?
To access an IoT Central application using the REST API, you can use an:

- _Azure Active Directory bearer token_. A bearer token is associated with an Azure Active Directory user account or service principal. The token grants the caller the same permissions the user or service principal has in the IoT Central application.
- IoT Central API token. An API token is associated with a role in your IoT Central application.
+Use a bearer token associated with your user account while you're developing and testing automation and scripts that use the REST API. Use a bearer token that's associated with a service principal for production automation and scripts. Use a bearer token in preference to an API token to reduce the risk of leaks and problems when tokens expire.
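The two authorization header shapes can be sketched with a small helper. The helper name and token strings here are hypothetical placeholders, not real credentials or an official API:

```python
def auth_header(token: str, is_api_token: bool = False) -> dict:
    """Build the Authorization header for an IoT Central REST call.

    Hypothetical helper: API tokens are sent as-is (they already carry
    the "SharedAccessSignature sr=..." prefix when generated), while
    Azure AD tokens use the standard Bearer scheme.
    """
    if is_api_token:
        return {"Authorization": token}
    return {"Authorization": f"Bearer {token}"}

# Placeholder token values for illustration only.
bearer_headers = auth_header("eyJ0eXAiOi...", is_api_token=False)
api_headers = auth_header("SharedAccessSignature sr=...&sig=...", is_api_token=True)
```

In line with the guidance above, prefer the bearer form for scripts tied to a user account or service principal, and reserve API tokens for cases where neither is practical.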
+
To learn more about users and roles in IoT Central, see [Manage users and roles in your IoT Central application](howto-manage-users-roles.md).

## Get a bearer token
iot-central Overview Iot Central Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-admin.md
An IoT Central application lets you monitor and manage millions of devices throu
IoT Central application administration includes the following tasks:

- Create applications
-- Manage users and roles in the application.
-- Create and manage organizations.
-- Manage security such as device authentication.
+- Manage security
- Configure application settings.
- Upgrade applications.
- Export and share applications.
Azure IoT Central is an industry agnostic application platform. Application temp
To learn more, see [Create a retail application](../retail/tutorial-in-store-analytics-create-app.md) as an example.
-## Users and roles
+## Manage security
-IoT Central uses a role-based access control system to manage user permissions within an application. An administrator is responsible for adding users to an application and assigning them to roles. IoT Central has three built-in roles for app administrators, app builders, and app operators. An administrator can create custom roles with specific sets of permissions.
+In IoT Central, you can configure and manage security in the following areas:
-To learn more, see [Manage users and roles in your IoT Central application](howto-manage-users-roles.md).
+- User access to your application.
+- Device access to your application.
+- Programmatic access to your application.
+- Authentication to other services from your application.
-## Organizations
-
-To manage which users see which devices in your IoT Central application, use an _organization_ hierarchy. When you define an organization in your application, there are three new built-in roles: _organization administrators_, _organization operators_ and _organization viewers_. The user's role in application determines their permissions over the devices they can see.
-
-To learn more, see [Create an IoT Central organization](howto-create-organizations.md).
-
-## Application security
-
-Devices that connect to your IoT Central application typically use X.509 certificates or shared access signatures (SAS) as credentials. An administrator manages the group certificates or keys that these device credentials are derived from. To learn more, see:
-- [X.509 group enrollment](concepts-device-authentication.md#x509-enrollment-group)
-- [SAS group enrollment](concepts-device-authentication.md#sas-enrollment-group)
-- [How to roll X.509 device certificates](how-to-connect-devices-x509.md).
-
-An administrator can also create and manage the API tokens that a client application uses to authenticate with your IoT Central application. Client applications use the REST API to interact with IoT Central. To learn more, see:
-- [Get an API token](howto-authorize-rest-api.md#get-an-api-token)
-
-For data exports, an administrator can configure [managed identities](../../active-directory/managed-identities-azure-resources/overview.md) to secure the connections to the [export destinations](howto-export-data.md). To learn more, see:
--- [Configure a managed identity](howto-manage-iot-central-from-portal.md#configure-a-managed-identity)
+To learn more, see the [IoT Central security guide](overview-iot-central-security.md).
## Configure an application
Many of the tools you use as an administrator are available in the **Security**
## Next steps
-Now that you've learned about how to administer your Azure IoT Central application, the suggested next step is to learn about [Manage users and roles](howto-manage-users-roles.md) in Azure IoT Central.
+Now that you've learned about how to administer your Azure IoT Central application, the suggested next step is to learn about [Security in Azure IoT Central](overview-iot-central-security.md).
iot-central Overview Iot Central Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-security.md
+
+ Title: Azure IoT Central application security guide
+description: Azure IoT Central is an IoT application platform that simplifies the creation of IoT solutions. This guide describes how to secure your IoT Central application. IoT Central security includes users, devices, API access, and authentication to other services for data export.
++ Last updated : 04/12/2022+++++
+# This article applies to administrators.
++
+# IoT Central security guide
+
+An IoT Central application lets you monitor and manage millions of devices throughout their life cycle. This guide is for administrators who manage security in IoT Central applications.
+
+In IoT Central, you can configure and manage security in the following areas:
+
+- User access to your application.
+- Device access to your application.
+- Programmatic access to your application.
+- Authentication to other services from your application.
+
+## Manage user access
+
+Every user must have a user account before they can sign in and access an IoT Central application. IoT Central currently supports Microsoft accounts and Azure Active Directory accounts, but not Azure Active Directory groups.
+
+*Roles* enable you to control who within your organization is allowed to do various tasks in IoT Central. Each role has a specific set of permissions that determine what a user in the role can see and do in the application. There are three built-in roles you can assign to users of your application. You can also create custom roles with specific permissions if you require finer-grained control.
+
+*Organizations* let you define a hierarchy that you use to manage which users can see which devices in your IoT Central application. The user's role determines their permissions over the devices they see, and the experiences they can access. Use organizations to implement a multi-tenanted application.
+
+To learn more, see:
+
+- [Manage users and roles in your IoT Central application](howto-manage-users-roles.md)
+- [Manage IoT Central organizations](howto-create-organizations.md)
+- [How to use the IoT Central REST API to manage users and roles](howto-manage-users-roles-with-rest-api.md)
+- [How to use the IoT Central REST API to manage organizations](howto-manage-organizations-with-rest-api.md)
+
+## Manage device access
+
+Devices authenticate with the IoT Central application by using either a *shared access signature (SAS) token* or an *X.509 certificate*. X.509 certificates are recommended in production environments.
+
+In IoT Central, you use *device connection groups* to manage the device authentication options in your IoT Central application.
+
+To learn more, see:
+
+- [Device authentication concepts in IoT Central](concepts-device-authentication.md)
+- [How to connect devices with X.509 certificates to an IoT Central application](how-to-connect-devices-x509.md)
+
+### Network controls for device access
+
+By default, devices connect to IoT Central over the public internet. For more security, connect your devices to your IoT Central application by using a *private endpoint* in an Azure Virtual Network.
+
+Private endpoints use private IP addresses from a virtual network address space to connect your devices privately to your IoT Central application. Network traffic between devices on the virtual network and the IoT platform traverses the virtual network and a private link on the Microsoft backbone network, eliminating exposure on the public internet.
+
+To learn more, see [Network security for IoT Central using private endpoints](concepts-private-endpoints.md).
+
+## Manage programmatic access
+
+The IoT Central REST API lets you develop client applications that integrate with IoT Central applications. Use the REST API to work with resources in your IoT Central application such as device templates, devices, jobs, users, and roles.
+
+Every IoT Central REST API call requires an authorization header that IoT Central uses to determine the identity of the caller and the permissions that caller is granted within the application.
+
+To access an IoT Central application using the REST API, you can use an:
+
+- *Azure Active Directory bearer token*. A bearer token is associated with either an Azure Active Directory user account or a service principal. The token grants the caller the same permissions the user or service principal has in the IoT Central application.
+- IoT Central API token. An API token is associated with a role in your IoT Central application.
+
+To learn more, see [How to authenticate and authorize IoT Central REST API calls](howto-authorize-rest-api.md).
+
+## Authenticate to other services
+
+When you configure a continuous data export from your IoT Central application to Azure Blob storage, Azure Service Bus, or Azure Event Hubs, you can use either a connection string or a managed identity to authenticate. When you configure a continuous data export from your IoT Central application to Azure Data Explorer, you can use either a service principal or a managed identity to authenticate.
+
+Managed identities are more secure because:
+
+- You don't store the credentials for your resource in a connection string in your IoT Central application.
+- The credentials are automatically tied to the lifetime of your IoT Central application.
+- Managed identities automatically rotate their security keys regularly.
+
+To learn more, see:
+
+- [Export IoT data to cloud destinations using data export](howto-export-data.md)
+- [Configure a managed identity in the Azure portal](howto-manage-iot-central-from-portal.md#configure-a-managed-identity)
+- [Configure a managed identity using the Azure CLI](howto-manage-iot-central-from-cli.md#configure-a-managed-identity)
+
+## Next steps
+
+Now that you've learned about security in your Azure IoT Central application, the suggested next step is to learn about [Manage users and roles](howto-manage-users-roles.md) in Azure IoT Central.
iot-dps Monitor Iot Dps Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/monitor-iot-dps-reference.md
For more information on what metric dimensions are, see [Multi-dimensional metri
This section lists the types of resource logs you can collect for DPS.
-Resource Provider and Type: [Microsoft.Devices/provisioningServices](/azure/azure-monitor/essentials/resource-logs-categories#microsoftdevicesprovisioningservices).
+Resource Provider and Type: [Microsoft.Devices/provisioningServices](../azure-monitor/essentials/resource-logs-categories.md#microsoftdevicesprovisioningservices).
| Category | Description | |:||
For a reference of all Azure Monitor Logs / Log Analytics tables, see the [Azure
## Activity log
-For more information on the schema of Activity Log entries, see [Activity Log schema](/azure/azure-monitor/essentials/activity-log-schema).
+For more information on the schema of Activity Log entries, see [Activity Log schema](../azure-monitor/essentials/activity-log-schema.md).
## See Also
iot-dps Monitor Iot Dps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/monitor-iot-dps.md
Last updated 04/15/2022
When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation.
-This article describes the monitoring data generated by Azure IoT Hub Device Provisioning Service (DPS). DPS uses [Azure Monitor](/azure/azure-monitor/overview). If you are unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource).
+This article describes the monitoring data generated by Azure IoT Hub Device Provisioning Service (DPS). DPS uses [Azure Monitor](../azure-monitor/overview.md). If you are unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).
## Monitoring data
-DPS collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](/azure/azure-monitor/essentials/monitor-azure-resource#monitoring-data-from-Azure-resources).
+DPS collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data-from-azure-resources).
See [Monitoring Azure IoT Hub Device Provisioning Service data reference](monitor-iot-dps-reference.md) for detailed information on the metrics and logs created by DPS.
The metrics and logs you can collect are discussed in the following sections.
## Analyzing metrics
-You can analyze metrics for DPS with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](/azure/azure-monitor/essentials/metrics-getting-started) for details on using this tool.
+You can analyze metrics for DPS with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool.
In Azure portal, you can select **Metrics** under **Monitoring** on the left-pane of your DPS instance to open metrics explorer scoped, by default, to the platform metrics emitted by your instance:
In Azure portal, you can select **Metrics** under **Monitoring** on the left-pan
For a list of the platform metrics collected for DPS, see [Metrics in the Monitoring Azure IoT Hub Device Provisioning Service data reference](monitor-iot-dps-reference.md#metrics).
-For reference, you can see a list of [all resource metrics supported in Azure Monitor](/azure/azure-monitor/essentials/metrics-supported).
+For reference, you can see a list of [all resource metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md).
## Analyzing logs
In Azure portal, you can select **Logs** under **Monitoring** on the left-pane o
:::image type="content" source="media/monitor-iot-dps/logs-portal.png" alt-text="Logs page for a DPS instance.":::

> [!IMPORTANT]
-> When you select **Logs** from the DPS menu, Log Analytics is opened with the query scope set to the current DPS instance. This means that log queries will only include data from that resource. If you want to run a query that includes data from other DPS instances or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](/azure/azure-monitor/logs/scope) for details.
+> When you select **Logs** from the DPS menu, Log Analytics is opened with the query scope set to the current DPS instance. This means that log queries will only include data from that resource. If you want to run a query that includes data from other DPS instances or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](../azure-monitor/logs/scope.md) for details.
Run queries against the **AzureDiagnostics** table to see the resource logs collected for the diagnostic settings you've created for your DPS instance.
Run queries against the **AzureDiagnostics** table to see the resource logs coll
AzureDiagnostics ```
-All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](/azure/azure-monitor/essentials/resource-logs-schema) The schema for DPS resource logs is found in [Resource logs in the Monitoring Azure IoT Hub Device Provisioning Service data reference](monitor-iot-dps-reference.md#resource-logs).
+All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../azure-monitor/essentials/resource-logs-schema.md). The schema for DPS resource logs is found in [Resource logs in the Monitoring Azure IoT Hub Device Provisioning Service data reference](monitor-iot-dps-reference.md#resource-logs).
-The [Activity log](/azure/azure-monitor/essentials/activity-log) is a type of platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
+The [Activity log](../azure-monitor/essentials/activity-log.md) is a type of platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
For a list of the types of resource logs collected for DPS, see [Resource logs in the Monitoring Azure IoT Hub Device Provisioning Service data reference](monitor-iot-dps-reference.md#resource-logs).
For a list of the tables used by Azure Monitor Logs and queryable by Log Analyti
## Alerts
-Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](/azure/azure-monitor/alerts/alerts-metric-overview), [logs](/azure/azure-monitor/alerts/alerts-unified-log), and the [activity log](/azure/azure-monitor/alerts/activity-log-alerts). Different types of alerts have benefits and drawbacks.
+Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](../azure-monitor/alerts/alerts-metric-overview.md), [logs](../azure-monitor/alerts/alerts-unified-log.md), and the [activity log](../azure-monitor/alerts/activity-log-alerts.md). Different types of alerts have benefits and drawbacks.
## Next steps

- See [Monitoring Azure IoT Hub Device Provisioning Service data reference](monitor-iot-dps-reference.md) for a reference of the metrics, logs, and other important values created by DPS.
-- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
+- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
iot-edge How To Collect And Transport Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-collect-and-transport-metrics.md
You can remotely monitor your IoT Edge fleet using Azure Monitor and built-in me
> [!VIDEO https://aka.ms/docs/player?id=94a7d988-4a35-4590-9dd8-a511cdd68bee]
-<a href="https://aka.ms/docs/player?id=94a7d988-4a35-4590-9dd8-a511cdd68bee" target="_blank">IoT Edge integration with Azure Monitor</a>(4:06)
+<a href="/_themes/docs.theme/master/_themes/global/video-embed.html?id=94a7d988-4a35-4590-9dd8-a511cdd68bee" target="_blank">IoT Edge integration with Azure Monitor</a> (4:06)
## Architecture
To view the metrics from your IoT Edge device in your IoT Central application:
## Next steps
-Explore the types of [curated visualizations](how-to-explore-curated-visualizations.md) that Azure Monitor enables.
+Explore the types of [curated visualizations](how-to-explore-curated-visualizations.md) that Azure Monitor enables.
iot-edge How To Configure Iot Edge For Linux On Windows Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-configure-iot-edge-for-linux-on-windows-networking.md
+
+ Title: Networking for Azure IoT Edge for Linux on Windows | Microsoft Docs
+description: Learn about how to configure custom networking for Azure IoT Edge for Linux on Windows virtual machine.
+++ Last updated : 03/21/2022+++++
+# Networking configuration for Azure IoT Edge for Linux on Windows
++
+This article will help you decide which networking option is best for your scenario and give you insights into IoT Edge for Linux on Windows (EFLOW) configuration requirements.
+
+To connect the IoT Edge for Linux on Windows (EFLOW) virtual machine over a network to your host, to other virtual machines on your Windows host, and to other devices/locations on an external network, the virtual machine networking must be configured accordingly.
+
+The easiest way to establish basic networking on Windows Client SKUs is by using the **default switch**, which is already created when enabling the Windows Hyper-V feature. However, on Windows Server SKU devices, networking is a bit more complicated, as there's no **default switch** available. For more information about virtual switch creation for Windows Server, see [Create virtual switch for Linux on Windows](./how-to-create-virtual-switch.md).
+
+For more information about EFLOW networking concepts, see [IoT Edge for Linux on Windows networking](./nested-virtualization.md).
+
+## Configure VM virtual switch
+
+The first step before deploying the EFLOW virtual machine, is to determine which type of virtual switch you'll use. For more information about EFLOW supported virtual switches, see [EFLOW virtual switch choices](./iot-edge-for-linux-on-windows-networking.md). Once you determine the type of virtual switch that you want to use, make sure to create the virtual switch correctly. For more information about virtual switch creation, see [Create a virtual switch for Hyper-V virtual machines](/windows-server/virtualization/hyper-v/get-started/create-a-virtual-switch-for-hyper-v-virtual-machines).
+
+>[!NOTE]
+> If you're using Windows client and you want to use the **default switch**, then no switch creation is needed and no `-vSwitchType` and `-vSwitchName` parameters are needed.
+
+>[!NOTE]
+> If you're using a Windows virtual machine inside VMware infrastructure and **external switch**, please see [EFLOW nested virtualization](./nested-virtualization.md).
+
+After creating the virtual switch and before starting your deployment, make sure that your virtual switch name and type is correctly set up and is listed under the Windows host OS. To list all the virtual switches in your Windows host OS, in an elevated PowerShell session, use the following PowerShell cmdlet:
+
+```powershell
+Get-VmSwitch
+```
+Depending on the virtual switches of the Windows host, the output should be similar to the following:
+
+```output
+Name           SwitchType NetAdapterInterfaceDescription
+----           ---------- ------------------------------
+Default Switch Internal
+IntOff         Internal
+EFLOW-Ext      External
+```
+
+To use a specific virtual switch (**internal** or **external**), make sure you specify the correct parameters: `-vSwitchName` and `-vSwitchType`. For example, if you want to deploy the EFLOW VM with an **external switch** named **EFLOW-Ext**, then in an elevated PowerShell session use the following command:
+
+```powershell
+Deploy-EflowVm -vSwitchType "External" -vSwitchName "EFLOW-Ext"
+```
++
+## Configure VM IP address allocation
+
+The second step after deciding the type of virtual switch you're using is to determine the type of IP address allocation of the virtual switch. For more information about IP allocation options, see [EFLOW supported IP allocations](./iot-edge-for-linux-on-windows-networking.md). Depending on the type of virtual switch you're using, make sure to use a supported IP address allocation mechanism.
+
+By default, if no **static IP** address is set up, the EFLOW VM will try to allocate an IP address on the virtual switch using **DHCP**. Make sure that there's a DHCP server on the virtual switch network; if one isn't available, the EFLOW VM won't be able to obtain an IP address and the installation will fail. If you're using the **default switch**, there's no need to check for a DHCP server, as the virtual switch provides DHCP by default. However, if you're using an **internal** or **external** virtual switch, you can check using the following steps:
+
+1. Open a command prompt.
+1. Display all the IP configuration information:
+ ```cmd
+ ipconfig /all
+ ```
+1. If you're using an **external** virtual switch, check the network interface used for creating the virtual switch. If you're using an **internal** virtual switch, just look for the name used for the switch. Once the switch is located, check if `DHCP Enabled` says **Yes** or **No**, and check the `DHCP server` address.
+
+If you're using a **static IP**, you'll have to specify three parameters during EFLOW deployment: `-ip4Address`, `-ip4GatewayAddress`, and `-ip4PrefixLength`. If one parameter is missing or incorrect, the EFLOW VM won't be able to allocate an IP address and the installation will fail. For more information about EFLOW VM deployment, see [PowerShell functions for IoT Edge for Linux on Windows](./reference-iot-edge-for-linux-on-windows-functions.md#deploy-eflow). For example, if you want to deploy the EFLOW VM with an **external switch** named **EFLOW-Ext** and a static IP configuration, with an IP address of **192.168.0.2**, a gateway IP address of **192.168.0.1**, and an IP prefix length of **24**, then in an elevated PowerShell session use the following command:
+
+```powershell
+Deploy-EflowVm -vSwitchType "External" -vSwitchName "EFLOW-Ext" -ip4Address "192.168.0.2" -ip4GatewayAddress "192.168.0.1" -ip4PrefixLength "24"
+```
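A quick way to sanity-check a static IP configuration before deploying is to confirm that the gateway address falls inside the subnet implied by the IP address and prefix length. The following is a minimal sketch using Python's standard `ipaddress` module, with the example values above (the helper name `validate_static_ip` is illustrative, not part of EFLOW):

```python
import ipaddress

def validate_static_ip(ip, gateway, prefix_len):
    """Return True if ip and gateway land in the same subnet for prefix_len."""
    network = ipaddress.ip_network(f"{ip}/{prefix_len}", strict=False)
    return ipaddress.ip_address(gateway) in network

print(validate_static_ip("192.168.0.2", "192.168.0.1", 24))  # True
print(validate_static_ip("192.168.0.2", "192.168.1.1", 24))  # False
```

If the check returns `False`, the gateway isn't reachable from the subnet the VM would be placed in, and deployment would fail.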
+
+>[!TIP]
+> The EFLOW VM will keep the same MAC address for the main (used during deployment) virtual switch across reboots. If you are using DHCP MAC address reservation, you can get the main virtual switch MAC address using the PowerShell cmdlet: `Get-EflowVmAddr`.
+
+### Check IP allocation
+There are multiple ways to check the IP address that was allocated to the EFLOW VM. First, in an elevated PowerShell session, use the EFLOW cmdlet `Get-EflowVmAddr`. The output should be similar to the following:
+
+```output
+C:\> Get-EflowVmAddr
+
+[03/31/2022 12:54:31] Querying IP and MAC addresses from virtual machine (DESKTOP-EFLOW)
+
+ - Virtual machine MAC: 00:15:5d:4e:15:2c
+ - Virtual machine IP : 172.27.120.111 retrieved directly from virtual machine
+00:15:5d:4e:15:2c
+172.27.120.111
+```
+
+Another way is to use the `Connect-EflowVm` cmdlet to remote into the VM, then run the `ifconfig eth0` bash command and check the *eth0* interface. The output should be similar to the following:
+
+```output
+eth0 Link encap:Ethernet HWaddr 00:15:5d:4e:15:2c
+ inet addr:172.27.120.111 Bcast:172.27.127.255 Mask:255.255.240.0
+ inet6 addr: fe80::215:5dff:fe4e:152c/64 Scope:Link
+ UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
+ RX packets:5636 errors:0 dropped:0 overruns:0 frame:0
+ TX packets:2214 errors:0 dropped:0 overruns:0 carrier:0
+ collisions:0 txqueuelen:1000
+ RX bytes:766832 (766.8 KB) TX bytes:427274 (427.2 KB)
+```
+
+## Configure VM DNS servers
+
+By default, the EFLOW virtual machine has no DNS configuration. Deployments using **DHCP** will try to obtain the DNS configuration propagated by the DHCP server. If you're using a **static IP**, the DNS server needs to be set up manually. For more information about EFLOW VM DNS, see [EFLOW DNS configuration](./iot-edge-for-linux-on-windows-networking.md).
+
+To check the DNS servers used by the default interface (*eth0*), you can use the following command:
+
+```bash
+resolvectl | grep eth0 -A 8
+```
+
+The output should be similar to the following. Check the IP addresses in the "Current DNS Server" and "DNS Servers" fields. If there's no IP address, or the IP address isn't a valid DNS server address, the DNS service won't work.
+
+```output
+Link 2 (eth0)
+ Current Scopes: DNS
+ LLMNR setting: yes
+MulticastDNS setting: no
+ DNSOverTLS setting: no
+ DNSSEC setting: no
+ DNSSEC supported: no
+ Current DNS Server: 172.27.112.1
+ DNS Servers: 172.27.112.1
+```
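When scripting this check, you can at least confirm that a reported DNS server entry parses as an IP address. Below is a small sketch with Python's standard `ipaddress` module (it checks syntax only, not whether the server actually answers queries; `is_valid_ip` is an illustrative helper):

```python
import ipaddress

def is_valid_ip(candidate):
    """True if candidate parses as an IPv4 or IPv6 address."""
    try:
        ipaddress.ip_address(candidate)
        return True
    except ValueError:
        return False

print(is_valid_ip("172.27.112.1"))  # True
print(is_valid_ip("not-a-dns"))     # False
```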
+
+If you need to manually set up the DNS server addresses, you can use the EFLOW PowerShell cmdlet `Set-EflowVmDNSServers`. For more information about EFLOW VM DNS configuration, see [PowerShell functions for IoT Edge for Linux on Windows](./reference-iot-edge-for-linux-on-windows-functions.md#set-eflowvmdnsservers).
+
+### Check DNS resolution
+There are multiple ways to check the DNS resolution.
+
+First, from inside the EFLOW VM, use the `resolvectl query` command to query a specific URL. For example, to check if name resolution is working for the address _microsoft.com_, use the `resolvectl query microsoft.com` command. The output should be similar to the following:
+
+```output
+PS C:\> resolvectl query microsoft.com
+microsoft.com: 40.112.72.205
+ 40.113.200.201
+ 13.77.161.179
+ 104.215.148.63
+ 40.76.4.15
+
+-- Information acquired via protocol DNS in 1.9ms.
+-- Data is authenticated: no
+```
+
+Another way is to use the `dig` command to query a specific URL. For example, to check if name resolution is working for the address _microsoft.com_, use the `dig microsoft.com` command. The output should be similar to the following:
+
+```output
+PS C:\> dig microsoft.com
+; <<>> DiG 9.16.22 <<>> microsoft.com
+;; global options: +cmd
+;; Got answer:
+;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 36427
+;; flags: qr rd ra; QUERY: 1, ANSWER: 5, AUTHORITY: 0, ADDITIONAL: 1
+
+;; OPT PSEUDOSECTION:
+; EDNS: version: 0, flags:; udp: 65494
+;; QUESTION SECTION:
+;microsoft.com. IN A
+
+;; ANSWER SECTION:
+microsoft.com. 0 IN A 40.112.72.205
+microsoft.com. 0 IN A 40.113.200.201
+microsoft.com. 0 IN A 13.77.161.179
+microsoft.com. 0 IN A 104.215.148.63
+microsoft.com. 0 IN A 40.76.4.15
+
+;; Query time: 11 msec
+;; SERVER: 127.0
+```
+
+## Next steps
+
+Read more about [Azure IoT Edge for Linux on Windows Security](./iot-edge-for-linux-on-windows-security.md).
+
+Stay up-to-date with the latest [IoT Edge for Linux on Windows updates](./iot-edge-for-linux-on-windows.md).
iot-edge How To Publish Subscribe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-publish-subscribe.md
Because the clients are running on the same device as the MQTT broker in the exa
The [Azure IoT Device SDKs](https://github.com/Azure/azure-iot-sdks) already let clients perform IoT Hub operations, but they don't allow publishing or subscribing to user-defined topics. IoT Hub operations can be performed using any MQTT clients using publish and subscribe semantics as long as IoT Hub primitive protocols are respected. The next sections of this guide go through the specifics to illustrate how these protocols work.
-### Send telemetry data to IoT Hub
+### Send messages
-Sending telemetry data to IoT Hub is similar to publishing on a user-defined topic, but using a specific IoT Hub topic:
+Sending telemetry data to IoT Hub, other devices, or other modules is similar to publishing on a user-defined topic, but using a specific IoT Hub topic:
- For a device, telemetry is sent on topic: `devices/<device_name>/messages/events/` - For a module, telemetry is sent on topic: `devices/<device_name>/modules/<module_name>/messages/events/`
-Additionally, create a route such as `FROM /messages/* INTO $upstream` to send telemetry from the IoT Edge MQTT broker to the IoT hub. For more information about routing, see [Declare routes](module-composition.md#declare-routes).
+Additionally, route the message to its destination.
+
+As with all IoT Edge messages, you can create a route such as `FROM /messages/* INTO $upstream` to send telemetry from the IoT Edge MQTT broker to the IoT hub. For more information about routing, see [Declare routes](module-composition.md#declare-routes).
+
+Depending on the routing settings, a route may define an input name, which is attached to the topic when a message is forwarded. Edge Hub (and the original sender) also adds parameters to the message, which are encoded in the topic structure. The following example shows a message routed with the input name "TestInput". This message was sent by a module called "SenderModule", whose name is also encoded in the topic:
+
+`devices/TestEdgeDevice/modules/TestModule/inputs/TestInput/%24.cdid=TestEdgeDevice&%24.cmid=SenderModule`
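To illustrate how those parameters are encoded, the following sketch decodes the property bag at the end of such a topic using Python's standard library (the helper `parse_system_properties` is illustrative, not part of any IoT SDK):

```python
from urllib.parse import unquote

def parse_system_properties(topic):
    """Decode the key=value property bag in the final topic segment."""
    props = {}
    bag = topic.rsplit("/", 1)[-1]            # "%24.cdid=TestEdgeDevice&%24.cmid=SenderModule"
    for pair in bag.split("&"):
        key, _, value = pair.partition("=")
        props[unquote(key)] = unquote(value)  # "%24" decodes back to "$"
    return props

topic = ("devices/TestEdgeDevice/modules/TestModule/inputs/TestInput/"
         "%24.cdid=TestEdgeDevice&%24.cmid=SenderModule")
print(parse_system_properties(topic))
# {'$.cdid': 'TestEdgeDevice', '$.cmid': 'SenderModule'}
```

Here `$.cdid` carries the sending device ID and `$.cmid` the sending module ID, matching the example topic above.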
+
+Modules can also send messages on a specific output name. Output names help when messages from a module need to be routed to different destinations. When a module wants to send a message on a specific output, it sends the message as a regular telemetry message, except that it adds a system property, `$.on`. The `$` sign must be URL-encoded, so it becomes `%24` in the topic name. The following example shows a telemetry message sent with the output name 'alert':
+
+`devices/TestEdgeDevice/modules/TestModule/messages/events/%24.on=alert/`
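The encoding above can be reproduced with a few lines of Python; `urllib.parse.quote` turns `$.on` into `%24.on` (the helper `output_topic` is illustrative, not an SDK function):

```python
from urllib.parse import quote

def output_topic(device_id, module_id, output_name):
    # The '$.on' system property carries the output name; '$' is URL-encoded as %24.
    prop = quote("$.on") + "=" + quote(output_name)
    return f"devices/{device_id}/modules/{module_id}/messages/events/{prop}/"

print(output_topic("TestEdgeDevice", "TestModule", "alert"))
# devices/TestEdgeDevice/modules/TestModule/messages/events/%24.on=alert/
```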
+
+### Receive messages
+
+A telemetry message sent by a device or module can be routed to another module. If a module wants to receive module-to-module (M2M) messages, it first needs to subscribe to the topic that delivers them. The format of the subscription is:
+
+`devices/{device_id}/modules/{module_id}/#`
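The trailing `#` is an MQTT multi-level wildcard, so this one subscription covers every input topic under the module. The following is a minimal sketch of how such a filter matches topics (a simplified matcher for illustration; real MQTT clients implement this for you):

```python
def topic_matches(topic_filter, topic):
    """Minimal MQTT topic-filter match supporting '+' and '#' wildcards."""
    f_parts = topic_filter.split("/")
    t_parts = topic.split("/")
    for i, part in enumerate(f_parts):
        if part == "#":  # '#' matches this level and everything below it
            return True
        if i >= len(t_parts) or (part != "+" and part != t_parts[i]):
            return False
    return len(f_parts) == len(t_parts)

sub = "devices/TestEdgeDevice/modules/TestModule/#"
print(topic_matches(sub, "devices/TestEdgeDevice/modules/TestModule/inputs/TestInput/x"))   # True
print(topic_matches(sub, "devices/TestEdgeDevice/modules/OtherModule/inputs/TestInput/x"))  # False
```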
### Get twin
iot-edge How To Update Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-update-iot-edge.md
If you want to update to the most recent version of IoT Edge, use the following
```bash sudo apt-get install aziot-edge defender-iot-micro-agent-edge ```
-It is recommended to install the micro agent with the Edge agent to enable security monitoring and hardening of your Edge devices. To learn more about Microsoft Defender for IoT, see [What is Microsoft Defender for IoT for device builders](/azure/defender-for-iot/device-builders/overview).
+It is recommended to install the micro agent with the Edge agent to enable security monitoring and hardening of your Edge devices. To learn more about Microsoft Defender for IoT, see [What is Microsoft Defender for IoT for device builders](../defender-for-iot/device-builders/overview.md).
<!-- end 1.2 --> :::moniker-end # [Linux on Windows](#tab/linuxonwindows)
-<!-- 1.1 -->
-
->[!IMPORTANT]
->If you are updating a device from the public preview version of IoT Edge for Linux on Windows to the generally available version, you need to uninstall and reinstall Azure IoT Edge.
->
->To find out if you're currently using the public preview version, navigate to **Settings** > **Apps** on your Windows device. Find **Azure IoT Edge** in the list of apps and features. If your listed version is 1.0.x, you are running the public preview version. Uninstall the app and then [Install and provision IoT Edge for Linux on Windows](how-to-provision-single-device-linux-on-windows-symmetric.md) again. If your listed version is 1.1.x, you are running the generally available version and can receive updates through Microsoft Update.
-
->[!IMPORTANT]
->If you are updating a Windows Server SKU device previous to 1.1.2110.03111 version of IoT Edge for Linux on Windows to the latest available version, you need to do a manual migration.
->
->Update [1.1.2110.0311](https://github.com/Azure/iotedge-eflow/releases/tag/1.1.2110.03111) introduced a change to the VM technology (HCS to VMMS) used for EFLOW Windows Server deployments. You can execute the VM migration with the following steps:
->
-> 1. Using Microsoft Update, download and install the 1.1.2110.03111 update (same as any other EFLOW update, no need for manual steps as long as EFLOW updates are turned on).
-> 2. Once EFLOW update is finished, open an elevated PowerShell session.
-> 3. Run the migration script:
->
-> ```powershell
-> Migrate-EflowVmFromHcsToVmms
-> ```
->
-> Note: Fresh EFLOW 1.1.2110.0311 msi installations on Windows Server SKUs will result in EFLOW deployments using VMMS technology, so no migration is needed.
-
-<!-- end 1.1 -->
-
-<!-- 1.2 -->
-
->[!IMPORTANT]
->This is a Public Preview version of [Azure IoT Edge for Linux on Windows continuous release (EFLOW CR)](./version-history.md), not intended for production use. A clean install may be required for production use once the final General Availability (GA) release is available.
->
->To find out if you're currently using the continuous release version, navigate to **Settings** > **Apps** on your Windows device. Find **Azure IoT Edge** in the list of apps and features. If your listed version is 1.2.x.y, you are running the continuous release version.
-<!-- end 1.2 -->
--
-With IoT Edge for Linux on Windows, IoT Edge runs in a Linux virtual machine hosted on a Windows device. This virtual machine is pre-installed with IoT Edge, and you cannot manually update or change the IoT Edge components. Instead, the virtual machine is managed with Microsoft Update to keep the components up to date automatically.
-
-To find the latest version of Azure IoT Edge for Linux on Windows, see [EFLOW releases](https://aka.ms/AzEFLOW-Releases).
-
-To receive IoT Edge for Linux on Windows updates, the Windows host should be configured to receive updates for other Microsoft products. You can turn this option with the following steps:
-
-1. Open **Settings** on the Windows host.
-
-1. Select **Updates & Security**.
-
-1. Select **Advanced options**.
-
-1. Toggle the *Receive updates for other Microsoft products when you update Windows* button to **On**.
+For information about IoT Edge for Linux on Windows updates, see [EFLOW Updates](./iot-edge-for-linux-on-windows-updates.md).
# [Windows](#tab/windows)
When you're ready, follow these steps to update IoT Edge on your devices:
```bash sudo apt-get install aziot-edge defender-iot-micro-agent-edge ```
-It is recommended to install the micro agent with the Edge agent to enable security monitoring and hardening of your Edge devices. To learn more about Microsoft Defender for IoT, see [What is Microsoft Defender for IoT for device builders](/azure/defender-for-iot/device-builders/overview).
+It is recommended to install the micro agent with the Edge agent to enable security monitoring and hardening of your Edge devices. To learn more about Microsoft Defender for IoT, see [What is Microsoft Defender for IoT for device builders](../defender-for-iot/device-builders/overview.md).
1. Import your old config.yaml file into its new format, and apply the configuration info.
If you're installing IoT Edge, rather than upgrading an existing installation, u
View the latest [Azure IoT Edge releases](https://github.com/Azure/azure-iotedge/releases).
-Stay up-to-date with recent updates and announcements in the [Internet of Things blog](https://azure.microsoft.com/blog/topics/internet-of-things/)
+Stay up-to-date with recent updates and announcements in the [Internet of Things blog](https://azure.microsoft.com/blog/topics/internet-of-things/)
iot-edge Iot Edge For Linux On Windows Benefits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-for-linux-on-windows-benefits.md
+
+ Title: Why use Azure IoT Edge for Linux on Windows? | Microsoft Docs
+description: Benefits - Azure IoT Edge for Linux on Windows
+keywords:
++ Last updated : 04/15/2022+++++
+# Why use Azure IoT Edge for Linux on Windows?
++
+For organizations interested in running business logic and analytics on devices, Azure IoT Edge for Linux on Windows (EFLOW) enables the deployment of production Linux-based cloud-native workloads onto Windows devices. Connecting your devices to Microsoft Azure lets you quickly bring cloud intelligence to your business. At the same time, running workloads on devices allows you to respond quickly in instances with limited connectivity and reduce bandwidth costs.
+
+By bringing the best of Windows and Linux together, EFLOW enables new capabilities while leveraging existing Windows infrastructure and application investments. By running Linux IoT Edge modules on Windows devices, you can do more on a single device, reducing the overhead and cost of separate devices for different applications.
+
+EFLOW doesn't require extensive Linux knowledge and utilizes familiar Windows tools to manage your EFLOW device and workloads. Windows IoT provides trusted enterprise-grade security with established IT admin infrastructure. Lastly, the entire solution is maintained and kept up to date by Microsoft.
+
+## Easily Connect to Azure
+**IoT Edge Built-In**. [Tier 1 Azure IoT Edge support](support.md#operating-systems) is built in to EFLOW for a simplified deployment experience for your cloud workloads.
+
+**Curated Linux VM for Azure**. EFLOW consists of a specially curated Linux VM that runs alongside Windows IoT host OS. This Linux VM is based on [CBL-Mariner Linux](https://github.com/microsoft/CBL-Mariner), and is optimized for hosting IoT Edge workloads.
+
+## Familiar Windows Management
+**Flexible Scripting**. [PowerShell modules](reference-iot-edge-for-linux-on-windows-functions.md) provide the ability to fully script deployments.
+
+**WAC**. [Windows Admin Center EFLOW extension](how-to-provision-single-device-linux-on-windows-symmetric.md#developer-tools) (preview, EFLOW 1.1 only) provides a click-through deployment wizard and remote management experience.
+
+## Production Ready
+**Always Up-to-date**. EFLOW regularly releases feature and security improvements and is reliably updated using Microsoft Update. For more information on EFLOW updates, see [Update IoT Edge for Linux on Windows](./iot-edge-for-linux-on-windows-updates.md).
+
+**Fully Supported Environment.** In an EFLOW solution, the base operating system, the EFLOW Linux environment, and the container runtime are all maintained by Microsoft, meaning there's a single source for all of the components. Each of the three components ([Windows IoT](/windows/iot/iot-enterprise/commercialization/licensing), EFLOW, and [Azure IoT Edge](version-history.md)) has defined servicing mechanisms and support timelines.
+
+## Windows + Linux
+**Interoperability**. With EFLOW, the whole is greater than the sum of its parts. Combining a Windows application and Linux application on the same device unlocks new experiences and scenarios that otherwise wouldn't have been possible. Interoperability and hardware passthrough capabilities built into EFLOW including, [TPM passthrough](how-to-provision-devices-at-scale-linux-on-windows-tpm.md), [HW acceleration](gpu-acceleration.md), [Camera passthrough](https://github.com/Azure/iotedge-eflow/tree/main/samples/camera-over-rtsp), [Serial passthrough](https://github.com/Azure/iotedge-eflow/tree/main/samples/serial), and more, allow you to take advantage of both Linux and Windows environments.
+
+**IoT Edge Marketplace.** EFLOW presents an opportunity for Linux developers to target Windows devices, greatly increasing the potential install base. The Azure Marketplace offers a wide range of enterprise applications and solutions that are certified and optimized to run on Azure, including [Azure IoT Edge and EFLOW](https://azuremarketplace.microsoft.com/marketplace/apps/category/internet-of-things?page=1&subcategories=iot-edge-modules).
iot-edge Iot Edge For Linux On Windows Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-for-linux-on-windows-networking.md
+
+ Title: Azure IoT Edge for Linux on Windows networking
+description: Overview of Azure IoT Edge for Linux on Windows networking
++
++++ Last updated : 03/17/2022+++
+# IoT Edge for Linux on Windows networking
++
+ This article provides information about how to configure the networking between the Windows host OS and the IoT Edge for Linux on Windows (EFLOW) virtual machine. EFLOW uses a [CBL-Mariner](https://github.com/microsoft/CBL-Mariner) Linux virtual machine in order to run IoT Edge modules. For more information about EFLOW architecture, see [What is Azure IoT Edge for Linux on Windows](./iot-edge-for-linux-on-windows.md).
+
+## Networking
+To establish a communication channel between the Windows host OS and the EFLOW virtual machine, EFLOW uses the Hyper-V networking stack. For more information about Hyper-V networking, see [Hyper-V networking basics](/windows-server/virtualization/hyper-v/plan/plan-hyper-v-networking-in-windows-server#hyper-v-networking-basics). Basic networking in EFLOW is simple; it consists of two parts: a virtual switch and a virtual network.
+
+The easiest way to establish basic networking on Windows client SKUs is by using the [**default switch**](/virtualization/community/team-blog/2017/20170726-hyper-v-virtual-machine-gallery-and-networking-improvements#details-about-the-default-switch) already created by the Hyper-V feature. During EFLOW deployment, if no specific virtual switch is specified using the `-vSwitchName` and `-vSwitchType` flags, the virtual machine will be created using the **default switch**.
+
+On Windows Server SKUs devices, networking is a bit more complicated as there's no **default switch** available. However, there's a comprehensive guide on [Azure IoT Edge for Linux on Windows virtual switch creation](./how-to-create-virtual-switch.md).
+
+To handle different types of networking, you can use different types of virtual switches and add multiple virtual network adapters.
+
+### Virtual switch choices
+EFLOW supports two types of Hyper-V virtual switches: **internal** and **external**. You choose the switch type when you create it, before EFLOW deployment. You can use Hyper-V Manager or the Hyper-V module for Windows PowerShell to create and manage virtual switches. For more information about creating a virtual switch, see [Create a virtual switch for Hyper-V virtual machines](/windows-server/virtualization/hyper-v/get-started/create-a-virtual-switch-for-hyper-v-virtual-machines).
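+For example, the following commands, run from an elevated PowerShell session before EFLOW deployment, create an external virtual switch bound to a physical network adapter. The switch and adapter names are illustrative; replace them with values from your environment.
+
+```powershell
+# List the physical network adapters to find one to bind the switch to
+Get-NetAdapter -Physical
+
+# Create an external virtual switch bound to the chosen adapter.
+# -AllowManagementOS keeps the host OS connected through the same adapter.
+New-VMSwitch -Name "ExternalEflow" -NetAdapterName "Ethernet" -AllowManagementOS $true
+```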
+
+You can make some changes to a virtual switch after you create it. For example, it's possible to change an existing switch to a different type, but doing so may affect the networking capabilities of the EFLOW virtual machine connected to that switch. So, changing the virtual switch configuration isn't recommended unless you made a mistake or need to test something.
+
+Depending on whether the EFLOW VM is deployed on a Windows client SKU or Windows Server SKU device, different switch types are supported, as shown in the following table.
+
+| Virtual switch type | Windows client SKUs | Windows Server SKUs |
+| - | -- | -- |
+| **External** | ![External on Client](./media/support/green-check.png) | ![External on Server](./media/support/green-check.png) |
+| **Internal** | - | ![Internal on Server](./media/support/green-check.png) |
+| **Default switch** | ![Default on Client](./media/support/green-check.png) | - |
+
+- **External virtual switch** - Connects to a wired, physical network by binding to a physical network adapter. It gives virtual machines access to a physical network to communicate with devices on an external network. In addition, it allows virtual machines on the same Hyper-V server to communicate with each other.
+- **Internal virtual switch** - Connects to a network that can be used only by the virtual machines running on the host that has the virtual switch, and between the host and the virtual machines.
+
+ >[!NOTE]
+ > The **default switch** is a special internal virtual switch created by default once Hyper-V is enabled on Windows client SKUs. The virtual switch already has a DHCP server for IP assignment, Internet Connection Sharing (ICS) enabled, and a NAT table. For EFLOW purposes, the default switch is an internal virtual switch that can be used without further configuration.
+
+### IP address allocations
+To enable EFLOW VM network IP communications, the virtual machine must have an IP address assigned. This IP address can be configured by two different methods: **Static IP** or **DHCP**.
+
+Depending on the type of virtual switch used, the EFLOW VM supports different IP allocations, as shown in the following table.
+
+| Virtual switch type | Static IP | DHCP |
+| - | -- | -- |
+| **External** | ![External with static IP](./media/support/green-check.png) | ![External with DHCP](./media/support/green-check.png) |
+| **Internal** | ![Internal with static IP](./media/support/green-check.png) | ![Internal with DHCP](./media/support/green-check.png) |
+| **Default switch** | - | ![Default with DHCP](./media/support/green-check.png) |
+
+- **Static IP** - This IP address is permanently assigned to the EFLOW VM during installation and doesn't change across EFLOW VM or Windows host reboots. Static IP addresses come in two versions, IPv4 and IPv6; however, EFLOW only supports static IP for IPv4 addresses. On networks using static IP, each device on the network has its own address with no overlap. During EFLOW installation, you must provide the **EFLOW VM IP4 address** (`-ip4Address`), the **IP4 prefix length** (`-ip4PrefixLength`), and the **default gateway IP4 address** (`-ip4GatewayAddress`). All **three** parameters are required for correct configuration.
+
+ For example, if you want to deploy the EFLOW VM using an *external virtual switch* named *ExternalEflow* with a static IP address *192.168.0.100*, default gateway *192.168.0.1*, and a prefix length of *24*, the following deploy command is needed:
+
+ ```powershell
+ Deploy-Eflow -vSwitchName "ExternalEflow" -vswitchType "External" -ip4Address 192.168.0.100 -ip4GatewayAddress 192.168.0.1 -ip4PrefixLength 24
+ ```
+
+ >[!WARNING]
+ > When using static IP, all **three parameters** (`ip4Address`, `ip4GatewayAddress`, `ip4PrefixLength`) must be provided. If the IP address is invalid or already in use by another device on the network, or if the gateway address is incorrect, EFLOW installation could fail because the EFLOW VM can't get an IP address.
+
+- **DHCP** - Contrary to static IP, when using DHCP the EFLOW virtual machine is assigned a dynamic IP address, which is an address that may change. The network must have a DHCP server configured and operating to assign dynamic IP addresses. The DHCP server assigns a vacant IP address to the EFLOW VM and to the other devices connected to the network. Therefore, when deploying EFLOW using DHCP, no IP address, gateway address, or prefix length is needed; the DHCP server provides all that information.
+
+ >[!WARNING]
+ > When deploying EFLOW using DHCP, a DHCP server must be present on the network connected to the EFLOW VM virtual switch. If no DHCP server is present, EFLOW installation will fail because the VM can't get an IP address.
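+
+ For example, to deploy the EFLOW VM on a DHCP-enabled network, you might use an *external virtual switch* and omit all static IP parameters (the switch name is illustrative):
+
+ ```powershell
+ # No -ip4Address, -ip4GatewayAddress, or -ip4PrefixLength parameters:
+ # the DHCP server on the external network assigns the VM's IP configuration.
+ Deploy-Eflow -vSwitchName "ExternalEflow" -vswitchType "External"
+ ```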
++
+### DNS
+Domain Name System (DNS) translates human-readable domain names (for example, www.microsoft.com) to machine-readable IP addresses (for example, 192.0.2.44). The EFLOW virtual machine uses [*systemd*](https://systemd.io/) (system and service manager), so the DNS or name resolution services are provided to local applications and services via the [systemd-resolved](https://www.man7.org/linux/man-pages/man8/systemd-resolved.service.8.html) service.
+
+By default, the EFLOW VM DNS configuration file contains the local stub *127.0.0.53* as the only DNS server. This is redirected to the */etc/resolv.conf* file, which is used to add the name servers used by the system. The local stub is a DNS server that runs locally to resolve DNS queries. In some cases, these queries are forwarded to another DNS server in the network and then cached locally.
+
+It's possible to configure the EFLOW virtual machine to use a specific DNS server, or list of servers. To do so, you can use the `Set-EflowVmDnsServers` PowerShell cmdlet. For more information about DNS configuration, see [PowerShell functions for IoT Edge for Linux on Windows](./reference-iot-edge-for-linux-on-windows-functions.md#set-eflowvmdnsservers).
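+
+As a sketch, setting a custom DNS server list might look like the following. The endpoint name and server addresses are placeholders, and the exact parameter names should be verified against the PowerShell functions reference linked above.
+
+```powershell
+# Assign two DNS servers to the EFLOW VM network endpoint
+# (endpoint name and server addresses are examples only)
+Set-EflowVmDnsServers -vendpointName "DESKTOP-EflowInterface" -dnsServers @("10.0.0.2", "8.8.8.8")
+```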
+
+To check the DNS servers assigned to the EFLOW VM, from inside the EFLOW VM use the command `resolvectl status`. The command's output shows a list of the DNS servers configured for each interface. In particular, check the *eth0* interface status, which is the default interface for EFLOW VM communication. Also, make sure to check the IP addresses in the **Current DNS Server** and **DNS Servers** fields. If there's no IP address, or the IP address isn't a valid DNS server address, the DNS service won't work.
+
+![Screenshot of console showing sample output from resolvectl command.](./media/iot-edge-for-linux-on-windows-networking/resolvctl-status.png)
+
+### Static MAC Address
+Hyper-V allows you to create virtual machines with a **static** or **dynamic** MAC address. During EFLOW virtual machine creation, the MAC address is randomly generated and stored locally to keep the same MAC address across virtual machine or Windows host reboots. To query the EFLOW virtual machine MAC address, you can use the following command.
+
+```powershell
+Get-EflowVmAddr
+```
++
+### Multiple Network Interface Cards (NICs)
+There are many network virtual appliances and scenarios that require multiple NICs. The EFLOW virtual machine supports attaching multiple NICs. With multiple NICs, you can better manage your network traffic. You can also isolate traffic between the frontend NIC and backend NICs, or separate data plane traffic from management plane communication.
+
+For example, numerous industrial IoT scenarios require connecting the EFLOW virtual machine both to a demilitarized zone (DMZ) and to the offline network where all the OPC UA-compliant devices are connected. This is just one of many scenarios that can be supported by attaching multiple NICs to the EFLOW VM.
+
+For more information about multiple NICs, see [Multiple NICs support](https://github.com/Azure/iotedge-eflow/wiki/Multiple-NICs).
+
+>[!WARNING]
+>When using the EFLOW multiple NICs feature, you may want to set up different route priorities. By default, EFLOW creates one default route per _ethX_ interface assigned to the VM and assigns a random priority. If all interfaces are connected to the internet, random priorities may not be a problem. However, if one of the NICs is connected to an offline network, you may want to prioritize the online NIC over the offline NIC so the EFLOW VM stays connected to the internet. For more information about custom routing, see [EFLOW routing](https://github.com/Azure/iotedge-eflow/tree/main/samples/networking/routing).
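+
+As a sketch of the DMZ scenario described above, an additional internal network can be attached to an already-deployed EFLOW VM with the `Add-EflowNetwork` and `Add-EflowVmEndpoint` cmdlets. The switch and endpoint names are placeholders; see the multiple NICs guide for the full parameter set.
+
+```powershell
+# Register an existing internal virtual switch as an additional EFLOW network
+Add-EflowNetwork -vswitchName "OfflineOPC" -vswitchType "Internal"
+
+# Attach a new network endpoint (NIC) on that switch to the EFLOW VM
+Add-EflowVmEndpoint -vswitchName "OfflineOPC" -vendpointName "OfflineOPC-EflowInterface"
+```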
+
+## Next steps
+
+Read more about [Azure IoT Edge for Linux on Windows Security](./iot-edge-for-linux-on-windows-security.md).
+
+Learn how to manage EFLOW networking in [Networking configuration for Azure IoT Edge for Linux on Windows](./how-to-configure-iot-edge-for-linux-on-windows-networking.md).
iot-edge Iot Edge For Linux On Windows Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-for-linux-on-windows-security.md
+
+ Title: Azure IoT Edge for Linux on Windows security | Microsoft Docs
+description: Security framework - Azure IoT Edge for Linux on Windows
+keywords:
++ Last updated : 03/14/2022+++++
+# Security
++
+Azure IoT Edge for Linux on Windows benefits from all the security offerings of running on a Windows client or server host and ensures that all the additional components keep the same security posture. This article provides information about the security features that are enabled by default and about the optional features you can enable.
+
+## Virtual machine security
+
+The IoT Edge for Linux (EFLOW) curated virtual machine is based on [Microsoft CBL-Mariner](https://github.com/microsoft/CBL-Mariner). CBL-Mariner is an internal Linux distribution for Microsoft's cloud infrastructure and edge products and services. CBL-Mariner is designed to provide a consistent platform for these devices and services and enhances Microsoft's ability to stay current on Linux updates. For more information, see [CBL-Mariner security](https://github.com/microsoft/CBL-Mariner/blob/1.0/SECURITY.md).
+
+<!-- 1.1 -->
+The EFLOW virtual machine is built on a three-point comprehensive security platform:
+1. Servicing updates
+1. Read-only root filesystem
+1. Firewall lockdown
+
+<!-- end 1.1 -->
+
+<!-- 1.2 -->
+The EFLOW virtual machine is built on a four-point comprehensive security platform:
+1. Servicing updates
+1. Read-only root filesystem
+1. Firewall lockdown
+1. DM-Verity
+<!-- end 1.2 -->
+
+### Servicing updates
+When security vulnerabilities arise, CBL-Mariner makes the latest security patches and fixes available through EFLOW monthly updates. The virtual machine has no package manager, so it's not possible to manually download and install RPM packages. All updates to the virtual machine are installed using EFLOW updates.
+
+### Read-only root filesystem
+The EFLOW virtual machine is made up of two main partitions: *rootfs* and *data*. The rootFS-A and rootFS-B partitions are interchangeable, and one of the two is mounted as a read-only filesystem at `/`, which means that no changes are allowed to files stored inside this partition. On the other hand, the *data* partition mounted under `/var` is readable and writeable, allowing the user to modify the content inside the partition. The data stored on this partition isn't manipulated by the update process and hence won't be modified across updates.
+
+Because you may need write access to `/etc`, `/home`, `/root`, and `/var` for specific use cases, write access for these directories is provided by overlaying them onto the data partition, specifically onto the directory `/var/.eflow/overlays`. As a result, users can write anything to the previously mentioned directories. For more information about overlays, see [*overlayfs*](https://docs.kernel.org/filesystems/overlayfs.html).
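+
+You can observe this layout from inside the EFLOW VM; for example, the following commands show the read-only root and the overlay-backed directories (output varies by version):
+
+```bash
+# The root filesystem is mounted read-only
+findmnt /
+
+# /etc is an overlayfs mount backed by the writeable data partition
+findmnt /etc
+```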
+
+<!-- 1.1 -->
+
+![EFLOW 1.1LTS partition layout](./media/iot-edge-for-linux-on-windows-security/eflow-lts-partition-layout.png)
+
+| Partition | Size | Description |
+| |- | |
+| Boot | 192 MB | Contains the bootloader |
+| RootFS A | 2 GB | One of two active/passive partitions holding the root file system |
+| RootFS B | 2 GB | One of two active/passive partitions holding the root file system |
+| AB Update | 2 GB | Holds the update files. Ensure there's always enough space in the VM for updates |
+| Data | 2 GB to 2 TB | Stateful partition for storing persistent data across updates. Expandable according to the deployment configuration |
+
+<!-- end 1.1 -->
+
+<!-- 1.2 -->
+
+![EFLOW CR partition layout](./media/iot-edge-for-linux-on-windows-security/eflow-cr-partition-layout.png)
+
+| Partition | Size | Description |
+| |- | |
+| BootEFIA | 8 MB | Firmware partition A for future GRUBless boot |
+| BootEFIB | 8 MB | Firmware partition B for future GRUBless boot |
+| BootA | 192 MB | Contains the bootloader for A partition |
+| BootB | 192 MB | Contains the bootloader for B partition |
+| RootFS A | 4 GB | One of two active/passive partitions holding the root file system |
+| RootFS B | 4 GB | One of two active/passive partitions holding the root file system |
+| Unused | 4 GB | This partition is reserved for future use |
+| Log | 1 GB or 6 GB | Logs specific partition mounted under /logs |
+| Data | 2 GB to 2 TB | Stateful partition for storing persistent data across updates. Expandable according to the deployment configuration |
+
+<!-- end 1.2 -->
+
+>[!NOTE]
+>The partition layout represents the logical disk size and doesn't indicate the physical space the virtual machine will occupy on the host OS disk.
+
+### Firewall
+
+By default, the EFLOW virtual machine uses the [*iptables*](https://git.netfilter.org/) utility for firewall configuration. *Iptables* is used to set up, maintain, and inspect the tables of IP packet filter rules in the Linux kernel. The default implementation only allows incoming traffic on port 22 (the SSH service) and blocks all other traffic. You can check the *iptables* configuration with the following steps:
+
+1. Open an elevated PowerShell session
+1. Connect to the EFLOW virtual machine
+ ```powershell
+ Connect-EflowVm
+ ```
+1. List all the *iptables* rules
+ ```bash
+ sudo iptables -L
+ ```
+
+ ![EFLOW iptables default](./media/iot-edge-for-linux-on-windows-security/default-iptables-output.png)
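+
+For example, to temporarily allow incoming traffic on an extra port (a hypothetical port 8883 is shown), you could append an *iptables* rule from inside the EFLOW VM. Rules added this way aren't guaranteed to persist across VM reboots unless you save them.
+
+```bash
+# Allow incoming TCP traffic on port 8883 (example only)
+sudo iptables -A INPUT -p tcp --dport 8883 -j ACCEPT
+
+# Verify the rule was appended to the INPUT chain
+sudo iptables -L INPUT -n --line-numbers
+```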
+
+<!-- 1.2 -->
+### Verified boot
+
+The EFLOW virtual machine supports **Verified boot** through the included *device-mapper-verity (dm-verity)* kernel feature, which provides transparent integrity checking of block devices. *dm-verity* helps prevent persistent rootkits that can hold onto root privileges and compromise devices. This feature ensures that the virtual machine base software image is the same as built and wasn't altered. The virtual machine uses the *dm-verity* feature to check a specific block device, the underlying storage layer of the file system, and determine whether it matches its expected configuration.
+
+By default, this feature is enabled in the virtual machine, and can't be turned off. For more information, see [dm-verity](https://www.kernel.org/doc/html/latest/admin-guide/device-mapper/verity.html#).
+
+<!-- end 1.2 -->
+
+## Trusted platform module (TPM)
+[Trusted platform module (TPM)](/windows/security/information-protection/tpm/trusted-platform-module-top-node) technology is designed to provide hardware-based, security-related functions. A TPM chip is a secure crypto-processor that is designed to carry out cryptographic operations. The chip includes multiple physical security mechanisms to make it tamper resistant, and malicious software is unable to tamper with the security functions of the TPM.
+
+The EFLOW virtual machine doesn't support vTPM. However, you can enable or disable the TPM passthrough feature, which allows the EFLOW virtual machine to use the Windows host OS TPM. This enables two main scenarios:
+* Use TPM technology for IoT Edge device provisioning using the Device Provisioning Service (DPS). For more information, see [Create and provision an IoT Edge for Linux on Windows device at scale by using a TPM](./how-to-provision-devices-at-scale-linux-on-windows-tpm.md).
+* Read-only access to cryptographic keys stored inside the TPM. For more information, see [Set-EflowVmFeature to enable TPM passthrough](./reference-iot-edge-for-linux-on-windows-functions.md#set-eflowvmfeature).
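+
+For example, enabling TPM passthrough from an elevated PowerShell session might look like the following; check the `Set-EflowVmFeature` reference for the full syntax and supported feature names.
+
+```powershell
+# Enable passthrough of the Windows host TPM to the EFLOW VM
+Set-EflowVmFeature -feature "DpsTpm" -enable
+
+# Verify that the feature is enabled
+Verify-EflowVmFeature -feature "DpsTpm"
+```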
++
+## Secure host & virtual machine communication
+EFLOW provides multiple ways to interact with the virtual machine by exposing a rich PowerShell module implementation. For more information, see [PowerShell functions for IoT Edge for Linux on Windows](./reference-iot-edge-for-linux-on-windows-functions.md#set-eflowvmfeature). This module requires an elevated session to run, and it's signed using a Microsoft Corporation certificate.
+
+All communications between the Windows host operating system and the EFLOW virtual machine required by the PowerShell cmdlets are done over an SSH channel. By default, the virtual machine SSH service doesn't allow authentication via username and password and is limited to certificate authentication. The certificate is created during the EFLOW deployment process and is unique to each EFLOW installation. Furthermore, to prevent SSH brute-force attacks, the virtual machine blocks an IP address that attempts more than three connections per minute to the SSH service.
+
+<!-- 1.2 -->
+In the EFLOW Continuous Release (CR) version, we introduced a change in the transport channel used to establish the SSH connection. Originally, the SSH service ran on TCP port 22, which could be accessed by any external device on the same network by opening a TCP socket to that port. For security reasons, EFLOW CR runs the SSH service over Hyper-V sockets instead of normal TCP sockets. All communication over Hyper-V sockets runs between the Windows host OS and the EFLOW virtual machine, without using networking. This limits access to the SSH service, restricting connections to the Windows host OS only. For more information, see [Hyper-V sockets](/virtualization/hyper-v-on-windows/user-guide/make-integration-service).
+
+<!-- end 1.2 -->
++
+## Next steps
+
+Read more about [Windows IoT security](/windows/iot/iot-enterprise/os-features/security)
+
+Stay up-to-date with the latest [IoT Edge for Linux on Windows updates](./iot-edge-for-linux-on-windows.md)
iot-edge Iot Edge For Linux On Windows Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-for-linux-on-windows-support.md
+
+ Title: Supported operating systems, container engines - Azure IoT Edge for Linux on Windows
+description: Learn which operating systems can run Azure IoT Edge for Linux on Windows
++ Last updated : 03/15/2022+++++
+# Azure IoT Edge for Linux on Windows supported systems
++
+This article provides details about which systems are supported by IoT Edge for Linux on Windows, whether generally available or in preview.
+
+## Get support
+
+If you experience problems while using Azure IoT Edge for Linux on Windows, there are several ways to seek support. Try one of the following channels for support:
+
+**Reporting bugs** - Bugs can be reported on the [issues page](https://github.com/azure/iotedge-eflow/issues) of the project. Bugs related to Azure IoT Edge can be reported on the [IoT Edge issues page](https://github.com/azure/iotedge/issues). Fixes rapidly make their way from the projects into product updates.
+
+**Microsoft Customer Support team** - Users who have a [support plan](https://azure.microsoft.com/support/plans/) can engage the Microsoft Customer Support team by creating a support ticket directly from the [Azure portal](https://portal.azure.com).
++
+## Container engines
+
+By default, Azure IoT Edge for Linux on Windows includes the IoT Edge runtime as part of the virtual machine composition. The IoT Edge runtime provides the Moby engine as the container engine to run modules implemented as containers. This container engine is based on the Moby open-source project. For more information about container engines, support, and IoT Edge, see [IoT Edge platform support](./support.md).
++
+## Operating systems
+
+IoT Edge for Linux on Windows uses IoT Edge in a Linux virtual machine running on a Windows host. In this way, you can run Linux modules on a Windows device. Azure IoT Edge for Linux on Windows runs on the following Windows SKUs:
+
+* **Windows Client**
+ * Pro, Enterprise, IoT Enterprise SKUs
+ * Windows 10 - Minimum build 17763 with all current cumulative updates installed
+ * Windows 11
+* **Windows Server**
+ * Windows Server 2019 - Minimum build 17763 with all current cumulative updates installed
+ * Windows Server 2022
++
+## Platform support
+Azure IoT Edge for Linux on Windows supports the following architectures:
+
+| Version | AMD64 | ARM64 |
+| - | -- | -- |
+| EFLOW 1.1 LTS | ![AMD64](./media/support/green-check.png) | |
+| EFLOW Continuous Release (CR) ([Public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/)) | ![AMD64](./media/support/green-check.png) | ![ARM64](./media/support/green-check.png) |
++
+## Virtual machines
+
+Azure IoT Edge for Linux on Windows can run in Windows virtual machines. Using a virtual machine as an IoT Edge device is common when customers want to augment existing infrastructure with edge intelligence. In order to run the EFLOW virtual machine inside a Windows VM, the host VM must support nested virtualization. There are two forms of nested virtualization compatible with Azure IoT Edge for Linux on Windows. Users can choose to deploy through a local VM or Azure VM. For more information, see [EFLOW Nested virtualization](./nested-virtualization.md).
+
+### VMware virtual machine
+
+Azure IoT Edge for Linux on Windows supports running inside a Windows virtual machine running on top of the [VMware ESXi](https://www.vmware.com/products/esxi-and-esx.html) product family. Specific networking and virtualization configurations are needed to support this scenario. For more information about VMware configuration, see [EFLOW nested virtualization](./nested-virtualization.md).
++
+## Releases
+
+IoT Edge for Linux on Windows release assets and release notes are available on the [iotedge-eflow releases](https://github.com/Azure/iotedge-eflow/releases) page. This section reflects information from those release notes to help you visualize the components of each version more easily.
+
+The following table lists the components included in each release. Each release train is independent, and we don't guarantee backwards compatibility or migration between versions. For more information about IoT Edge versions, see [IoT Edge platform support](./support.md).
+
+| Release | IoT Edge | CBL-Mariner | Defender for IoT |
+| - | -- | -- | - |
+| **1.1 LTS** | 1.1 | 1.0 | - |
+| **Continuous Release** | 1.2 | 1.0 | 3.12.3 |
++
+## Minimum system requirements
+
+Azure IoT Edge for Linux on Windows runs well on hardware ranging from small edge devices to server-grade machines. Choosing the right hardware for your scenario depends on the workloads that you want to run.
+
+A Windows device with the following minimum requirements:
+
+* Hardware requirements
+ * Minimum Free Memory: 1 GB
+ * Minimum Free Disk Space: 10 GB
+
+* Virtualization support
+ * On Windows 10, enable Hyper-V. For more information, see [Install Hyper-V on Windows 10](/virtualization/hyper-v-on-windows/quick-start/enable-hyper-v).
+ * On Windows Server, install the Hyper-V role and create a default network switch. For more information, see [Nested virtualization for Azure IoT Edge for Linux on Windows](./nested-virtualization.md).
+ * On a virtual machine, configure nested virtualization. For more information, see [nested virtualization](./nested-virtualization.md).
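+
+As a quick sketch, you can check these prerequisites from an elevated PowerShell session on a Windows client SKU (feature and role names differ on Windows Server):
+
+```powershell
+# Check whether the Hyper-V optional feature is enabled (client SKUs)
+Get-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V
+
+# Check free physical memory (in KB) and free space on the C: drive
+Get-CimInstance Win32_OperatingSystem | Select-Object FreePhysicalMemory
+Get-PSDrive C | Select-Object Free
+```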
iot-edge Iot Edge For Linux On Windows Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-for-linux-on-windows-updates.md
+
+ Title: Azure IoT Edge for Linux on Windows updates
+description: Overview of Azure IoT Edge for Linux on Windows updates
++
++++ Last updated : 03/14/2022+++
+# Update IoT Edge for Linux on Windows
++
+As the IoT Edge for Linux on Windows (EFLOW) application releases new versions, you'll want to update your IoT Edge devices for the latest features and security improvements. This article provides information about how to update your IoT Edge for Linux on Windows devices when a new version is available.
+
+With IoT Edge for Linux on Windows, IoT Edge runs in a Linux virtual machine hosted on a Windows device. This virtual machine is pre-installed with IoT Edge and has no package manager, so you can't manually update or change any of the VM components. Instead, the virtual machine is managed with Microsoft Update to keep the components up to date automatically.
+
+The EFLOW virtual machine is designed to be reliably updated via Microsoft Update. The virtual machine operating system uses an A/B update partition scheme to make each update safe and to enable a rollback to the previous version if anything goes wrong during the update process.
+
+Each update consists of two main components that may be updated to the latest versions. The first is the EFLOW virtual machine and its internal components. For more information about EFLOW, see [Azure IoT Edge for Linux on Windows composition](./iot-edge-for-linux-on-windows.md). This also includes the virtual machine base operating system. The EFLOW virtual machine is based on [Microsoft CBL-Mariner](https://github.com/microsoft/CBL-Mariner), and each update provides performance and security fixes to keep the OS current with the latest CVE patches. The EFLOW release notes indicate the CBL-Mariner version used, and you can check the [CBL-Mariner releases](https://github.com/microsoft/CBL-Mariner/releases) to get the list of CVEs fixed in each version.
+
+The second component is the group of Windows runtime components needed to run and interoperate with the EFLOW virtual machine. The virtual machine lifecycle and interop are managed through three components: the WSSDAgent, the EFLOWProxy service, and the PowerShell module.
+
+EFLOW updates are sequential, and you must update to every version in order. To get to the latest version, either do a fresh installation using the latest available version or apply all the previous servicing updates up to the desired version.
+
+To find the latest version of Azure IoT Edge for Linux on Windows, see [EFLOW releases](https://aka.ms/AzEFLOW-Releases).
+
+<!-- 1.2 -->
+
+>[!IMPORTANT]
+>This is a Public Preview version of [Azure IoT Edge for Linux on Windows continuous release (EFLOW CR)](./version-history.md), not intended for production use. A clean install may be required for production use once the final General Availability (GA) release is available.
+>
+>To find out if you're currently using the continuous release version, navigate to **Settings** > **Apps** on your Windows device. Find **Azure IoT Edge** in the list of apps and features. If your listed version is 1.2.x.y, you are running the continuous release version.
+<!-- end 1.2 -->
+
+## Update using Microsoft Update
+
+To receive IoT Edge for Linux on Windows updates, the Windows host must be configured to receive updates for other Microsoft products. By default, Microsoft Update is turned on during EFLOW installation. If custom configuration is needed after EFLOW installation, you can toggle this option with the following steps:
+
+1. Open **Settings** on the Windows host.
+
+1. Select **Updates & Security**.
+
+1. Select **Advanced options**.
+
+1. Toggle the *Receive updates for other Microsoft products when you update Windows* button to **On**.
++
+## Update using Windows Server Update Services (WSUS)
+
+On-premises updates using WSUS are supported for IoT Edge for Linux on Windows. For more information about WSUS, see [Device Management Overview - WSUS](/windows/iot/iot-enterprise/device-management/device-management-overview#windows-server-update-services-wsus).
++
+## Offline manual update
+
+In some scenarios with restricted or limited internet connectivity, you may want to manually apply EFLOW updates offline. This is possible using Microsoft Update offline mechanisms. You can manually download and install IoT Edge for Linux on Windows updates with the following steps:
+
+<!-- 1.1 -->
+1. Check the currently installed EFLOW version. Open **Settings**, select **Apps** > **Apps & features**, and search for *Azure IoT Edge LTS*.
+
+1. Search and download the required update from [EFLOW - Microsoft Update catalog](https://www.catalog.update.microsoft.com/Search.aspx?q=Azure%20IoT%20Edge%20for%20Linux%20on%20Windows).
+
+1. Extract *AzureIoTEdge.msi* from the downloaded *.cab* file.
+
+1. Install the extracted *AzureIoTEdge.msi*.
+<!-- end 1.1 -->
+
+<!-- 1.2 -->
+1. Check the currently installed EFLOW version. Open **Settings**, select **Apps** > **Apps & features**, and search for *Azure IoT Edge*.
+
+1. Search and download the required update from [EFLOW - Microsoft Update catalog](https://www.catalog.update.microsoft.com/Search.aspx?q=Azure%20IoT%20Edge%20for%20Linux%20on%20Windows).
+
+1. Extract *AzureIoTEdge.msi* from the downloaded *.cab* file.
+
+1. Install the extracted *AzureIoTEdge.msi*.
+<!-- end 1.2 -->
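+
+As an alternative to the Settings app, you can query the installed EFLOW version from PowerShell. The display name to match may vary between the LTS and CR releases.
+
+```powershell
+# Query the installed EFLOW version from the uninstall registry entries
+Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*" |
+    Where-Object { $_.DisplayName -like "*IoT Edge*" } |
+    Select-Object DisplayName, DisplayVersion
+```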
++
+## Managing Microsoft Updates
+
+As explained before, IoT Edge for Linux on Windows updates are serviced through the Microsoft Update channel, so to turn EFLOW updates on or off, you'll have to manage Microsoft Updates. Listed below are some of the ways to automate turning Microsoft Updates on or off. For more information about managing OS updates, see [OS Updates](/windows/iot/iot-enterprise/os-features/updates#completely-turn-off-windows-updates).
+
+1. **CSP Policies** - Use the **Update/AllowMUUpdateService** CSP policy. For more information about the Microsoft Update CSP policy, see [Policy CSP - MU Update](/windows/client-management/mdm/policy-csp-update#update-allowmuupdateservice).
+
+1. **Manually manage Microsoft Updates** - For more information about how to opt in to Microsoft Updates, see [Opt-In to Microsoft Update](/windows/win32/wua_sdk/opt-in-to-microsoft-update).
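For illustration, the CSP approach above can be expressed as an MDM OMA-URI setting along these lines (a sketch based on the linked CSP documentation; confirm the exact values against your MDM tooling):

```
OMA-URI:    ./Device/Vendor/MSFT/Policy/Config/Update/AllowMUUpdateService
Data type:  Integer
Value:      1    (1 = allow the Microsoft Update service; 0 = not allowed)
```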
+
+<!-- 1.1 -->
+## Special case: Migration from HCS to VMMS on Server SKUs
+
+If you're updating a Windows Server SKU device from a version of IoT Edge for Linux on Windows earlier than [1.1.2110.0311](https://github.com/Azure/iotedge-eflow/releases/tag/1.1.2110.03111) to the latest available version, you need to do a manual migration.
+
+Update [1.1.2110.0311](https://github.com/Azure/iotedge-eflow/releases/tag/1.1.2110.03111) introduced a change to the VM technology (HCS to VMMS) used for EFLOW Windows Server deployments. You can execute the VM migration with the following steps:
+
+ 1. Using Microsoft Update, download and install the [1.1.2110.0311](https://github.com/Azure/iotedge-eflow/releases/tag/1.1.2110.03111) update (same as any other EFLOW update; no manual steps are needed as long as EFLOW updates are turned on).
+ 2. Once the EFLOW update is finished, open an elevated PowerShell session.
+ 3. Run the migration script:
+
+ ```powershell
+ Migrate-EflowVmFromHcsToVmms
+ ```
+
+>[!NOTE]
+>Fresh EFLOW 1.1.2110.0311 MSI installations on Windows Server SKUs will result in EFLOW deployments using VMMS technology, so no migration is needed.
+<!-- end 1.1 -->
+
+## Migrations between EFLOW 1.1 LTS and EFLOW CR
+
+IoT Edge for Linux on Windows doesn't support migrations between the different release trains. If you want to move from the 1.1 LTS version to the continuous release (CR) version or vice versa, you'll have to uninstall the current version and install the new desired version.
++
+## Next steps
+
+View the latest [Azure IoT Edge for Linux on Windows releases](https://github.com/Azure/iotedge-eflow/releases).
+
+Stay up-to-date with recent updates and announcements in the [Internet of Things blog](https://azure.microsoft.com/blog/topics/internet-of-things/).
iot-edge Iot Edge For Linux On Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-for-linux-on-windows.md
Azure IoT Edge for Linux on Windows uses the following components to enable Linu
* **Windows Admin Center**: An Azure IoT Edge extension for Windows Admin Center facilitates installation, configuration, and diagnostics of Azure IoT Edge on the Linux virtual machine. Windows Admin Center can deploy Azure IoT Edge for Linux on Windows on the local device, or can connect to target devices and manage them remotely.
-* **Microsoft Update**: Integration with Microsoft Update keeps the Windows runtime components, the CBL-Mariner Linux VM, and Azure IoT Edge up to date.
+* **Microsoft Update**: Integration with Microsoft Update keeps the Windows runtime components, the CBL-Mariner Linux VM, and Azure IoT Edge up to date. For more information about IoT Edge for Linux on Windows updates, see [Update IoT Edge for Linux on Windows](./iot-edge-for-linux-on-windows-updates.md).
+ ![Windows and the Linux VM run in parallel, while the Windows Admin Center controls both components](./media/iot-edge-for-linux-on-windows/architecture-and-communication.png) :::moniker-end
Azure IoT Edge for Linux on Windows uses the following components to enable Linu
:::moniker range=">=iotedge-2020-11" * **A Linux virtual machine running Azure IoT Edge**: A Linux virtual machine, based on Microsoft's first party [CBL-Mariner](https://github.com/microsoft/CBL-Mariner) operating system, is built with the Azure IoT Edge runtime and validated as a tier 1 supported environment for Azure IoT Edge workloads.
-* **Microsoft Update**: Integration with Microsoft Update keeps the Windows runtime components, the CBL-Mariner Linux VM, and Azure IoT Edge up to date.
+* **Microsoft Update**: Integration with Microsoft Update keeps the Windows runtime components, the CBL-Mariner Linux VM, and Azure IoT Edge up to date. For more information about IoT Edge for Linux on Windows updates, see [Update IoT Edge for Linux on Windows](./iot-edge-for-linux-on-windows-updates.md).
> [!NOTE] > Azure IoT Edge for Linux on Windows extension for Windows Amin Center (WAC) is not supported with this EFLOW version.
A Windows device with the following minimum requirements:
* Minimum Free Memory: 1 GB * Minimum Free Disk Space: 10 GB
+For more information about IoT Edge for Linux on Windows requirements, see [Azure IoT Edge for Linux on Windows supported systems](./iot-edge-for-linux-on-windows-support.md).
+ ## Platform support
-Azure IoT Edge for Linux on Windows supports the following architectures:
-| Version | AMD64 | ARM64 |
-| - | -- | -- |
-| EFLOW 1.1 LTS | ![AMD64](./media/support/green-check.png) | |
-| EFLOW CR (public preview) | ![AMD64](./media/support/green-check.png) | ![ARM64](./media/support/green-check.png) |
+Azure IoT Edge for Linux on Windows supports both AMD64 and ARM64 architectures. For more information about EFLOW platform support, see [Azure IoT Edge for Linux on Windows supported systems](./iot-edge-for-linux-on-windows-support.md)
+ ## Samples
iot-edge Nested Virtualization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/nested-virtualization.md
[!INCLUDE [iot-edge-version-all-supported](../../includes/iot-edge-version-all-supported.md)]
-There are two forms of nested virtualization compatible with Azure IoT Edge for Linux on Windows. Users can choose to deploy through a local VM or Azure VM. This article will provide users clarity on which option is best for their scenario and provide insight into configuration requirements.
+There are three forms of nested virtualization compatible with Azure IoT Edge for Linux on Windows. Users can choose to deploy through a local virtual machine (using the Hyper-V hypervisor), a VMware Windows virtual machine, or an Azure virtual machine. This article provides clarity on which option is best for your scenario and gives insight into configuration requirements.
> [!NOTE]
->
> Ensure to enable one [networking option](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization#networking-options) for nested virtualization. Failing to do so will result in EFLOW installation errors. ## Deployment on local VM This is the baseline approach for any Windows VM that hosts Azure IoT Edge for Linux on Windows. For this case, nested virtualization needs to be enabled before starting the deployment. Read [Run Hyper-V in a Virtual Machine with Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization) for more information on how to configure this scenario.
-If you are using Windows Server, make sure you [install the Hyper-V role](/windows-server/virtualization/hyper-v/get-started/install-the-hyper-v-role-on-windows-server).
+If you're using Windows Server, make sure you [install the Hyper-V role](/windows-server/virtualization/hyper-v/get-started/install-the-hyper-v-role-on-windows-server).
+
+## Deployment on Windows VM on VMware
+
+VMware ESXi [6.7](https://docs.vmware.com/en/VMware-vSphere/6.7/vsphere-esxi-67-installation-setup-guide.pdf) and [7.0](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-vcenter-server-70-release-notes.html) versions support nested virtualization needed for hosting Azure IoT Edge for Linux on Windows on top of a Windows virtual machine.
+
+To set up Azure IoT Edge for Linux on Windows on a VMware ESXi Windows Server virtual machine, use the following steps:
+
+1. Create a Windows virtual machine on the VMware ESXi host. For more information about VMware VM deployment, see [VMware - Deploying Virtual Machines](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vm_admin.doc/GUID-39D19B2B-A11C-42AE-AC80-DDA8682AB42C.html).
+
+1. Turn off the virtual machine created in the previous step.
+
+1. Select the Windows virtual machine and then **Edit settings**.
+
+1. Search for _Hardware virtualization_ and turn on _Expose hardware assisted virtualization to the guest OS_.
+
+1. Select **Save** and start the virtual machine.
+
+1. Install Hyper-V hypervisor. If you're using Windows client, make sure you [Install Hyper-V on Windows 10](/virtualization/hyper-v-on-windows/quick-start/enable-hyper-v). If you're using Windows Server, make sure you [install the Hyper-V role](/windows-server/virtualization/hyper-v/get-started/install-the-hyper-v-role-on-windows-server).
+
+> [!NOTE]
+> For VMware Windows virtual machines, if you plan to use an **external virtual switch** for the EFLOW virtual machine networking, make sure you enable _Promiscuous mode_. For more information, see [Configuring promiscuous mode on a virtual switch or portgroup](https://kb.vmware.com/s/article/1004099). Failing to do so will result in EFLOW installation errors.
+ ## Deployment on Azure VMs
-Azure IoT Edge for Linux on Windows is not compatible on an Azure VM running the Server SKU unless a script is executed that brings up a default switch. For more information on how to bring up a default switch, see [Create virtual switch for Linux on Windows](how-to-create-virtual-switch.md).
+Azure IoT Edge for Linux on Windows isn't compatible on an Azure VM running the Server SKU unless a script is executed that brings up a default switch. For more information on how to bring up a default switch, see [Create virtual switch for Linux on Windows](how-to-create-virtual-switch.md).
> [!NOTE]
->
-> Any Azure VMs that is supposed to host EFLOW must be a VM that [supports nested virtualization](../virtual-machines/acu.md)
+> Any Azure VM that's used to host EFLOW must be a VM that [supports nested virtualization](../virtual-machines/acu.md). Also, Azure VMs don't support using an **external virtual switch**.
iot-edge Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/version-history.md
Azure IoT Edge is a product built from the open-source IoT Edge project hosted o
The IoT Edge documentation on this site is available for two different versions of the product, so that you can choose the content that applies to your IoT Edge environment. Currently, the two supported versions are:
-* **IoT Edge 1.2** contains content for new features and capabilities that are in the latest stable release. This version of the documentation also contains content for the IoT Edge for Linux on Windows (EFLOW) continuous release version, which is based on IoT Edge 1.2 and contains the latest features and capabilities. IoT Edge 1.2 is now bundled with the [Microsoft Defender for IoT micro-agent for Edge](/azure/defender-for-iot/device-builders/overview).
+* **IoT Edge 1.2** contains content for new features and capabilities that are in the latest stable release. This version of the documentation also contains content for the IoT Edge for Linux on Windows (EFLOW) continuous release version, which is based on IoT Edge 1.2 and contains the latest features and capabilities. IoT Edge 1.2 is now bundled with the [Microsoft Defender for IoT micro-agent for Edge](../defender-for-iot/device-builders/overview.md).
* **IoT Edge 1.1 (LTS)** is the first long-term support (LTS) version of IoT Edge. The documentation for this version covers all features and capabilities from all previous versions through 1.1. This version of the documentation also contains content for the IoT Edge for Linux on Windows long-term support version, which is based on IoT Edge 1.1 LTS. * This documentation version will be stable through the supported lifetime of version 1.1, and won't reflect new features released in later versions. IoT Edge 1.1 LTS will be supported until December 3, 2022 to match the [.NET Core 3.1 release lifecycle](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
This table provides recent version history for IoT Edge package releases, and hi
| Release notes and assets | Type | Date | Highlights | | | - | - | - |
-| [1.2](https://github.com/Azure/azure-iotedge/releases/tag/1.2.0) | Stable | April 2021 | [IoT Edge devices behind gateways](how-to-connect-downstream-iot-edge-device.md?view=iotedge-2020-11&preserve-view=true)<br>[IoT Edge MQTT broker (preview)](how-to-publish-subscribe.md?view=iotedge-2020-11&preserve-view=true)<br>New IoT Edge packages introduced, with new installation and configuration steps. For more information, see [Update from 1.0 or 1.1 to 1.2](how-to-update-iot-edge.md#special-case-update-from-10-or-11-to-12).<br>Includes [Microsoft Defender for IoT micro-agent for Edge](/azure/defender-for-iot/device-builders/overview).
+| [1.2](https://github.com/Azure/azure-iotedge/releases/tag/1.2.0) | Stable | April 2021 | [IoT Edge devices behind gateways](how-to-connect-downstream-iot-edge-device.md?view=iotedge-2020-11&preserve-view=true)<br>[IoT Edge MQTT broker (preview)](how-to-publish-subscribe.md?view=iotedge-2020-11&preserve-view=true)<br>New IoT Edge packages introduced, with new installation and configuration steps. For more information, see [Update from 1.0 or 1.1 to 1.2](how-to-update-iot-edge.md#special-case-update-from-10-or-11-to-12).<br>Includes [Microsoft Defender for IoT micro-agent for Edge](../defender-for-iot/device-builders/overview.md).
| [1.1](https://github.com/Azure/azure-iotedge/releases/tag/1.1.0) | Long-term support (LTS) | February 2021 | [Long-term support plan and supported systems updates](support.md) | | [1.0.10](https://github.com/Azure/azure-iotedge/releases/tag/1.0.10) | Stable | October 2020 | [UploadSupportBundle direct method](how-to-retrieve-iot-edge-logs.md#upload-support-bundle-diagnostics)<br>[Upload runtime metrics](how-to-access-built-in-metrics.md)<br>[Route priority and time-to-live](module-composition.md#priority-and-time-to-live)<br>[Module startup order](module-composition.md#configure-modules)<br>[X.509 manual provisioning](how-to-provision-single-device-linux-x509.md) | | [1.0.9](https://github.com/Azure/azure-iotedge/releases/tag/1.0.9) | Stable | March 2020 | X.509 auto-provisioning with DPS<br>[RestartModule direct method](how-to-edgeagent-direct-method.md#restart-module)<br>[support-bundle command](troubleshoot.md#gather-debug-information-with-support-bundle-command) |
This table provides recent version history for IoT Edge package releases, and hi
* [View all Azure IoT Edge releases](https://github.com/Azure/azure-iotedge/releases)
-* [Make or review feature requests in the feedback forum](https://feedback.azure.com/d365community/forum/0e2fff5d-f524-ec11-b6e6-000d3a4f0da0)
+* [Make or review feature requests in the feedback forum](https://feedback.azure.com/d365community/forum/0e2fff5d-f524-ec11-b6e6-000d3a4f0da0)
iot-hub Iot Hub Mqtt Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-mqtt-support.md
In order to ensure a client/IoT Hub connection stays alive, both the service and
|Node.js | 180 seconds | No | |Java | 230 seconds | [Yes](https://github.com/Azure/azure-iot-sdk-java/blob/main/device/iot-device-client/src/main/java/com/microsoft/azure/sdk/iot/device/ClientOptions.java#L64) | |C | 240 seconds | [Yes](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/Iothub_sdk_options.md#mqtt-transport) |
-|C# | 300 seconds | [Yes](https://github.com/Azure/azure-iot-sdk-csharp/blob/main/iothub/device/src/Transport/Mqtt/MqttTransportSettings.cs#L89) |
+|C# | 300 seconds* | [Yes](/dotnet/api/microsoft.azure.devices.client.transport.mqtt.mqtttransportsettings.keepaliveinseconds) |
|Python | 60 seconds | No |
+> *The C# SDK defines the default value of the MQTT `KeepAliveInSeconds` property as 300 seconds, but the SDK actually sends a ping request four times per keep-alive duration. This means the SDK sends a keep-alive ping every 75 seconds.
+ Following the [MQTT spec](http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html#_Toc398718081), IoT Hub's keep-alive ping interval is 1.5 times the client keep-alive value. However, IoT Hub limits the maximum server-side timeout to 29.45 minutes (1767 seconds) because all Azure services are bound to the Azure load balancer TCP idle timeout, which is 29.45 minutes. For example, a device using the Java SDK sends the keep-alive ping, then loses network connectivity. 230 seconds later, the device misses the keep-alive ping because it's offline. However, IoT Hub doesn't close the connection immediately - it waits another `(230 * 1.5) - 230 = 115` seconds before disconnecting the device with the error [404104 DeviceConnectionClosedRemotely](iot-hub-troubleshoot-error-404104-deviceconnectionclosedremotely.md).
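The keep-alive arithmetic in this example can be sketched as a small helper (function names are illustrative, not part of any SDK):

```python
# Azure load balancer TCP idle timeout: 29.45 minutes = 1767 seconds.
AZURE_LB_IDLE_TIMEOUT_S = 1767

def server_side_timeout(client_keepalive_s: float) -> float:
    """IoT Hub's keep-alive interval: 1.5x the client value, capped by the LB idle timeout."""
    return min(client_keepalive_s * 1.5, AZURE_LB_IDLE_TIMEOUT_S)

def disconnect_delay_after_missed_ping(client_keepalive_s: float) -> float:
    """Seconds IoT Hub keeps waiting after the client misses a keep-alive ping."""
    return server_side_timeout(client_keepalive_s) - client_keepalive_s
```

For the Java SDK's 230-second keep-alive, this yields the 115-second grace period described above.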
Connecting to IoT Hub over MQTT using a module identity is similar to the device
* If authenticating with username and password, set the username to `<hubname>.azure-devices.net/{device_id}/{module_id}/?api-version=2021-04-12` and use the SAS token associated with the module identity as your password.
-* Use `devices/{device_id}/modules/{module_id}/messages/events/` as topic for publishing telemetry.
+* Use `devices/{device_id}/modules/{module_id}/messages/events/` as a topic for publishing telemetry.
* Use `devices/{device_id}/modules/{module_id}/messages/events/` as WILL topic.
+* Use `devices/{device_id}/modules/{module_id}/#` as a topic for receiving messages.
+ * The twin GET and PATCH topics are identical for modules and devices. * The twin status topic is identical for modules and devices.
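The module username and topic conventions above can be assembled with a small helper (a sketch; the function name and sample IDs are illustrative):

```python
def module_mqtt_settings(hub_name: str, device_id: str, module_id: str,
                         api_version: str = "2021-04-12") -> dict:
    """Build the MQTT username and topics for an IoT Hub module identity."""
    base = f"devices/{device_id}/modules/{module_id}"
    return {
        "username": f"{hub_name}.azure-devices.net/{device_id}/{module_id}/?api-version={api_version}",
        "telemetry_topic": f"{base}/messages/events/",  # also used as the WILL topic
        "receive_topic": f"{base}/#",                   # subscribe here to receive messages
    }
```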
+For more information about using MQTT with modules, see [Publish and subscribe with IoT Edge](../iot-edge/how-to-publish-subscribe.md) and learn more about the [Edge Hub MQTT endpoint](https://github.com/Azure/iotedge/blob/main/doc/edgehub-api.md#edge-hub-mqtt-endpoint).
+ ## TLS/SSL configuration To use the MQTT protocol directly, your client *must* connect over TLS/SSL. Attempts to skip this step fail with connection errors.
key-vault Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/built-in-roles.md
Managed HSM local RBAC has several built-in roles. You can assign these roles to
|**Key management**| |/keys/read/action|||<center>X</center>||<center>X</center>||<center>X</center>| |/keys/write/action|||<center>X</center>||||
+|/keys/rotate/action|||<center>X</center>||||
|/keys/create|||<center>X</center>|||| |/keys/delete|||<center>X</center>|||| |/keys/deletedKeys/read/action||<center>X</center>|||||
key-vault Key Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/key-rotation.md
+
+ Title: Configure key auto-rotation in Azure Key Vault Managed HSM
+description: Use this guide to learn how to configure automated rotation of a key in Azure Key Vault Managed HSM.
++
+tags: 'rotation'
+++ Last updated : 3/18/2021++
+# Configure key auto-rotation in Azure Managed HSM (preview)
+
+## Overview
+
+Automated key rotation in Managed HSM allows users to configure Managed HSM to automatically generate a new key version at a specified frequency. You can set a rotation policy to configure rotation for each individual
+key and optionally rotate keys on demand. Our recommendation is to rotate encryption keys at least every two years to meet cryptographic best practices. For additional guidance and recommendations, see [NIST SP 800-57 Part 1](https://csrc.nist.gov/publications/detail/sp/800-57-part-1/rev-5/final).
+
+This feature enables end-to-end zero-touch rotation for encryption at rest for Azure services with customer-managed keys (CMK) stored in Azure Managed HSM. Please refer to specific Azure service documentation to see if the service covers end-to-end rotation.
+
+## Pricing
+
+Managed HSM key rotation is offered at no extra cost. For more information about Managed HSM pricing, see the [Azure Key Vault pricing page](https://azure.microsoft.com/pricing/details/key-vault/).
+
+> [!WARNING]
+> Managed HSM has a limit of 100 versions per key. Key versions created as part of automatic or manual rotation count toward this limit.
+
+## Permissions required
+
+Rotating a key or setting a key rotation policy requires specific key management permissions. You can assign the "Managed HSM Crypto User" role to get sufficient permissions to manage rotation policy and on-demand rotation.
+
+For more information on how to configure Local RBAC permissions on Managed HSM, see:
+[Managed HSM role management](role-management.md)
+
+> [!NOTE]
+> Setting a rotation policy requires the "Key Write" permission. Rotating a key on demand requires "Rotation" permissions. Both are included with the "Managed HSM Crypto User" built-in role.
+
+## Key rotation policy
+
+The key rotation policy allows users to configure rotation intervals and set the expiration interval for rotated keys. It must be set before keys can be rotated on demand.
+
+> [!NOTE]
+> Managed HSM does not support Event Grid Notifications
+
+Key rotation policy settings:
+
+- Expiry time: key expiration interval (minimum 28 days). It's used to set the expiration date on a newly rotated key (for example, after rotation, the new key is set to expire in 30 days).
+- Rotation types:
+ - Automatically renew at a given time after creation
+ - Automatically renew at a given time before expiry. 'Expiration Date' must be set on the key for this event to fire.
+
+> [!WARNING]
+> An *automatic* rotation policy cannot mandate that new key versions be created more frequently than once every 28 days. For creation-based rotation policies, this means the minimum value for `timeAfterCreate` is `P28D`. For expiration-based rotation policies, the maximum value for `timeBeforeExpiry` depends on the `expiryTime`. For example, if `expiryTime` is `P56D`, `timeBeforeExpiry` can be at most `P28D`.
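A minimal sketch of these interval constraints as a validation helper (hypothetical names; only day-based `PnD` durations are handled):

```python
import re

MIN_ROTATION_DAYS = 28  # Managed HSM won't rotate more often than every 28 days

def parse_days(duration: str) -> int:
    """Parse a day-based ISO 8601 duration such as 'P56D' into days."""
    match = re.fullmatch(r"P(\d+)D", duration)
    if not match:
        raise ValueError(f"unsupported duration in this sketch: {duration}")
    return int(match.group(1))

def max_time_before_expiry(expiry_time: str) -> int:
    """Largest allowed timeBeforeExpiry (in days) for a given expiryTime."""
    return parse_days(expiry_time) - MIN_ROTATION_DAYS

def is_valid_expiry_trigger(expiry_time: str, time_before_expiry: str) -> bool:
    return parse_days(time_before_expiry) <= max_time_before_expiry(expiry_time)
```

For example, with an `expiryTime` of `P56D`, the largest valid `timeBeforeExpiry` is 28 days, matching the warning above.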
++
+## Configure a key rotation policy
+
+### Azure CLI
+
+Write a key rotation policy and save it to a file. Use ISO8601 Duration formats to specify time intervals. Some example policies are provided in the next section. Use the following command to apply the policy to a key.
+
+```azurecli
+az keyvault key rotation-policy update --hsm-name <hsm-name> --name <key-name> --value </path/to/policy.json>
+```
+#### Example policies
+
+Rotate the key 18 months after creation and set the new key to expire after two years.
+
+```json
+{
+ "lifetimeActions": [
+ {
+ "trigger": {
+ "timeAfterCreate": "P18M",
+ "timeBeforeExpiry": null
+ },
+ "action": {
+ "type": "Rotate"
+ }
+ }
+ ],
+ "attributes": {
+ "expiryTime": "P2Y"
+ }
+}
+```
+
+Rotate the key 28 days before expiration and set the new key to expire after one year.
+
+```json
+{
+ "lifetimeActions": [
+ {
+ "trigger": {
+ "timeAfterCreate": null,
+ "timeBeforeExpiry": "P28D"
+ },
+ "action": {
+ "type": "Rotate"
+ }
+ }
+ ],
+ "attributes": {
+ "expiryTime": "P1Y"
+ }
+}
+```
+
+Remove the key rotation policy (done by setting a blank policy)
+
+```json
+{
+ "lifetimeActions": [],
+ "attributes": {}
+}
+```
+
+## Rotation on demand
+
+Once a rotation policy is set for a key, you can also rotate that key on demand. A rotation policy must be set before on-demand rotation is possible.
+
+### Azure CLI
+```azurecli
+az keyvault key rotate --hsm-name <hsm-name> --name <key-name>
+```
+
+## Known issues
+
+While automatic key rotation is in preview, known issues will be tracked in this section.
+
+### `NoneType is not iterable` exception when Azure CLI receives an empty key rotation policy
+
+When no key rotation policy is configured for a key, or an existing key rotation policy is deleted, the Azure CLI may report this error. This will be patched in a future version of the Azure CLI.
+
+## Resources
+
+- [Managed HSM role management](role-management.md)
+- [Azure Data Encryption At Rest](../../security/fundamentals/encryption-atrest.md)
+- [Azure Storage Encryption](../../storage/common/storage-service-encryption.md)
key-vault Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/overview.md
For pricing information, please see Managed HSM Pools section on [Azure Key Vaul
### Integrated with Azure and Microsoft PaaS/SaaS services -- Generate (or import using [BYOK](hsm-protected-keys-byok.md)) keys and use them to encrypt your data at rest in Azure services such as [Azure Storage](../../storage/common/customer-managed-keys-overview.md), [Azure SQL](../../azure-sql/database/transparent-data-encryption-byok-overview.md), [Azure Information Protection](/azure/information-protection/byok-price-restrictions), and [Customer Key for Microsoft 365](/microsoft-365/compliance/customer-key-set-up). For a more complete list of Azure services which work with Managed HSM, see [Data Encryption Models](/azure/security/fundamentals/encryption-models#supporting-services).
+- Generate (or import using [BYOK](hsm-protected-keys-byok.md)) keys and use them to encrypt your data at rest in Azure services such as [Azure Storage](../../storage/common/customer-managed-keys-overview.md), [Azure SQL](../../azure-sql/database/transparent-data-encryption-byok-overview.md), [Azure Information Protection](/azure/information-protection/byok-price-restrictions), and [Customer Key for Microsoft 365](/microsoft-365/compliance/customer-key-set-up). For a more complete list of Azure services which work with Managed HSM, see [Data Encryption Models](../../security/fundamentals/encryption-models.md#supporting-services).
### Uses same API and management interfaces as Key Vault
For pricing information, please see Managed HSM Pools section on [Azure Key Vaul
- See [Best Practices using Azure Key Vault Managed HSM](best-practices.md) - [Managed HSM Status](https://status.azure.com) - [Managed HSM Service Level Agreement](https://azure.microsoft.com/support/legal/sla/key-vault-managed-hsm/v1_0/)-- [Managed HSM region availability](https://azure.microsoft.com/global-infrastructure/services/?products=key-vault)
+- [Managed HSM region availability](https://azure.microsoft.com/global-infrastructure/services/?products=key-vault)
key-vault Role Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/role-management.md
Title: Managed HSM data plane role management - Azure Key Vault | Microsoft Docs
-description: Use this article to manage role assignments for your managed HSM
+description: Use this article to manage role assignments for your managed HSM.
> [!NOTE] > Key Vault supports two types of resource: vaults and managed HSMs. This article is about **Managed HSM**. If you want to learn how to manage a vault, please see [Manage Key Vault using the Azure CLI](../general/manage-with-cli2.md).
-For an overview of Managed HSM, see [What is Managed HSM?](overview.md). If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+For an overview of Managed HSM, see [What is Managed HSM?](overview.md). If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-This article show you how to manage roles for a Managed HSM data plane. To learn about Managed HSM access control model, see [Managed HSM access control](access-control.md).
+This article shows you how to manage roles for a Managed HSM data plane. To learn about Managed HSM access control model, see [Managed HSM access control](access-control.md).
To allow a security principal (such as a user, a service principal, group or a managed identity) to perform managed HSM data plane operations, they must be assigned a role that permits performing those operations. For example, if you want to allow an application to perform a sign operation using a key, it must be assigned a role that contains the "Microsoft.KeyVault/managedHSM/keys/sign/action" as one of the data actions. A role can be assigned at a specific scope. Managed HSM local RBAC supports two scopes, HSM-wide (`/` or `/keys`) and per key (`/keys/<keyname>`).
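As a sketch, granting a principal permissions such as the sign action mentioned above at the HSM-wide key scope could look like the following (placeholder and sample values; verify the role and scope against your environment):

```azurecli
az keyvault role assignment create --hsm-name <hsm-name> --role "Managed HSM Crypto User" --assignee user@contoso.com --scope /keys
```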
kinect-dk Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/troubleshooting.md
The Body Tracking SDK C# documentation is located [here](https://microsoft.githu
## Changes to contents of Body Tracking packages
-Both the MSI and NuGet packages no longer include the Microsoft Visual C++ Redistributable Package files. Download the latest package [here](https://docs.microsoft.com/cpp/windows/latest-supported-vc-redist).
+Both the MSI and NuGet packages no longer include the Microsoft Visual C++ Redistributable Package files. Download the latest package [here](/cpp/windows/latest-supported-vc-redist).
The NuGet package is back however it no longer includes Microsoft DirectML, or NVIDIA CUDA and TensorRT files. ## Next steps
-[More support information](support.md)
+[More support information](support.md)
load-balancer Load Balancer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-overview.md
Subscribe to the RSS feed and view the latest Azure Load Balancer feature update
## Next steps
-See [Create a public standard load balancer](quickstart-load-balancer-standard-public-portal.md) to get started with using a load balancer.
+* See [Create a public standard load balancer](quickstart-load-balancer-standard-public-portal.md) to get started with using a load balancer.
-For more information on Azure Load Balancer limitations and components, see [Azure Load Balancer components](./components.md) and [Azure Load Balancer concepts](./concepts.md)
+* For more information on Azure Load Balancer limitations and components, see [Azure Load Balancer components](./components.md) and [Azure Load Balancer concepts](./concepts.md)
+
+* [Learn module: Introduction to Azure Load Balancer](/learn/paths/intro-to-azure-application-delivery-services).
load-testing How To Move Between Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-move-between-regions.md
Azure Load Testing resources are region-specific and can't be moved across regio
To get started, you'll need to export and then modify an ARM template. You will also need to download artifacts for any existing tests in the resource.
-1. Export the ARM template that contains settings and information for your Azure Load Testing resource by following the steps mentioned [here](/azure/azure-resource-manager/templates/export-template-portal).
+1. Export the ARM template that contains settings and information for your Azure Load Testing resource by following the steps mentioned [here](../azure-resource-manager/templates/export-template-portal.md).
1. Download the input artifacts for all the existing tests from the resource. Navigate to the **Tests** section in the resource and then click on the test name. **Download the input file** for the test by clicking the More button (...) on the right side of the latest test run.
Load and modify the template so you can create a new Azure Load Testing resource
### Create tests
-Once the resource is created in the target location, you can create new tests by following the steps mentioned [here](/azure/load-testing/quickstart-create-and-run-load-test#create_test).
+Once the resource is created in the target location, you can create new tests by following the steps mentioned [here](./quickstart-create-and-run-load-test.md#create_test).
1. You can refer to the test configuration in the config.yaml file of the input artifacts downloaded earlier.
Once the resource is created in the target location, you can create new tests by
If you are invoking the previous Azure Load Testing resource in a CI/CD workflow, you can update the `loadTestResource` parameter in the [Azure Load testing task](/azure/devops/pipelines/tasks/test/azure-load-testing) or [Azure Load Testing action](https://github.com/marketplace/actions/azure-load-testing) of your workflow. > [!NOTE]
-> If you have configured any of your load test with secrets from Azure Key Vault, make sure to grant the new resource access to the Key Vault following the steps mentioned [here](/azure/load-testing/how-to-use-a-managed-identity?tabs=azure-portal#grant-access-to-your-azure-key-vault).
+> If you have configured any of your load tests with secrets from Azure Key Vault, make sure to grant the new resource access to the Key Vault by following the steps mentioned [here](./how-to-use-a-managed-identity.md?tabs=azure-portal#grant-access-to-your-azure-key-vault).
## Clean up source resources
After the move is complete, delete the Azure Load Testing resource from the sour
## Next steps -- Learn how to run high-scale load tests, see [Set up a high-scale load test](./how-to-high-scale-load.md).
+- Learn how to run high-scale load tests, see [Set up a high-scale load test](./how-to-high-scale-load.md).
load-testing How To Use A Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-use-a-managed-identity.md
Before you can add a user-assigned identity to an Azure Load Testing resource, y
# [Portal](#tab/azure-portal)
-1. Create a user-assigned managed identity by following the instructions mentioned [here](/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-azp#create-a-user-assigned-managed-identity).
+1. Create a user-assigned managed identity by following the instructions mentioned [here](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-azp#create-a-user-assigned-managed-identity).
1. In the [Azure portal](https://portal.azure.com/), go to your Azure Load Testing resource.
You've now granted access to your Azure Load Testing resource to read the secret
## Next steps * To learn how to parameterize a load test by using secrets, see [Parameterize a load test](./how-to-parameterize-load-tests.md).
-* Learn how to [Manage users and roles in Azure Load Testing](./how-to-assign-roles.md).
+* Learn how to [Manage users and roles in Azure Load Testing](./how-to-assign-roles.md).
load-testing Overview What Is Azure Load Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/overview-what-is-azure-load-testing.md
Title: What is Azure Load Testing?
-description: 'Azure Load Testing is a fully managed load-testing service that enables developers to generate high-scale loads to optimize app performance.'
+description: 'Azure Load Testing is a fully managed load-testing service for generating high-scale loads by using existing JMeter scripts to optimize app performance.'
Previously updated : 11/30/2021 Last updated : 04/20/2022 adobe-target: true+ # What is Azure Load Testing Preview? Azure Load Testing Preview is a fully managed load-testing service that enables you to generate high-scale load. The service simulates traffic for your applications, regardless of where they're hosted. Developers, testers, and quality assurance (QA) engineers can use it to optimize application performance, scalability, or capacity.
-You can create a load test by using existing test scripts based on Apache JMeter, a popular open-source load and performance tool. For Azure-based applications, detailed resource metrics help you identify performance bottlenecks. Continuous integration and continuous deployment (CI/CD) workflows allow you to automate regression testing. Learn more about the [key concepts for Azure Load Testing](./concept-load-testing-concepts.md).
+You can create a load test by using existing test scripts based on Apache JMeter, a popular open-source load and performance tool. Azure Load Testing abstracts the infrastructure to run your JMeter script and load test your application. Get started by [creating and running a load test for a web application](./quickstart-create-and-run-load-test.md).
+
+For Azure-based applications, Azure Load Testing collects detailed resource metrics to help you [identify performance bottlenecks](#identify-performance-bottlenecks-by-using-high-scale-load-tests) across your Azure application components.
+
+You can [automate regression testing](#enable-automated-load-testing) by running load tests as part of your continuous integration and continuous deployment (CI/CD) workflow.
+
+Learn more about the [key concepts for Azure Load Testing](./concept-load-testing-concepts.md).
> [!IMPORTANT] > Azure Load Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-## How does Azure Load Testing work?
+## Identify performance bottlenecks by using high-scale load tests
-Azure Load Testing test engines abstract the required infrastructure for running a high-scale load test. The test engines run the Apache JMeter script to simulate a large number of virtual users simultaneously accessing your application endpoints. To scale out the load test, you can configure the number of test engines.
+Performance problems often remain undetected until an application is under load. You can start a high-scale load test in the Azure portal to learn sooner how your application behaves under stress. While the test is running, the Azure Load Testing dashboard provides a live update of the client and server-side metrics.
-Azure Load Testing uses Apache JMeter version 5.4.3 for running load tests. You can use Apache JMeter plugins that are available on https://jmeter-plugins.org in your test script.
+After the load test finishes, you can use the dashboard to analyze the test results and identify performance bottlenecks. For Azure-hosted applications, the dashboard shows detailed resource metrics of the Azure application components. Get started with a tutorial to [identify performance bottlenecks for Azure-hosted applications](./tutorial-identify-bottlenecks-azure-portal.md).
-The application can be hosted anywhere: in Azure, on-premises, or in other clouds. During the load test, the service collects the following resource metrics and displays them in a dashboard:
+Azure Load Testing keeps a history of test runs and allows you to visually [compare multiple runs](./how-to-compare-multiple-test-runs.md) to detect performance regressions.
-- *Client-side metrics* give you details reported by the test engine. These details include the number of virtual users, the request response time, or the number of requests per second.
+You might also [download the test results](./how-to-export-test-results.md) for analysis in a third-party tool.
-- *Server-side metrics* provide information about your Azure application components. Azure Load Testing integrates with Azure Monitor, including Application Insights and Container insights, to capture details from the Azure services. Depending on the type of service, different metrics are available. For example, metrics can be for the number of database reads, the type of HTTP responses, or container resource consumption.
+## Enable automated load testing
-Azure Load Testing automatically incorporates best practices for Azure networking to help make sure that your tests run securely and reliably. Load tests are automatically stopped if the application endpoints or Azure components start throttling requests.
+You can integrate Azure Load Testing in your CI/CD pipeline at meaningful points during the development lifecycle. For example, you could automatically run a load test at the end of each sprint or in a staging environment to validate a release candidate build.
-Data stored in your Azure Load Testing resource is automatically encrypted with keys managed by Microsoft (service-managed keys). This data includes, for example, your Apache JMeter script.
+Get started with [adding load testing to your Azure Pipelines CI/CD workflow](./tutorial-cicd-azure-pipelines.md) or use our [Azure Load Testing GitHub action](./tutorial-cicd-github-actions.md).
+In the test configuration, you [specify pass/fail rules](./how-to-define-test-criteria.md) to catch performance regressions early in the development cycle. For example, when the average response time exceeds a threshold, the test should fail.
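In the YAML test configuration, such rules go under `failureCriteria`. A sketch with illustrative thresholds (metric names follow the Azure Load Testing configuration syntax):

```yaml
failureCriteria:
  - avg(response_time_ms) > 300   # fail if average response time exceeds 300 ms
  - percentage(error) > 5         # fail if more than 5% of requests error
```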
-> [!NOTE]
-> This image shows how Azure Load Testing uses Azure Monitor to capture metrics for app components. It isn't a comprehensive list of supported Azure resources.
+Azure Load Testing will automatically stop an automated load test in response to specific error conditions. You can also use the AutoStop listener in your Apache JMeter script. Stopping automatically safeguards you against failing tests continuing to incur costs, for example, because of an incorrectly configured endpoint URL.
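The service-side stop conditions can be tuned in the test configuration YAML. A sketch using the `autoStop` settings from the service's configuration schema (confirm against your service version; the values are illustrative):

```yaml
autoStop:
  errorPercentage: 80   # stop the test when 80% of requests fail...
  timeWindow: 60        # ...within a 60-second window
```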
-## Identify performance bottlenecks by using high-scale load tests
+You can trigger Azure Load Testing from Azure Pipelines or GitHub Actions workflows.
-Performance problems often remain undetected until an application is under load. You can start a high-scale load test in the Azure portal to learn sooner how your application behaves under stress. While the test is running, the Azure Load Testing dashboard provides a live update of the client and server-side metrics.
+## How does Azure Load Testing work?
-After the load test finishes, you can use the dashboard to analyze the test results and identify performance bottlenecks. For Azure-hosted applications, the dashboard shows detailed resource metrics of the Azure application components.
+Azure Load Testing test engines abstract the required infrastructure for [running a high-scale load test](./how-to-high-scale-load.md). The test engines run the Apache JMeter script to simulate a large number of virtual users simultaneously accessing your application endpoints. To scale out the load test, you can configure the number of test engines.
-Azure Load Testing keeps a history of test runs and allows you to visually compare multiple runs to detect performance regressions.
+Azure Load Testing uses Apache JMeter version 5.4.3 for running load tests. You can use Apache JMeter plugins that are available on https://jmeter-plugins.org in your test script.
-You might also download the test results for analysis in a third-party tool.
+The application can be hosted anywhere: in Azure, on-premises, or in other clouds. During the load test, the service collects the following resource metrics and displays them in a dashboard:
-## Enable automated load testing
+- *Client-side metrics* give you details reported by the test engine. These details include the number of virtual users, the request response time, or the number of requests per second.
-You can integrate Azure Load Testing in your CI/CD pipeline at meaningful points during the development lifecycle. For example, you could automatically run a load test at the end of each sprint or in a staging environment to validate a release candidate build.
+- *Server-side metrics* provide information about your Azure application components. Azure Load Testing integrates with Azure Monitor, including Application Insights and Container insights, to capture details from the Azure services. Depending on the type of service, different metrics are available. For example, metrics can be for the number of database reads, the type of HTTP responses, or container resource consumption.
-In the test configuration, you specify pass/fail rules to catch performance regressions early in the development cycle. For example, when the average response time exceeds a threshold, the test should fail.
+Azure Load Testing automatically incorporates best practices for Azure networking to help make sure that your tests run securely and reliably. Load tests are automatically stopped if the application endpoints or Azure components start throttling requests.
-Azure Load Testing will automatically stop an automated load test in response to specific error conditions. You can also use the AutoStop listener in your Apache JMeter script. Automatically stopping safeguards you against failing tests further incurring costs, for example, because of an incorrectly configured endpoint URL.
+Data stored in your Azure Load Testing resource is automatically encrypted with keys managed by Microsoft (service-managed keys). This data includes, for example, your Apache JMeter script.
-You can trigger Azure Load Testing from Azure Pipelines or GitHub Actions workflows.
+
+> [!NOTE]
+> The overview image shows how Azure Load Testing uses Azure Monitor to capture metrics for app components. Learn more about the [supported Azure resource types](./resource-supported-azure-resource-types.md).
## Next steps
logic-apps Azure Arc Enabled Logic Apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/azure-arc-enabled-logic-apps-overview.md
Title: Overview - Azure Arc-enabled Logic Apps
description: Learn about single-tenant Logic Apps workflows that can run anywhere that Kubernetes can run. ms.suite: integration-+ Previously updated : 05/25/2021
-#Customer intent: As a developer, I want to learn about automated Logic Apps workflows that can run anywhere that Kubernetes can run.
Last updated : 04/20/2022
+#Customer intent: As a developer, I want to learn about automated Azure Arc-enabled logic app workflows that can run anywhere that Kubernetes can run.
# What is Azure Arc-enabled Logic Apps? (Preview)
This table provides a high-level comparison between the capabilities in the curr
**Single-tenant Logic Apps (Standard)** :::column-end::: :::column:::
- **Standalone containers**
+ **Standalone containers** <br><br>**Note**: Unsupported for workflows in production environments. For fully supported containers, [create Azure Arc-enabled Logic Apps workflows](azure-arc-enabled-logic-apps-create-deploy-workflows.md) instead.
:::column-end::: :::column::: **Azure Arc**
logic-apps Create Single Tenant Workflows Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-azure-portal.md
In this example, the workflow runs when the Request trigger receives an inbound
1. After the details pane opens, on the **Parameters** tab, find the **HTTP POST URL** property. To copy the generated URL, select the **Copy Url** (copy file icon), and save the URL somewhere else for now. The URL follows this format:
- `http://<logic-app-name>.azurewebsites.net:443/api/<workflow-name>/triggers/manual/invoke?api-version=2020-05-01w&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=<shared-access-signature>`
+ `http://<logic-app-name>.azurewebsites.net:443/api/<workflow-name>/triggers/manual/invoke?api-version=2020-05-01&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=<shared-access-signature>`
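The query string on that URL carries the shared access signature (SAS) parameters that authorize the call. A quick way to inspect them, using only Python's standard library (the logic app and workflow names below are placeholders, not real endpoints):

```python
from urllib.parse import urlsplit, parse_qs

# Placeholder callback URL in the format shown above.
url = (
    "http://my-logic-app.azurewebsites.net:443/api/my-workflow"
    "/triggers/manual/invoke?api-version=2020-05-01"
    "&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=<shared-access-signature>"
)

# parse_qs decodes the percent-encoded values for you.
params = parse_qs(urlsplit(url).query)
print(params["api-version"][0])  # 2020-05-01
print(params["sp"][0])           # /triggers/manual/run (sp = permitted path)
print(params["sv"][0])           # 1.0 (sv = signature version)
```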
![Screenshot that shows the designer with the Request trigger and endpoint URL in the "HTTP POST URL" property.](./media/create-single-tenant-workflows-azure-portal/find-request-trigger-url.png)
For a stateful workflow, after each workflow run, you can view the run history,
> If the most recent run status doesn't appear, on the **Overview** pane toolbar, select **Refresh**. > No run happens for a trigger that's skipped due to unmet criteria or finding no data.
+ The following table describes the possible final statuses that a workflow run can have, as shown in the portal:
+
| Run status | Description | ||-| | **Aborted** | The run stopped or didn't finish due to external problems, for example, a system outage or lapsed Azure subscription. |
- | **Canceled** | The run was triggered and started but received a cancel request. |
+ | **Cancelled** | The run was triggered and started but received a cancel request. |
| **Failed** | At least one action in the run failed. No subsequent actions in the workflow were set up to handle the failure. | | **Running** | The run was triggered and is in progress, but this status can also appear for a run that is throttled due to [action limits](logic-apps-limits-and-config.md) or the [current pricing plan](https://azure.microsoft.com/pricing/details/logic-apps/). <p><p>**Tip**: If you set up [diagnostics logging](monitor-logic-apps-log-analytics.md), you can get information about any throttle events that happen. | | **Succeeded** | The run succeeded. If any action failed, a subsequent action in the workflow handled that failure. |
For a stateful workflow, after each workflow run, you can view the run history,
![Screenshot that shows the run details view with the status for each step in the workflow.](./media/create-single-tenant-workflows-azure-portal/review-run-details.png)
- Here are the possible statuses that each step in the workflow can have:
+ The following table describes the possible statuses that a workflow action can have, as shown in the portal:
| Action status | Description | ||-| | **Aborted** | The action stopped or didn't finish due to external problems, for example, a system outage or lapsed Azure subscription. |
- | **Canceled** | The action was running but received a cancel request. |
+ | **Cancelled** | The action was running but received a cancel request. |
| **Failed** | The action failed. | | **Running** | The action is currently running. | | **Skipped** | The action was skipped because its `runAfter` conditions weren't met, for example, a preceding action failed. Each action has a `runAfter` object where you can set up conditions that must be met before the current action can run. |
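In the underlying workflow definition (JSON), those conditions live in each action's `runAfter` object. A sketch with hypothetical action names — here, a follow-up action that runs whether the preceding action succeeded or failed:

```json
"Notify_team": {
  "type": "Http",
  "inputs": { "method": "POST", "uri": "https://example.com/alerts" },
  "runAfter": {
    "Process_order": [ "Succeeded", "Failed" ]
  }
}
```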
logic-apps Create Single Tenant Workflows Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-visual-studio-code.md
To test your logic app, follow these steps to start a debugging session, and fin
![Screenshot that shows the workflow's overview page with run status and history](./media/create-single-tenant-workflows-visual-studio-code/post-trigger-call.png)
+ The following table describes the possible final statuses that a workflow run can have, as shown in Visual Studio Code:
+ | Run status | Description | ||-| | **Aborted** | The run stopped or didn't finish due to external problems, for example, a system outage or lapsed Azure subscription. |
- | **Canceled** | The run was triggered and started but received a cancellation request. |
+ | **Cancelled** | The run was triggered and started but received a cancellation request. |
| **Failed** | At least one action in the run failed. No subsequent actions in the workflow were set up to handle the failure. | | **Running** | The run was triggered and is in progress, but this status can also appear for a run that is throttled due to [action limits](logic-apps-limits-and-config.md) or the [current pricing plan](https://azure.microsoft.com/pricing/details/logic-apps/). <p><p>**Tip**: If you set up [diagnostics logging](monitor-logic-apps-log-analytics.md), you can get information about any throttle events that happen. | | **Succeeded** | The run succeeded. If any action failed, a subsequent action in the workflow handled that failure. |
To test your logic app, follow these steps to start a debugging session, and fin
> from a longer trigger name or action name that causes the underlying Uniform Resource Identifier (URI) to exceed > the default character limit. For more information, see ["400 Bad Request"](#400-bad-request).
- Here are the possible statuses that each step in the workflow can have:
+ The following table describes the possible statuses that a workflow action can have, as shown in Visual Studio Code:
| Action status | Description | ||-| | **Aborted** | The action stopped or didn't finish due to external problems, for example, a system outage or lapsed Azure subscription. |
- | **Canceled** | The action was running but received a request to cancel. |
+ | **Cancelled** | The action was running but received a request to cancel. |
| **Failed** | The action failed. | | **Running** | The action is currently running. | | **Skipped** | The action was skipped because the immediately preceding action failed. An action has a `runAfter` condition that requires that the preceding action finishes successfully before the current action can run. |
machine-learning Concept Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-customer-managed-keys.md
When you __use a customer-managed key__, these resources are _in your Azure subs
These Microsoft-managed resources are located in a new Azure resource group that's created in your subscription. This group is in addition to the resource group for your workspace. This resource group will contain the Microsoft-managed resources that your key is used with. The resource group will be named using the formula of `<Azure Machine Learning workspace resource group name><GUID>`. > [!TIP]
-> * The [__Request Units__](/azure/cosmos-db/request-units) for the Azure Cosmos DB automatically scale as needed.
+> * The [__Request Units__](../cosmos-db/request-units.md) for the Azure Cosmos DB automatically scale as needed.
> * If your Azure Machine Learning workspace uses a private endpoint, this resource group will also contain a Microsoft-managed Azure Virtual Network. This VNet is used to secure communications between the managed services and the workspace. You __cannot provide your own VNet for use with the Microsoft-managed resources__. You also __cannot modify the virtual network__. For example, you cannot change the IP address range that it uses. > [!IMPORTANT]
Azure Machine Learning uses compute resources to train and deploy machine learni
| Compute | Encryption | | -- | -- | | Azure Container Instance | Data is encrypted by a Microsoft-managed key or a customer-managed key.</br>For more information, see [Encrypt data with a customer-managed key](../container-instances/container-instances-encrypt-data.md). |
-| Azure Kubernetes Service | Data is encrypted by a Microsoft-managed key or a customer-managed key.</br>For more information, see [Bring your own keys with Azure disks in Azure Kubernetes Services](/azure/aks/azure-disk-customer-managed-keys). |
+| Azure Kubernetes Service | Data is encrypted by a Microsoft-managed key or a customer-managed key.</br>For more information, see [Bring your own keys with Azure disks in Azure Kubernetes Services](../aks/azure-disk-customer-managed-keys.md). |
| Azure Machine Learning compute instance | Local scratch disk is encrypted if the `hbi_workspace` flag is enabled for the workspace. | | Azure Machine Learning compute cluster | OS disk encrypted in Azure Storage with Microsoft-managed keys. Temporary disk is encrypted if the `hbi_workspace` flag is enabled for the workspace. |
machine-learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/release-notes.md
Main changes:
- Azure CLI to version 2.33.1 - Fixed jupyterhub access issue using public ip address - Redesign of Conda environments - we're continuing with alignment and refining the Conda environments so we created:
- - `azureml_py38`: environment based on Python 3.8 with preinstalled [AzureML SDK](/python/api/overview/azure/ml/?view=azure-ml-py&preserve-view=true) containing also [AutoML](/azure/machine-learning/concept-automated-ml) environment
+   - `azureml_py38`: environment based on Python 3.8 with the [AzureML SDK](/python/api/overview/azure/ml/?view=azure-ml-py&preserve-view=true) preinstalled, which also contains the [AutoML](../concept-automated-ml.md) environment
- `azureml_py38_PT_TF`: additional azureml_py38 environment, preinstalled with latest TensorFlow and PyTorch - `py38_default`: default system environment based on Python 3.8 - We have removed `azureml_py36_tensorflow`, `azureml_py36_pytorch`, `py38_tensorflow` and `py38_pytorch` environments.
Main changes:
- Further `Log4j` vulnerability mitigation - although not used, we moved all `log4j` to version v2, we have removed old log4j jars1.0 and moved `log4j` version 2.0 jars. - Azure CLI to version 2.33.1 - Redesign of Conda environments - we're continuing with alignment and refining the Conda environments so we created:
- - `azureml_py38`: environment based on Python 3.8 with preinstalled [AzureML SDK](/python/api/overview/azure/ml/?view=azure-ml-py&preserve-view=true) containing also [AutoML](/azure/machine-learning/concept-automated-ml) environment
+   - `azureml_py38`: environment based on Python 3.8 with the [AzureML SDK](/python/api/overview/azure/ml/?view=azure-ml-py&preserve-view=true) preinstalled, which also contains the [AutoML](../concept-automated-ml.md) environment
- `azureml_py38_PT_TF`: complementary environment to `azureml_py38`, preinstalled with the latest TensorFlow and PyTorch - `py38_default`: default system environment based on Python 3.8 - we removed the `azureml_py36_tensorflow`, `azureml_py36_pytorch`, `py38_tensorflow` and `py38_pytorch` environments.
Data Science Virtual Machine images for [Ubuntu 18.04](https://azuremarketplace.
### Default Browser for Windows updated
-Earlier, the default browser was set to Internet Explorer. Users are now prompted to choose a default browser when they first sign in.
+Earlier, the default browser was set to Internet Explorer. Users are now prompted to choose a default browser when they first sign in.
machine-learning How To Add Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-add-users.md
Send the following to your labelers, after filling in your workspace and project
1. Follow the steps on the web page after you accept. Don't worry if at the end you're on a page that says you don't have any apps. 1. Open [Azure Machine Learning studio](https://ml.azure.com). 1. Use the dropdown to select the workspace **\<workspace-name\>**.
-1. Select the project **\<project-name\>**.
-1. Select **Start labeling** at the bottom of the page.
+1. Select the **Label data** tool for **\<project-name\>**.
+ :::image type="content" source="media/how-to-add-users/label-data.png" alt-text="Screenshot showing the label data tool.":::
1. For more information about how to label data, see [Labeling images and text documents](how-to-label-data.md). ## Next steps
machine-learning How To Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-assign-roles.md
The following table is a summary of Azure Machine Learning activities and the pe
1: If you receive a failure when trying to create a workspace for the first time, make sure that your role allows `Microsoft.MachineLearningServices/register/action`. This action allows you to register the Azure Machine Learning resource provider with your Azure subscription.
-2: When attaching an AKS cluster, you also need to the [Azure Kubernetes Service Cluster Admin Role](/azure/role-based-access-control/built-in-roles#azure-kubernetes-service-cluster-admin-role) on the cluster.
+2: When attaching an AKS cluster, you also need the [Azure Kubernetes Service Cluster Admin Role](../role-based-access-control/built-in-roles.md#azure-kubernetes-service-cluster-admin-role) on the cluster.
### Create a workspace using a customer-managed key When using a customer-managed key (CMK), an Azure Key Vault is used to store the key. The user or service principal used to create the workspace must have owner or contributor access to the key vault.
-Within the key vault, the user or service principal must have create, get, delete, and purge access to the key through a key vault access policy. For more information, see [Azure Key Vault security](/azure/key-vault/general/security-features#controlling-access-to-key-vault-data).
+Within the key vault, the user or service principal must have create, get, delete, and purge access to the key through a key vault access policy. For more information, see [Azure Key Vault security](../key-vault/general/security-features.md#controlling-access-to-key-vault-data).
### User-assigned managed identity with Azure ML compute cluster
Here are a few things to be aware of while you use Azure role-based access contr
- [Enterprise security overview](concept-enterprise-security.md) - [Virtual network isolation and privacy overview](how-to-network-security-overview.md) - [Tutorial: Train and deploy a model](tutorial-train-deploy-notebook.md)-- [Resource provider operations](../role-based-access-control/resource-provider-operations.md#microsoftmachinelearningservices)
+- [Resource provider operations](../role-based-access-control/resource-provider-operations.md#microsoftmachinelearningservices)
machine-learning How To Create Attach Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-kubernetes.md
Azure Machine Learning can deploy trained machine learning models to Azure Kuber
The AML control plane does not talk to this Public IP. It talks to the AKS control plane for deployments. -- To attach an AKS cluster, the service principal/user performing the operation must be assigned the __Owner or contributor__ Azure role-based access control (Azure RBAC) role on the Azure resource group that contains the cluster. The service principal/user must also be assigned [Azure Kubernetes Service Cluster Admin Role](/azure/role-based-access-control/built-in-roles#azure-kubernetes-service-cluster-admin-role) on the cluster.
+- To attach an AKS cluster, the service principal/user performing the operation must be assigned the __Owner or contributor__ Azure role-based access control (Azure RBAC) role on the Azure resource group that contains the cluster. The service principal/user must also be assigned the [Azure Kubernetes Service Cluster Admin Role](../role-based-access-control/built-in-roles.md#azure-kubernetes-service-cluster-admin-role) on the cluster.
- If you **attach** an AKS cluster, which has an [Authorized IP range enabled to access the API server](../aks/api-server-authorized-ip-ranges.md), enable the AML control plane IP ranges for the AKS cluster. The AML control plane is deployed across paired regions and deploys inference pods on the AKS cluster. Without access to the API server, the inference pods cannot be deployed. Use the [IP ranges](https://www.microsoft.com/download/confirmation.aspx?id=56519) for both the [paired regions](../availability-zones/cross-region-replication-azure.md) when enabling the IP ranges in an AKS cluster.
To resolve this problem, create/attach the cluster by using the `load_balancer_t
* [Use Azure RBAC for Kubernetes authorization](../aks/manage-azure-rbac.md) * [How and where to deploy a model](how-to-deploy-and-where.md)
-* [Deploy a model to an Azure Kubernetes Service cluster](how-to-deploy-azure-kubernetes-service.md)
+* [Deploy a model to an Azure Kubernetes Service cluster](how-to-deploy-azure-kubernetes-service.md)
machine-learning How To Setup Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-customer-managed-keys.md
In the [customer-managed keys concepts article](concept-customer-managed-keys.md
## Create Azure Key Vault
-To create the key vault, see [Create a key vault](/azure/key-vault/general/quick-create-portal). When creating Azure Key Vault, you must enable __soft delete__ and __purge protection__.
+To create the key vault, see [Create a key vault](../key-vault/general/quick-create-portal.md). When creating Azure Key Vault, you must enable __soft delete__ and __purge protection__.
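If you create the vault from an ARM template instead of the portal, those two settings map to the following vault properties (fragment only; other required properties such as `sku` and `tenantId` are omitted):

```json
{
  "type": "Microsoft.KeyVault/vaults",
  "apiVersion": "2021-10-01",
  "properties": {
    "enableSoftDelete": true,
    "enablePurgeProtection": true
  }
}
```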
### Create a key
To create the key vault, see [Create a key vault](/azure/key-vault/general/quick
> If you plan to use a user-assigned managed identity for your workspace, the managed identity must also be assigned these roles and access policies. > > For more information, see the following articles:
-> * [Provide access to key vault keys, certificates, and secrets](/azure/key-vault/general/rbac-guide)
-> * [Assign a key vault access policy](/azure/key-vault/general/assign-access-policy)
+> * [Provide access to key vault keys, certificates, and secrets](../key-vault/general/rbac-guide.md)
+> * [Assign a key vault access policy](../key-vault/general/assign-access-policy.md)
> * [Use managed identities with Azure Machine Learning](how-to-use-managed-identities.md) 1. From the [Azure portal](https://portal.azure.com), select the key vault instance. Then select __Keys__ from the left.
For examples of creating the workspace with a customer-managed key, see the foll
Once the workspace has been created, you'll notice that an Azure resource group is created in your subscription. This group is in addition to the resource group for your workspace. This resource group will contain the Microsoft-managed resources that your key is used with. The resource group will be named using the formula `<Azure Machine Learning workspace resource group name><GUID>`. It will contain an Azure Cosmos DB instance, an Azure Storage account, and Azure Cognitive Search. > [!TIP]
-> * The [__Request Units__](/azure/cosmos-db/request-units) for the Azure Cosmos DB instance automatically scale as needed.
+> * The [__Request Units__](../cosmos-db/request-units.md) for the Azure Cosmos DB instance automatically scale as needed.
> * If your Azure Machine Learning workspace uses a private endpoint, this resource group will also contain a Microsoft-managed Azure Virtual Network. This VNet is used to secure communications between the managed services and the workspace. You __cannot provide your own VNet for use with the Microsoft-managed resources__. You also __cannot modify the virtual network__. For example, you cannot change the IP address range that it uses. > [!IMPORTANT]
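The Microsoft-managed resource group described above follows the naming formula `<Azure Machine Learning workspace resource group name><GUID>`. A small, hypothetical sketch of recognizing such a name (the sample names are invented for illustration):

```python
# Illustrative sketch: recognize the Microsoft-managed resource group created
# alongside a workspace, named `<workspace RG name><GUID>`. Names are made up.
import uuid

def is_managed_rg(name: str, workspace_rg: str) -> bool:
    """True if `name` is the workspace RG name followed by a GUID."""
    if not name.startswith(workspace_rg):
        return False
    suffix = name[len(workspace_rg):]
    try:
        uuid.UUID(suffix)  # raises ValueError if the suffix isn't a GUID
        return True
    except ValueError:
        return False

print(is_managed_rg("ml-rg1b9e9d0a-2c51-4b44-9a83-6f8f0c2d1e3f", "ml-rg"))  # True
```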
machine-learning Quickstart Create Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/quickstart-create-resources.md
Create a compute cluster that will autoscale between zero and four nodes:
1. Still in the **Compute** section, in the top tab, select **Compute clusters**. 1. Select **+New** to create a new compute cluster.
-1. Keep all the defaults on the first page, select **Next**.
+1. Keep all the defaults on the first page, select **Next**. If you don't see any available compute, you'll need to request a quota increase. Learn more about [managing and increasing quotas](how-to-manage-quotas.md).
1. Name the cluster **cpu-cluster**. If this name already exists, add your initials to the name to make it unique. 1. Leave the **Minimum number of nodes** at 0. 1. Change the **Maximum number of nodes** to 4 if possible. Depending on your settings, you may have a smaller limit.
managed-grafana How To Monitor Managed Grafana Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-monitor-managed-grafana-workspace.md
You can create up to five different diagnostic settings to send different logs t
| Log Analytics workspace | Send data to a Log Analytics workspace | Select the **subscription** containing an existing Log Analytics workspace, then select the **Log Analytics workspace** | | Storage account | Archive data to a storage account | Select the **subscription** containing an existing storage account, then select the **storage account**. Only storage accounts in the same region as the Grafana workspace are displayed in the dropdown menu. | | Event hub | Stream to an event hub | Select a **subscription** and an existing Azure Event Hub **namespace**. Optionally also choose an existing **event hub**. Lastly, choose an **event hub policy** from the list. Only event hubs in the same region as the Grafana workspace are displayed in the dropdown menu. |
- | Partner solution | Send to a partner solution | Select a **subscription** and a **destination**. For more information about available destinations, go to [partner destinations](/azure/azure-monitor/partners). |
+ | Partner solution | Send to a partner solution | Select a **subscription** and a **destination**. For more information about available destinations, go to [partner destinations](../azure-monitor/partners.md). |
:::image type="content" source="media/managed-grafana-monitoring-settings.png" alt-text="Screenshot of the Azure platform. Diagnostic settings configuration.":::
Now that you've configured your diagnostic settings, Azure will stream all new e
> [!div class="nextstepaction"] > [Grafana UI](./grafana-app-ui.md)
-> [How to share an Azure Managed Grafana workspace](./how-to-share-grafana-workspace.md)
+> [How to share an Azure Managed Grafana workspace](./how-to-share-grafana-workspace.md)
managed-grafana Quickstart Managed Grafana Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/quickstart-managed-grafana-portal.md
Previously updated : 03/31/2022 Last updated : 04/18/2022 # Quickstart: Create a workspace in Azure Managed Grafana Preview using the Azure portal
An Azure account with an active subscription. [Create an account for free](https
1. Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
-1. In the upper-left corner of the home page, select **Create a resource**. In the **Search services and Marketplace** box, enter *Grafana* and select **Enter**.
+1. In the upper-left corner of the home page, select **Create a resource**. In the **Search services and marketplace** box, enter *Managed Grafana* and select **Azure Managed Grafana**.
-1. Select **Grafana Workspaces** from the search results, and then **Create**.
+ :::image type="content" source="media/managed-grafana-quickstart-marketplace.png" alt-text="Screenshot of the Azure platform. Find Azure Managed Grafana in the marketplace." lightbox="media/managed-grafana-quickstart-marketplace-expanded.png":::
- :::image type="content" source="media/managed-grafana-quickstart-portal-grafana-create.png" alt-text="Screenshot of the Azure portal. Create Grafana workspace.":::
+1. Select **Create**.
1. In the Create Grafana Workspace pane, enter the following settings.
marketplace Azure App Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-app-properties.md
Previously updated : 06/01/2021 Last updated : 04/08/2022 # Configure Azure application offer properties
On the **Properties** page, you'll define the categories applicable to your of
Under **Categories**, select the **Categories** link and then choose at least one and up to two categories for grouping your offer into the appropriate commercial marketplace search areas. Select up to two subcategories for each primary and secondary category. If no subcategory is applicable to your offer, select **Not applicable**.
+If you're working with a Microsoft product engineering team, select an option from the list to enable product-specific certification and a custom Azure portal experience, such as Microsoft Sentinel Solutions.
+ ## Provide terms and conditions Under **Legal**, provide terms and conditions for your offer. You have two options:
marketplace Isv Customer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/isv-customer.md
Use this page to define private offer terms, notification contacts, and pricing
- **Customer Information** – Specify the billing account for the customer receiving this private offer. This will only be available to the configured customer billing account, and the customer will need to be an owner, contributor, or signatory on the billing account to accept the offer. > [!NOTE]
- > Customers can find their billing account in the [Azure portal ](https://aka.ms/PrivateOfferAzurePortal) under **Cost Management + Billing** > **Properties** > **ID**. A user in the customer organization should have access to the billing account to see the ID in Azure Portal. See [Billing account scopes in the Azure portal](/azure/cost-management-billing/manage/view-all-accounts).
+ > Customers can find their billing account in the [Azure portal](https://aka.ms/PrivateOfferAzurePortal) under **Cost Management + Billing** > **Properties** > **ID**. A user in the customer organization should have access to the billing account to see the ID in the Azure portal. See [Billing account scopes in the Azure portal](../cost-management-billing/manage/view-all-accounts.md).
:::image type="content" source="media/isv-customer/customer-properties.png" alt-text="Shows the offer Properties tab in Partner Center.":::
Use this page to define private offer terms, notification contacts, and pricing
- **Terms and conditions** – Optionally, upload a PDF with terms and conditions your customer must accept as part of the private offer. > [!NOTE]
- > Your terms and conditions must adhere to Microsoft supported billing models, offer types, and the [Microsoft Publisher Agreement](https://aka.ms/PrivateOfferPublisherAgreement).
+ > Your terms and conditions must adhere to Microsoft supported billing models, offer types, and the [Microsoft Publisher Agreement](/legal/marketplace/msft-publisher-agreement).
- **Notification Contacts** – Provide up to five emails in your organization as **Notification Contacts** to receive email updates on the status of your private offer. These emails are sent when your offer status changes to **Pending acceptance**, **Accepted**, or **Expired**. You must also provide a **Prepared by** email address, which will be displayed to the customer in the private offer listing in the Azure portal.
The payout amount and agency fee that Microsoft charges are based on the private
## Next steps -- [Frequently Asked Questions](isv-customer-faq.yml) about configuring ISV to customer private offers
+- [Frequently Asked Questions](isv-customer-faq.yml) about configuring ISV to customer private offers
marketplace Private Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/private-plans.md
Private plans let publishers offer private, customized solutions to targeted cus
Private plans let publishers take advantage of the scale and global availability of a public marketplace, with the flexibility and control needed to negotiate and deliver custom deals and configurations. Enterprises can now buy and sell in ways they expect. >[!Note]
->Private plans are not supported with subscriptions established through a reseller of the Cloud Solution Provider (CSP) program. For details, see [ISV to CSP partner private offers](/azure/marketplace/isv-csp-reseller).
+>Private plans are not supported with subscriptions established through a reseller of the Cloud Solution Provider (CSP) program. For details, see [ISV to CSP partner private offers](./isv-csp-reseller.md).
## Create private plans
Private plans will also appear in search results and can be deployed via command
## Next steps To start using private offers, follow the steps in the [Private SKUs and Plans]() guide.
->
+>
marketplace Test Saas Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/test-saas-plan.md
This article explains how to test a software as a service (SaaS) offer in previe
Here are some general guidelines to be aware of when you're testing your offer. -- If your SaaS offer supports metered billing using the commercial marketplace metering service, review and follow the testing best practices detailed in [Marketplace metered billing APIs](/azure/marketplace/partner-center-portal/saas-metered-billing).-- Review and follow the testing instructions in [Implementing a webhook on the SaaS service](/azure/marketplace/partner-center-portal/pc-saas-fulfillment-webhook#development-and-testing) to ensure your offer is successfully integrated with the APIs.
+- If your SaaS offer supports metered billing using the commercial marketplace metering service, review and follow the testing best practices detailed in [Marketplace metered billing APIs](./partner-center-portal/saas-metered-billing.md).
+- Review and follow the testing instructions in [Implementing a webhook on the SaaS service](./partner-center-portal/pc-saas-fulfillment-webhook.md#development-and-testing) to ensure your offer is successfully integrated with the APIs.
- If the Offer validation step resulted in warnings, a **View validation report** link appears on the **Offer overview** page. Be sure to review the report and address the issues before you select the **Go live** button. Otherwise, certification will most likely fail and delay your offer from going Live. - If you need to make changes after previewing and testing the offer, you can edit and resubmit to publish a new preview. For more information, see [Update an existing offer in the commercial marketplace](update-existing-offer.md).
For more details about sending metered usage events, see [Marketplace metered bi
When you complete your tests, you can do the following: - [Unsubscribe from and deactivate your test plan](test-saas-unsubscribe.md).-- [Create a plan](create-new-saas-offer-plans.md) in your production offer with the prices you want to charge customers and [publish the production offer live](test-publish-saas-offer.md).
+- [Create a plan](create-new-saas-offer-plans.md) in your production offer with the prices you want to charge customers and [publish the production offer live](test-publish-saas-offer.md).
mysql Concepts Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-server-parameters.md
If you receive an error similar to `Row size too large (> 8126)`, consider turni
You can set this parameter at a session level, by using `init_connect`. To set `innodb_strict_mode` at a session level, refer to [setting parameter not listed](./howto-server-parameters.md#setting-parameters-not-listed). > [!NOTE]
-> If you have a read replica server, setting `innodb_strict_mode` to `OFF` at the session-level on a source server will break the replication. We suggest keeping the parameter set to `OFF` if you have read replicas.
+> If you have a read replica server, setting `innodb_strict_mode` to `OFF` at the session-level on a source server will break the replication. We suggest keeping the parameter set to `ON` if you have read replicas.
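The note above allows a session-level override: the server-level `innodb_strict_mode` stays `ON` while an individual connection relaxes it for itself. As a hypothetical illustration, this Python helper only builds the session-scoped statement text (executing it would need a live MySQL connection, which isn't shown):

```python
# Illustrative sketch: build the session-scoped statement for
# innodb_strict_mode. This never touches the global/server-level value,
# which should stay ON when read replicas exist.
def session_statement(strict: bool) -> str:
    """Return the SQL that changes innodb_strict_mode for this session only."""
    return f"SET SESSION innodb_strict_mode = {'ON' if strict else 'OFF'};"

print(session_statement(False))  # SET SESSION innodb_strict_mode = OFF;
```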
### sort_buffer_size
mysql Concepts Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-data-in-replication.md
For migration scenarios, use the [Azure Database Migration Service](https://azur
The [*mysql system database*](https://dev.mysql.com/doc/refman/5.7/en/system-schema.html) on the source server isn't replicated. In addition, changes to accounts and permissions on the source server aren't replicated. If you create an account on the source server and this account needs to access the replica server, manually create the same account on the replica server. To understand what tables are contained in the system database, see the [MySQL manual](https://dev.mysql.com/doc/refman/5.7/en/system-schema.html). ### Data-in replication not supported on HA enabled servers
-Configuring Data-in replication for zone-redundant high availability servers isn't supported. On servers were HA is enabled, the stored procedures for replication `mysql.az_replication_*` won't be available. You can't use HA servers as source server when you use binary log files position-based replication.
+Configuring Data-in replication isn't supported on servers that have the high availability (HA) option enabled. On HA-enabled servers, the stored procedures for replication `mysql.az_replication_*` won't be available.
+> [!Tip]
+>If you're using an HA-enabled server as the source server, MySQL native binary log (binlog) file position-based replication fails when a failover happens on the server. If the replica server supports GTID-based replication, configure GTID-based replication instead.
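GTID-based replication survives a failover because the replica isn't pinned to a specific binlog file and position. As a hedged sketch, the standard MySQL 8.0 statement for switching a replica to auto-positioning is built below; the host and user are placeholders, and on managed servers the `mysql.az_replication_*` stored procedures may apply instead of running this directly:

```python
# Hedged sketch: compose the standard MySQL 8.0 statement that switches a
# replica to GTID auto-positioning. Host/user are placeholders; this only
# builds the string and does not connect to any server.
def gtid_replication_sql(host: str, user: str) -> str:
    return (
        "CHANGE REPLICATION SOURCE TO "
        f"SOURCE_HOST='{host}', SOURCE_USER='{user}', "
        "SOURCE_AUTO_POSITION=1;"
    )

print(gtid_replication_sql("source.mysql.example.com", "repl_user"))
```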
### Filtering
mysql Concepts Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-server-parameters.md
If you receive an error similar to "Row size too large (> 8126)", you may want t
This parameter can be set at a session level using `init_connect`. To set **innodb_strict_mode** at session level, refer to [setting parameter not listed](./how-to-configure-server-parameters-portal.md#setting-non-modifiable-server-parameters). > [!NOTE]
-> If you have a read replica server, setting **innodb_strict_mode** to OFF at the session-level on a source server will break the replication. We suggest keeping the parameter set to OFF if you have read replicas.
+> If you have a read replica server, setting **innodb_strict_mode** to OFF at the session-level on a source server will break the replication. We suggest keeping the parameter set to ON if you have read replicas.
### time_zone
network-watcher Network Watcher Monitoring Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-monitoring-overview.md
The information is helpful when planning future resource deployments.
### Analyze traffic to or from a network security group
-Network security groups (NSG) allow or deny inbound or outbound traffic to a network interface in a VM. The *NSG flow log* capability allows you to log the source and destination IP address, port, protocol, and whether traffic was allowed or denied by an NSG. You can analyze logs using a variety of tools, such as PowerBI and the *traffic analytics* capability. Traffic analytics provides rich visualizations of data written to NSG flow logs. The following picture shows some of the information and visualizations that traffic analytics presents from NSG flow log data:
+Network security groups (NSG) allow or deny inbound or outbound traffic to a network interface in a VM. The *NSG flow log* capability allows you to log the source and destination IP address, port, protocol, and whether traffic was allowed or denied by an NSG. You can analyze logs using a variety of tools, such as Power BI and the *traffic analytics* capability. Traffic analytics provides rich visualizations of data written to NSG flow logs. The following picture shows some of the information and visualizations that traffic analytics presents from NSG flow log data:
![Traffic analytics](./media/network-watcher-monitoring-overview/traffic-analytics.png)
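Each record in an NSG flow log carries the flow tuple described above: source and destination IP address, port, protocol, direction, and the allow/deny decision. A minimal, illustrative parser for a single version-1 flow tuple (the sample line is synthetic):

```python
# Illustrative sketch: parse one NSG flow log (version 1) flow tuple into a
# dict. Field order follows the documented tuple layout; the sample is made up.
def parse_flow_tuple(t: str) -> dict:
    ts, src, dst, sport, dport, proto, direction, decision = t.split(",")
    return {
        "timestamp": ts, "src_ip": src, "dst_ip": dst,
        "src_port": sport, "dst_port": dport,
        "protocol": {"T": "TCP", "U": "UDP"}[proto],
        "direction": {"I": "Inbound", "O": "Outbound"}[direction],
        "decision": {"A": "Allowed", "D": "Denied"}[decision],
    }

flow = parse_flow_tuple("1542110377,10.0.0.4,13.67.143.118,44931,443,T,O,A")
print(flow["protocol"], flow["direction"], flow["decision"])  # TCP Outbound Allowed
```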
When you create or update a virtual network in your subscription, Network Watche
## Next steps
-You now have an overview of Azure Network Watcher. To get started using Network Watcher, diagnose a common communication problem to and from a virtual machine using IP flow verify. To learn how, see the [Diagnose a virtual machine network traffic filter problem](diagnose-vm-network-traffic-filtering-problem.md) quickstart.
+* You now have an overview of Azure Network Watcher. To get started using Network Watcher, diagnose a common communication problem to and from a virtual machine using IP flow verify. To learn how, see the [Diagnose a virtual machine network traffic filter problem](diagnose-vm-network-traffic-filtering-problem.md) quickstart.
+
+* [Learn module: Introduction to Azure Network Watcher](/learn/modules/intro-to-azure-network-watcher).
notification-hubs Encrypt At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/encrypt-at-rest.md
Notification Hubs encryption protects customer's data to help you to meet organi
## Data encryption at rest in Azure
-Encryption at rest provides data protection for stored data (at rest). For detailed information about data encryption at rest in Microsoft Azure, see [Azure Data Encryption-at-Rest](/azure/security/fundamentals/encryption-atrest).
+Encryption at rest provides data protection for stored data (at rest). For detailed information about data encryption at rest in Microsoft Azure, see [Azure Data Encryption-at-Rest](../security/fundamentals/encryption-atrest.md).
## About Azure Notification Hubs encryption
default, and there is no need for modifications to your code or applications in
## Next steps - [Transparent data encryption (TDE)](/sql/relational-databases/security/encryption/transparent-data-encryption)-- [Azure Data Encryption-at-Rest](/azure/security/fundamentals/encryption-atrest)-- [What is Azure Key Vault?](/azure/key-vault/general/overview)
+- [Azure Data Encryption-at-Rest](../security/fundamentals/encryption-atrest.md)
+- [What is Azure Key Vault?](../key-vault/general/overview.md)
openshift Howto Deploy Java Jboss Enterprise Application Platform App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-deploy-java-jboss-enterprise-application-platform-app.md
Before deploying the application on OpenShift, we are going to run it locally to
* Database name: `todos_db` * SA password: `Passw0rd!`
-To create the database, follow the steps in [Quickstart: Create an Azure SQL Database single database](/azure/azure-sql/database/single-database-create-quickstart?tabs=azure-portal), but use the following substitutions.
+To create the database, follow the steps in [Quickstart: Create an Azure SQL Database single database](../azure-sql/database/single-database-create-quickstart.md?tabs=azure-portal), but use the following substitutions.
* For **Database name** use `todos_db`. * For **Password** use `Passw0rd!`.
You can learn more from references used in this guide:
* [Red Hat JBoss Enterprise Application Platform](https://www.redhat.com/en/technologies/jboss-middleware/application-platform) * [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/) * [JBoss EAP Helm Charts](https://jbossas.github.io/eap-charts/)
-* [JBoss EAP Bootable JAR](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.4/html-single/using_jboss_eap_xp_3.0.0/index#the-bootable-jar_default)
+* [JBoss EAP Bootable JAR](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.4/html-single/using_jboss_eap_xp_3.0.0/index#the-bootable-jar_default)
orbital Space Partner Program Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/space-partner-program-overview.md
To join the program, we ask partners to commit to:
## Next steps -- [Sign up for MS Startups for access to credits and support](https://startups.microsoft.com/)
+- [Sign up for MS Startups for access to credits and support](https://partner.microsoft.com/?msclkid=0ea9c859bb5611ec801255d300e7c499)
- [Downlink data from satellites using Azure Orbital](overview.md) - [Analyze space data on Azure](/azure/architecture/example-scenario/data/geospatial-data-processing-analytics-azure) - [Drive insights with geospatial partners on Azure – ESRI and visualize with Power BI](https://azuremarketplace.microsoft.com/en/marketplace/apps/esri.arcgis-enterprise?tab=Overview) - [Use the Azure Software Radio Developer VM to jump start your software radio development](https://github.com/microsoft/azure-software-radio)-- [List your app on the Azure Marketplace](../marketplace/determine-your-listing-type.md#free-trial)
+- [List your app on the Azure Marketplace](../marketplace/determine-your-listing-type.md#free-trial)
purview Catalog Private Link Name Resolution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-private-link-name-resolution.md
The DNS resource records for Contoso-Purview, when resolved in the virtual netwo
### Use existing Azure Private DNS Zones
-During the deployment of Azure purview private endpoints, you can choose _Private DNS integration_ using existing Azure Private DNS zones. This is common case for organizations where private endpoint is used for other services in Azure. In this case, during the deployment of private endpoints, make sure you select the existing DNS zones instead of creating new ones.
+During the deployment of Microsoft Purview private endpoints, you can choose _Private DNS integration_ using existing Azure Private DNS zones. This is a common case for organizations where private endpoints are used for other services in Azure. In this case, during the deployment of private endpoints, make sure you select the existing DNS zones instead of creating new ones.
This scenario also applies if your organization uses a central or hub subscription for all Azure Private DNS Zones.
purview Concept Best Practices Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-best-practices-network.md
Last updated 03/04/2022+ # Microsoft Purview network architecture and best practices
Here are some best practices:
:::image type="content" source="media/concept-best-practices/network-self-hosted-runtime.png" alt-text="Screenshot that shows the connection flow between Microsoft Purview, a self-hosted runtime, and data sources."lightbox="media/concept-best-practices/network-self-hosted-runtime.png":::
- 1. A manual or automatic scan is triggered. Azure purview connects to Azure Key Vault to retrieve the credential to access a data source.
+ 1. A manual or automatic scan is triggered. Microsoft Purview connects to Azure Key Vault to retrieve the credential to access a data source.
2. The scan is initiated from the Microsoft Purview data map through a self-hosted integration runtime.
purview Concept Best Practices Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-best-practices-security.md
Microsoft Purview allows you to use any of the following options to extract meta
:::image type="content" source="media/concept-best-practices/security-self-hosted-runtime.png" alt-text="Screenshot that shows the connection flow between Microsoft Purview, a self-hosted runtime, and data sources."lightbox="media/concept-best-practices/security-self-hosted-runtime.png":::
- 1. A manual or automatic scan is triggered. Azure purview connects to Azure Key Vault to retrieve the credential to access a data source.
+ 1. A manual or automatic scan is triggered. Microsoft Purview connects to Azure Key Vault to retrieve the credential to access a data source.
2. The scan is initiated from the Microsoft Purview data map through a self-hosted integration runtime.
Microsoft Purview allows you to use any of the following options to extract meta
:::image type="content" source="media/concept-best-practices/security-self-hosted-runtime-on-premises.png" alt-text="Screenshot that shows the connection flow between Microsoft Purview, an on-premises self-hosted runtime, and data sources in on-premises network."lightbox="media/concept-best-practices/security-self-hosted-runtime-on-premises.png":::
- 1. A manual or automatic scan is triggered. Azure purview connects to Azure Key Vault to retrieve the credential to access a data source.
+ 1. A manual or automatic scan is triggered. Microsoft Purview connects to Azure Key Vault to retrieve the credential to access a data source.
2. The scan is initiated through the on-premises self-hosted integration runtime.
purview Concept Self Service Data Access Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-self-service-data-access-policy.md
With self-service data access workflow, data consumers can not only find data as
A default self-service data access workflow template is provided with every Microsoft Purview account. The default template can be amended to add more approvers and/or set the approver's email address. For more details, see [Create and enable self-service data access workflow](./how-to-workflow-self-service-data-access-hybrid.md).
-Whenever a data consumer requests access to a dataset, the notification is sent to the workflow approver(s). The approver(s) can view the request and approve it either from Azure purview portal or from within the email notification. When the request is approved, a policy is auto-generated and applied against the respective data source. Self-service data access policy gets auto-generated only if the data source is registered for **data use governance**. The pre-requisites mentioned within the [data use governance](./how-to-enable-data-use-governance.md#prerequisites) have to be satisfied.
+Whenever a data consumer requests access to a dataset, a notification is sent to the workflow approver(s). The approver(s) can view the request and approve it either from the Microsoft Purview portal or from within the email notification. When the request is approved, a policy is auto-generated and applied against the respective data source. A self-service data access policy is auto-generated only if the data source is registered for **data use governance**. The prerequisites mentioned in [data use governance](./how-to-enable-data-use-governance.md#prerequisites) must be satisfied.
## Next steps
purview How To Enable Data Use Governance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-enable-data-use-governance.md
[!INCLUDE [feature-in-preview](includes/feature-in-preview.md)]
-*Data use governance* (DUG) is an option (enabled/disabled) that gets displayed when registering a data source in Microsoft Purview. Its purpose is to make that data source available in the policy authoring experience of the Microsoft Purview studio. In other words, access policies can only be written on data sources that have been previously registered with the DUG toggle set to enable.
+*Data use governance* (DUG) is an option (enabled/disabled) in data source registration in Microsoft Purview. This option allows you to enable Microsoft Purview to manage data access for your resources.
+
+Currently, a data owner can enable DUG on a data source to make data access management available to Microsoft Purview through these methods:
+
+* [Data owner access policies](concept-data-owner-policies.md) - access policies created by data owners within Microsoft Purview to grant permissions to a data source.
+* [Self-service access policies](concept-self-service-data-access-policy.md) - access policies generated by Microsoft Purview after a [self-service access request](how-to-request-access.md) is approved.
+
+To be able to create any data policy on a resource, DUG must first be enabled on that resource. This article will explain how to enable DUG on your resources in Microsoft Purview.
+
+>[!IMPORTANT]
+>Because data use governance directly affects access to your data, it directly affects your data security. Review [**additional considerations**](#additional-considerations-related-to-data-use-governance) and [**best practices**](#data-use-governance-best-practices) below before enabling DUG in your environment.
## Prerequisites [!INCLUDE [Access policies generic configuration](./includes/access-policies-configuration-generic.md)]
purview How To Lineage Sql Server Integration Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-lineage-sql-server-integration-services.md
The current scope of support includes the lineage extraction from SSIS packages
On premises SSIS lineage extraction is not supported yet.
+Only source and destination are supported for Microsoft Purview SSIS lineage running from Data Factory's SSIS Execute Package activity. Transformations under SSIS are not yet supported.
+ ### Supported data stores | Data store | Supported |
Once Execute SSIS Package activity finishes the execution, you can check lineage
- [Lift and shift SQL Server Integration Services workloads to the cloud](/sql/integration-services/lift-shift/ssis-azure-lift-shift-ssis-packages-overview) - [Learn about Data lineage in Microsoft Purview](catalog-lineage-user-guide.md)-- [Link Azure Data Factory to push automated lineage](how-to-link-azure-data-factory.md)
+- [Link Azure Data Factory to push automated lineage](how-to-link-azure-data-factory.md)
purview How To Workflow Self Service Data Access Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-workflow-self-service-data-access-hybrid.md
This guide will take you through the creation and management of self-service dat
:::image type="content" source="./media/how-to-workflow-self-service-data-access-hybrid/required-fields-for-template-inline.png" alt-text="Screenshot showing the workflow canvas with the start and wait for an approval step, and the Create Task and wait for task completion steps highlighted, and the Assigned to textboxes highlighted within those steps." lightbox="./media/how-to-workflow-self-service-data-access-hybrid/required-fields-for-template-expanded.png"::: > [!NOTE]
- > Please configure the workflow to create self-service policies ONLY for sources supported by Azure purview's policy feature. To see what's supported by policy, check the [Data owner policies documentation](tutorial-data-owner-policies-storage.md).
+ > Please configure the workflow to create self-service policies ONLY for sources supported by Microsoft Purview's policy feature. To see what's supported by policy, check the [Data owner policies documentation](tutorial-data-owner-policies-storage.md).
1. You can also modify the template by adding more connectors to suit your organizational needs.
purview Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/overview.md
Microsoft Purview is a unified data governance service that helps you manage and
Microsoft Purview automates data discovery by providing data scanning and classification as a service for assets across your data estate. Metadata and descriptions of discovered data assets are integrated into a holistic map of your data estate. Atop this map, there are purpose-built apps that create environments for data discovery, access management, and insights about your data landscape. + |App |Description | |-|--| |[Data Map](#data-map) | Makes your data meaningful by graphing your data assets, and their relationships, across your data estate. The data map is used to discover data and manage access to that data. |
Microsoft Purview automates data discovery by providing data scanning and classi
Microsoft Purview Data Map provides the foundation for data discovery and effective data governance. Microsoft Purview Data Map is a cloud native PaaS service that captures metadata about enterprise data present in analytics and operation systems on-premises and cloud. Microsoft Purview Data Map is automatically kept up to date with built-in automated scanning and classification system. Business users can configure and use the Microsoft Purview Data Map through an intuitive UI and developers can programmatically interact with the Data Map using open-source Apache Atlas 2.0 APIs. Microsoft Purview Data Map powers the Microsoft Purview Data Catalog and Microsoft Purview data insights as unified experiences within the [Microsoft Purview Studio](https://web.purview.azure.com/resource/).+ For more information, see our [introduction to Data Map](concept-elastic-data-map.md). ## Data Catalog
purview Tutorial Atlas 2 2 Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-atlas-2-2-apis.md
+
+ Title: "How to use new APIs available with Atlas 2.2"
+description: This tutorial describes the new APIs available with Atlas 2.2 upgrade.
+++++ Last updated : 04/18/2021+
+# Customer intent: I can use the new APIs available with Atlas 2.2
++
+# Tutorial: Atlas 2.2 new functionality
+
+In this tutorial, you learn how to programmatically interact with the new Atlas 2.2 APIs in Microsoft Purview's Data Map.
+
+If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin.
+
+## Prerequisites
+
+* To get started, you must have an existing Microsoft Purview account. If you don't have a catalog, see the [quickstart for creating a Microsoft Purview account](create-catalog-portal.md).
+
+* To establish a bearer token and call any Data Plane APIs, see [the documentation about how to call REST APIs for Purview data planes](tutorial-using-rest-apis.md).
+
+## Business Metadata APIs
+
+Business metadata is a template containing multiple custom attributes (key-value pairs) that can be created globally and then applied across multiple typedefs.
+
+### Create business metadata with attributes
+
+You can send a POST request to the following endpoint:
+
+```
+POST {{endpoint}}/api/atlas/v2/types/typedefs
+```
+
+Sample JSON
+
+```json
+ {
+ "businessMetadataDefs": [
+ {
+ "category": "BUSINESS_METADATA",
+ "createdBy": "admin",
+ "updatedBy": "admin",
+ "version": 1,
+ "typeVersion": "1.1",
+ "name": "<Name of Business Metadata>",
+ "description": "",
+ "attributeDefs": [
+ {
+ "name": "<Attribute Name>",
+ "typeName": "string",
+ "isOptional": true,
+ "cardinality": "SINGLE",
+ "isUnique": false,
+ "isIndexable": true,
+ "options": {
+ "maxStrLength": "50",
+ "applicableEntityTypes": "[\"Referenceable\"]"
+ }
+ }
+ ]
+ }
+ ]
+}
+```
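As a sketch of how the request above could be sent, the following Python snippet builds the same payload with a hypothetical helper and shows (commented out) the POST call via the `requests` library; the metadata and attribute names here are illustrative:

```python
import json

def build_business_metadata_def(name, attr_name, max_len="50"):
    """Build a businessMetadataDefs payload for POST /api/atlas/v2/types/typedefs."""
    return {
        "businessMetadataDefs": [
            {
                "category": "BUSINESS_METADATA",
                "name": name,
                "description": "",
                "attributeDefs": [
                    {
                        "name": attr_name,
                        "typeName": "string",
                        "isOptional": True,
                        "cardinality": "SINGLE",
                        "isUnique": False,
                        "isIndexable": True,
                        "options": {
                            "maxStrLength": max_len,
                            # Note: applicableEntityTypes is a JSON-encoded
                            # string, not a JSON list.
                            "applicableEntityTypes": json.dumps(["Referenceable"]),
                        },
                    }
                ],
            }
        ]
    }

payload = build_business_metadata_def("OperationsMetadata", "expiryDate")
# To send it (requires the `requests` package, your endpoint, and a bearer token):
# requests.post(f"{endpoint}/api/atlas/v2/types/typedefs",
#               headers={"Authorization": f"Bearer {token}"}, json=payload)
print(json.dumps(payload, indent=2))
```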
+
+### Add or update an attribute in existing business metadata
+
+You can send a PUT request to the following endpoint:
+
+```
+PUT {{endpoint}}/api/atlas/v2/types/typedefs
+```
+
+Sample JSON
+
+```json
+ {
+ "businessMetadataDefs": [
+ {
+ "category": "BUSINESS_METADATA",
+ "createdBy": "admin",
+ "updatedBy": "admin",
+ "version": 1,
+ "typeVersion": "1.1",
+ "name": "<Name of Business Metadata>",
+ "description": "",
+ "attributeDefs": [
+ {
+ "name": "<Attribute Name>",
+ "typeName": "string",
+ "isOptional": true,
+ "cardinality": "SINGLE",
+ "isUnique": false,
+ "isIndexable": true,
+ "options": {
+ "maxStrLength": "500",
+ "applicableEntityTypes": "[\"Referenceable\"]"
+ }
+ },
+ {
+ "name": "<Attribute Name 2>",
+ "typeName": "int",
+ "isOptional": true,
+ "cardinality": "SINGLE",
+ "isUnique": false,
+ "isIndexable": true,
+ "options": {
+ "applicableEntityTypes": "[\"Referenceable\"]"
+ }
+ }
+ ]
+ }
+ ]
+}
+```
+
+### Get Business metadata definition
+
+You can send a GET request to the following endpoint:
+
+```
+GET {{endpoint}}/api/atlas/v2/types/typedef/name/{{Business Metadata Name}}
+```
+
+### Set Business metadata attribute to an entity
+
+You can send a POST request to the following endpoint:
+
+```
+POST {{endpoint}}/api/atlas/v2/entity/guid/{{GUID}}/businessmetadata?isOverwrite=true
+```
+
+Sample JSON
+
+```json
+{
+    "myBizMetaData1": {
+        "bizAttr1": "I am myBizMetaData1.bizAttr1",
+        "bizAttr2": 123
+    }
+}
+```
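A minimal sketch of this call in Python, assuming the `requests` library and a bearer token from the prerequisites; the account name and GUID below are placeholders:

```python
def business_metadata_url(endpoint, guid, overwrite=True):
    """Build the URL for setting business metadata on an entity by GUID."""
    flag = "true" if overwrite else "false"
    return f"{endpoint}/api/atlas/v2/entity/guid/{guid}/businessmetadata?isOverwrite={flag}"

body = {
    "myBizMetaData1": {
        "bizAttr1": "I am myBizMetaData1.bizAttr1",
        "bizAttr2": 123,
    }
}
url = business_metadata_url("https://myaccount.purview.azure.com/catalog",
                            "3ffb28ff-138f-419e-84ba-348b0165e9e0")
# requests.post(url, headers={"Authorization": f"Bearer {token}"}, json=body)
print(url)
```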
+
+### Delete Business metadata attribute from an entity
+
+You can send a DELETE request to the following endpoint:
+
+```
+DELETE {{endpoint}}/api/atlas/v2/entity/guid/{{GUID}}/businessmetadata?isOverwrite=true
+```
+
+Sample JSON
+
+```json
+{
+ "myBizMetaData1": {
+ "bizAttr1": ""
+ }
+}
+```
+
+### Delete Business metadata type definition
+
+You can send a DELETE request to the following endpoint:
+
+```
+DELETE {{endpoint}}/api/atlas/v2/types/typedef/name/{{Business Metadata Name}}
+```
+
+## Custom Attribute APIs
+
+Custom attributes are key-value pairs that can be added directly to an Atlas entity.
+
+### Set Custom Attribute to an entity
+
+You can send a POST request to the following endpoint:
+
+```
+POST {{endpoint}}/api/atlas/v2/entity
+```
+
+Sample JSON
+
+```json
+{
+ "entity": {
+ "typeName": "azure_datalake_gen2_path",
+ "attributes": {
+ "qualifiedName": "<FQN of the asset>",
+ "name": "data6.csv"
+ },
+ "guid": "3ffb28ff-138f-419e-84ba-348b0165e9e0",
+ "customAttributes": {
+ "custAttr1": "attr1",
+ "custAttr2": "attr2"
+ }
+ }
+}
+```
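The entity upsert above can be sketched in Python with a small (hypothetical) helper that merges custom attributes into an entity dictionary before posting it; the type name and GUID are taken from the sample:

```python
def with_custom_attributes(entity, **attrs):
    """Return a copy of an Atlas entity dict with customAttributes merged in."""
    updated = dict(entity)
    merged = dict(entity.get("customAttributes", {}))
    # Custom attribute values are free-form strings.
    merged.update({k: str(v) for k, v in attrs.items()})
    updated["customAttributes"] = merged
    return updated

entity = {
    "typeName": "azure_datalake_gen2_path",
    "attributes": {"qualifiedName": "<FQN of the asset>", "name": "data6.csv"},
    "guid": "3ffb28ff-138f-419e-84ba-348b0165e9e0",
}
body = {"entity": with_custom_attributes(entity, custAttr1="attr1", custAttr2="attr2")}
# requests.post(f"{endpoint}/api/atlas/v2/entity",
#               headers={"Authorization": f"Bearer {token}"}, json=body)
```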
+## Label APIs
+
+Labels are free-text tags that can be applied to any Atlas entity.
+
+### Set labels to an entity
+
+You can send a POST request to the following endpoint:
+
+```
+POST {{endpoint}}/api/atlas/v2/entity/guid/{{GUID}}/labels
+```
+
+Sample JSON
+
+```json
+[
+ "label1",
+ "label2"
+]
+```
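Setting and deleting labels use the same URL with different HTTP verbs, as this Python sketch shows (hypothetical helper; account name and GUID are placeholders):

```python
def labels_url(endpoint, guid):
    """URL used by both the set-labels (POST) and delete-labels (DELETE) operations."""
    return f"{endpoint}/api/atlas/v2/entity/guid/{guid}/labels"

url = labels_url("https://myaccount.purview.azure.com/catalog",
                 "3ffb28ff-138f-419e-84ba-348b0165e9e0")
labels = ["label1", "label2"]
# Set:    requests.post(url, headers={"Authorization": f"Bearer {token}"}, json=labels)
# Delete: requests.delete(url, headers={"Authorization": f"Bearer {token}"}, json=["label2"])
print(url)
```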
+
+### Delete labels from an entity
+
+You can send a DELETE request to the following endpoint:
+
+```
+DELETE {{endpoint}}/api/atlas/v2/entity/guid/{{GUID}}/labels
+```
+
+Sample JSON
+
+```json
+[
+ "label2"
+]
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Manage data sources](manage-data-sources.md)
+> [Purview Data Plane REST APIs](/rest/api/purview/)
route-server Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/overview.md
For frequently asked questions about Azure Route Server, see [Azure Route Server
- [Learn how to configure Azure Route Server](quickstart-configure-route-server-powershell.md) - [Learn how Azure Route Server works with Azure ExpressRoute and Azure VPN](expressroute-vpn-support.md)
+- [Learn module: Introduction to Azure Route Server](/learn/modules/intro-to-azure-route-server)
search Search Manage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-manage-powershell.md
ms.devlang: powershell Previously updated : 08/03/2021 Last updated : 04/19/2022
Within a service, programmatic creation of content is through [Search Service RE
## Check versions and load modules
-The examples in this article are interactive and require elevated permissions. Azure PowerShell (the **Az** module) must be installed. For more information, see [Install Azure PowerShell](/powershell/azure/).
+The examples in this article are interactive and require elevated permissions. Local PowerShell and Azure PowerShell (the **Az** module) are required.
-### PowerShell version check (5.1 or later)
+### PowerShell version check
-Local PowerShell must be 5.1 or later, on any supported operating system.
+PowerShell 7.0.6 LTS, PowerShell 7.1.3, or higher is the recommended version of PowerShell for use with the Azure Az PowerShell module on all platforms. [Install the latest version of PowerShell](/powershell/scripting/install/installing-powershell) if you don't have it.
```azurepowershell-interactive $PSVersionTable.PSVersion
If you aren't sure whether **Az** is installed, run the following command as a v
Get-InstalledModule -Name Az ```
-Some systems do not auto-load modules. If you get an error on the previous command, try loading the module, and if that fails, go back to the installation instructions to see if you missed a step.
+Some systems do not auto-load modules. If you get an error on the previous command, try loading the module. If that fails, go back to the [Azure PowerShell installation instructions](/powershell/azure/) to see if you missed a step.
```azurepowershell-interactive Import-Module -Name Az
search Search Security Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-rbac.md
Built-in roles include generally available and preview roles.
| [Owner](../role-based-access-control/built-in-roles.md#owner) | (Generally available) Full access to the search resource, including the ability to assign Azure roles. Subscription administrators are members by default. | | [Contributor](../role-based-access-control/built-in-roles.md#contributor) | (Generally available) Same level of access as Owner, minus the ability to assign roles or change authorization options. | | [Reader](../role-based-access-control/built-in-roles.md#reader) | (Generally available) Limited access to partial service information. In the portal, the Reader role can access information in the service Overview page, in the Essentials section and under the Monitoring tab. All other tabs and pages are off limits. </br></br>This role has access to service information: resource group, service status, location, subscription name and ID, tags, URL, pricing tier, replicas, partitions, and search units. This role also has access to service metrics: search latency, percentage of throttled requests, average queries per second. </br></br>There is no access to API keys, role assignments, content (indexes or synonym maps), or content metrics (storage consumed, number of objects). |
-| [Search Service Contributor](../role-based-access-control/built-in-roles.md#search-service-contributor) | (Generally available) This role is identical to the Contributor role for control plane operations. </p>(Preview) Provides full access to all data plane actions on indexes, synonym maps, indexers, data sources, and skillsets through [`Microsoft.Search/searchServices/*`](../role-based-access-control/resource-provider-operations.md#microsoftsearch). This role is for search service administrators who need to fully manage the service and its content. In preview, this role has been extended to include data plane operations. </br></br>Like Contributor, members of this role cannot make or manage role assignments or change authorization options. Your service must be enabled for the preview for data requests. |
+| [Search Service Contributor](../role-based-access-control/built-in-roles.md#search-service-contributor) | (Generally available) This role is identical to the Contributor role and applies to control plane operations. </br></br>(Preview) When you enable the RBAC preview for the data plane, this role also provides full access to all data plane actions on indexes, synonym maps, indexers, data sources, and skillsets as defined by [`Microsoft.Search/searchServices/*`](../role-based-access-control/resource-provider-operations.md#microsoftsearch). This role is for search service administrators who need to fully manage both the service and its content. </br></br>Like Contributor, members of this role cannot make or manage role assignments or change authorization options. Your service must be enabled for the preview for data requests. |
| [Search Index Data Contributor](../role-based-access-control/built-in-roles.md#search-index-data-contributor) | (Preview) Provides full data plane access to content in all indexes on the search service. This role is for developers or index owners who need to import, refresh, or query the documents collection of an index. | | [Search Index Data Reader](../role-based-access-control/built-in-roles.md#search-index-data-reader) | (Preview) Provides read-only data plane access to search indexes on the search service. This role is for apps and users who run queries. |
security Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/feature-availability.md
The following tables display the current Microsoft Sentinel feature availability
| - [Azure Active Directory](../../sentinel/connect-azure-active-directory.md) | GA | GA | | - [Azure ADIP](../../sentinel/data-connectors-reference.md#azure-active-directory-identity-protection) | GA | GA | | - [Azure DDoS Protection](../../sentinel/data-connectors-reference.md#azure-ddos-protection) | GA | GA |
-| - [Azure Purview](../../sentinel/data-connectors-reference.md#microsoft-purview) | Public Preview | Not Available |
+| - [Microsoft Purview](../../sentinel/data-connectors-reference.md#microsoft-purview) | Public Preview | Not Available |
| - [Microsoft Defender for Cloud](../../sentinel/connect-azure-security-center.md) | GA | GA | | - [Microsoft Defender for IoT](../../sentinel/data-connectors-reference.md#microsoft-defender-for-iot) | GA | GA |
-| - [Microsoft Insider Risk Management](/azure/sentinel/sentinel-solutions-catalog#domain-solutions) | Public Preview | Not Available |
+| - [Microsoft Insider Risk Management](../../sentinel/sentinel-solutions-catalog.md#domain-solutions) | Public Preview | Not Available |
| - [Azure Firewall ](../../sentinel/data-connectors-reference.md#azure-firewall) | GA | GA | | - [Azure Information Protection](../../sentinel/data-connectors-reference.md#azure-information-protection-preview) | Public Preview | Not Available | | - [Azure Key Vault ](../../sentinel/data-connectors-reference.md#azure-key-vault) | Public Preview | Not Available |
security Steps Secure Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/steps-secure-identity.md
tags: azuread
# Five steps to securing your identity infrastructure
-If you're reading this document, you're aware of the significance of security. You likely already carry the responsibility for securing your organization. If you need to convince others of the importance of security, send them to read the latest [Microsoft Digital Defense Report](https://www.microsoft.com/security/business/microsoft-digital-defense-report).
+If you're reading this document, you're aware of the significance of security. You likely already carry the responsibility for securing your organization. If you need to convince others of the importance of security, send them to read the latest [Microsoft Digital Defense Report](https://www.microsoft.com/security/business/security-intelligence-report).
This document will help you get a more secure posture using the capabilities of Azure Active Directory by using a five-step checklist to improve your organization's protection against cyber-attacks.
service-fabric Service Fabric Cluster Resource Manager Movement Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-resource-manager-movement-cost.md
this.Partition.ReportMoveCost(MoveCost.Medium);
## Reporting move cost for a partition
-The previous section describes how service replicas or instances report MoveCost themselves. We provided Service Fabric API for reporting MoveCost values on behalf of other partitions. Sometimes service replica or instance can't determine the best MoveCost value by itself, and must rely on other services logic. Reporting MoveCost on behalf of other partitions, alongside [reporting load on behalf of other partitions](service-fabric-cluster-resource-manager-metrics.md#reporting-load-for-a-partition), allows you to completely manage partitions from outside. These APIs eliminate needs for [the Sidecar pattern](https://docs.microsoft.com/azure/architecture/patterns/sidecar), from the perspective of the Cluster Resource Manager.
+The previous section describes how service replicas or instances report MoveCost themselves. Service Fabric also provides an API for reporting MoveCost values on behalf of other partitions. Sometimes a service replica or instance can't determine the best MoveCost value by itself and must rely on the logic of other services. Reporting MoveCost on behalf of other partitions, alongside [reporting load on behalf of other partitions](service-fabric-cluster-resource-manager-metrics.md#reporting-load-for-a-partition), allows you to manage partitions completely from the outside. These APIs eliminate the need for [the Sidecar pattern](/azure/architecture/patterns/sidecar), from the perspective of the Cluster Resource Manager.
You can report MoveCost updates for a different partition with the same API call. You need to specify PartitionMoveCostDescription object for each partition that you want to update with new values of MoveCost. The API allows multiple ways to update MoveCost:
via ClusterConfig.json for Standalone deployments or Template.json for Azure hos
- Service Fabric Cluster Resource Manger uses metrics to manage consumption and capacity in the cluster. To learn more about metrics and how to configure them, check out [Managing resource consumption and load in Service Fabric with metrics](service-fabric-cluster-resource-manager-metrics.md). - To learn about how the Cluster Resource Manager manages and balances load in the cluster, check out [Balancing your Service Fabric cluster](service-fabric-cluster-resource-manager-balancing.md).
-[Image1]:./media/service-fabric-cluster-resource-manager-movement-cost/service-most-cost-example.png
+[Image1]:./media/service-fabric-cluster-resource-manager-movement-cost/service-most-cost-example.png
service-health Resource Health Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/resource-health-overview.md
Different resources have their own criteria for when they report that they are d
## History information > [!NOTE]
-> You can query data up to 1 year using the QueryStartTime parameter of [Events](https://docs.microsoft.com/rest/api/resourcehealth/events/list-by-subscription-id) REST API.
+> You can query data up to 1 year using the QueryStartTime parameter of [Events](/rest/api/resourcehealth/events/list-by-subscription-id) REST API.
You can access up to 30 days of history in the **Health history** section of Resource Health from Azure Portal.
You can also access Resource Health by selecting **All services** and typing **r
Check out these references to learn more about Resource Health: - [Resource types and health checks in Azure Resource Health](resource-health-checks-resource-types.md)-- [Frequently asked questions about Azure Resource Health](resource-health-faq.yml)
+- [Frequently asked questions about Azure Resource Health](resource-health-faq.yml)
site-recovery Azure To Azure How To Enable Zone To Zone Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-zone-to-zone-disaster-recovery.md
Pricing for Zone to Zone Disaster Recovery is identical to the pricing of Azure
The RTO SLA is the same as that for Site Recovery overall. We promise RTO of up to 2 hours. There is no defined SLA for RPO. **3. Is capacity guaranteed in the secondary zone?**
-The Site Recovery team and Azure capacity management team plan for sufficient infrastructure capacity. When you start a failover, the teams also help ensure VM instances that are protected by Site Recovery will deploy to the target zone. Check [here](https://docs.microsoft.com/azure/site-recovery/azure-to-azure-common-questions#capacity) for more FAQs on Capacity.
+The Site Recovery team and Azure capacity management team plan for sufficient infrastructure capacity. When you start a failover, the teams also help ensure VM instances that are protected by Site Recovery will deploy to the target zone. For more FAQs on capacity, see [here](./azure-to-azure-common-questions.md#capacity).
**4. Which operating systems are supported?** Zone to Zone Disaster Recovery supports the same operating systems as Azure to Azure Disaster Recovery. Refer to the support matrix [here](./azure-to-azure-support-matrix.md).
To perform a Disaster Recovery drill, please follow the steps outlined [here](./
To perform a failover and reprotect VMs in the secondary zone, follow the steps outlined [here](./azure-to-azure-tutorial-failover-failback.md).
-To failback to the primary zone, follow the steps outlined [here](./azure-to-azure-tutorial-failback.md).
+To failback to the primary zone, follow the steps outlined [here](./azure-to-azure-tutorial-failback.md).
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
Azure Government | US GOV Virginia, US GOV Iowa, US GOV Arizona, US GOV Texas
Germany | Germany Central, Germany Northeast China | China East, China North, China North2, China East2 Brazil | Brazil South
-Restricted Regions reserved for in-country disaster recovery |Switzerland West reserved for Switzerland North, France South reserved for France Central, Norway West for Norway East customers, JIO India Central for JIO India West customers, Brazil Southeast for Brazil South customers, South Africa West for South Africa North customers, Germany North for Germany West Central customers.<br/><br/> To use restricted regions as your primary or recovery region, please get yourselves allowlisted by raising a request [here](https://docs.microsoft.com/troubleshoot/azure/general/region-access-request-process).
+Restricted Regions reserved for in-country disaster recovery |Switzerland West reserved for Switzerland North, France South reserved for France Central, Norway West for Norway East customers, JIO India Central for JIO India West customers, Brazil Southeast for Brazil South customers, South Africa West for South Africa North customers, Germany North for Germany West Central customers.<br/><br/> To use restricted regions as your primary or recovery region, raise a request [here](/troubleshoot/azure/general/region-access-request-process) to be allowlisted.
>[!NOTE] >
Tags | Supported | User-generated tags on NICs are replicated every 24 hours.
## Next steps - Read [networking guidance](./azure-to-azure-about-networking.md) for replicating Azure VMs.-- Deploy disaster recovery by [replicating Azure VMs](./azure-to-azure-quickstart.md).
+- Deploy disaster recovery by [replicating Azure VMs](./azure-to-azure-quickstart.md).
site-recovery Hyper V Azure Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-azure-tutorial.md
In this tutorial, you learn how to:
This is the third tutorial in a series. It assumes that you have already completed the tasks in the previous tutorials:
-1. [Prepare Azure](https://docs.microsoft.com/azure/site-recovery/tutorial-prepare-azure-for-hyperv)
+1. [Prepare Azure](./tutorial-prepare-azure-for-hyperv.md)
2. [Prepare on-premises Hyper-V](./hyper-v-prepare-on-premises-tutorial.md) ## Select a replication goal
Site Recovery checks that you have one or more compatible Azure storage accounts
## Next steps > [!div class="nextstepaction"]
-> [Run a disaster recovery drill](tutorial-dr-drill-azure.md)
+> [Run a disaster recovery drill](tutorial-dr-drill-azure.md)
site-recovery Vmware Physical Mobility Service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-mobility-service-overview.md
During a push installation of the Mobility service, the following steps are perf
:::image type="content" source="./media/vmware-physical-mobility-service-install-manual/mobility3.png" alt-text="Screenshot that shows the progress of the installation and the active Proceed to Configuration button when the installation is finished.":::
-1. In **Configuration Server Details**, specify the IP address and passphrase that you configured. To generate the passphrase, follow the steps mentioned [here](https://docs.microsoft.com/azure/site-recovery/vmware-azure-mobility-install-configuration-mgr#prepare-the-installation-files).
+1. In **Configuration Server Details**, specify the IP address and passphrase that you configured. To generate the passphrase, follow the steps mentioned [here](./vmware-azure-mobility-install-configuration-mgr.md#prepare-the-installation-files).
:::image type="content" source="./media/vmware-physical-mobility-service-install-manual/mobility4.png" alt-text="Mobility service registration page.":::
See information about [upgrading the mobility services](upgrade-mobility-service
## Next steps
-[Set up push installation for the Mobility service](vmware-azure-install-mobility-service.md).
+[Set up push installation for the Mobility service](vmware-azure-install-mobility-service.md).
spring-cloud How To Enable Ingress To App Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-enable-ingress-to-app-tls.md
The following section shows you how to enable ingress-to-app SSL/TLS to secure t
- A deployed Azure Spring Cloud instance. Follow our [quickstart on deploying via the Azure CLI](./quickstart.md) to get started. - If you're unfamiliar with ingress-to-app TLS, see the [end-to-end TLS sample](https://github.com/Azure-Samples/spring-boot-secure-communications-using-end-to-end-tls-ssl).-- To securely load the required certificates into Spring Boot apps, you can use [keyvault spring boot starter](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/spring/azure-spring-boot-starter-keyvault-certificates).
+- To securely load the required certificates into Spring Boot apps, you can use [spring-cloud-azure-starter-keyvault-certificates](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/spring/spring-cloud-azure-starter-keyvault-certificates).
### Enable ingress-to-app TLS on an existing app
spring-cloud How To Setup Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-setup-autoscale.md
You can also set Autoscale modes using the Azure CLI. The following commands cre
--condition "tomcat.global.request.total.count > 100 avg 1m where AppName == demo and Deployment == default" ```
-For information on the available metrics, see the [User metrics options](/azure/spring-cloud/concept-metrics#user-metrics-options) section of [Metrics for Azure Spring Cloud](/azure/spring-cloud/concept-metrics).
+For information on the available metrics, see the [User metrics options](./concept-metrics.md#user-metrics-options) section of [Metrics for Azure Spring Cloud](./concept-metrics.md).
## Upgrade to the Standard tier
If you're on the Basic tier and constrained by one or more of these limits, you
## Next steps * [Overview of autoscale in Microsoft Azure](../azure-monitor/autoscale/autoscale-overview.md)
-* [Azure CLI Monitoring autoscale](/cli/azure/monitor/autoscale)
+* [Azure CLI Monitoring autoscale](/cli/azure/monitor/autoscale)
spring-cloud How To Use Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-use-managed-identities.md
The following table shows the mappings between concepts in Managed Identity scop
## Next steps -- [Access Azure Key Vault with managed identities in Spring boot starter](https://github.com/Azure/azure-sdk-for-jav#use-msi--managed-identities) - [Learn more about managed identities for Azure resources](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/active-directory/managed-identities-azure-resources/overview.md) - [How to use managed identities with Java SDK](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples)
spring-cloud Tutorial Managed Identities Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/tutorial-managed-identities-mysql.md
The following video describes how to manage secrets using Azure Key Vault.
* [JDK 8](/azure/java/jdk/java-jdk-install) * [Maven 3.0 or above](http://maven.apache.org/install.html)
-* [Azure CLI](/cli/azure/install-azure-cli?view=azure-cli-latest) or [Azure Cloud Shell](/azure/cloud-shell/overview)
-* An existing Key Vault. If you need to create a Key Vault, you can use the [Azure portal](/azure/key-vault/secrets/quick-create-portal) or [Azure CLI](/cli/azure/keyvault?view=azure-cli-latest#az-keyvault-create)
-* An existing Azure Database for MySQL instance with a database named `demo`. If you need to create an Azure Database for MySQL, you can use the [Azure portal](/azure/mysql/quickstart-create-mysql-server-database-using-azure-portal) or [Azure CLI](/azure/mysql/quickstart-create-mysql-server-database-using-azure-cli)
+* [Azure CLI](/cli/azure/install-azure-cli?view=azure-cli-latest) or [Azure Cloud Shell](../cloud-shell/overview.md)
+* An existing Key Vault. If you need to create a Key Vault, you can use the [Azure portal](../key-vault/secrets/quick-create-portal.md) or [Azure CLI](/cli/azure/keyvault?view=azure-cli-latest#az-keyvault-create)
+* An existing Azure Database for MySQL instance with a database named `demo`. If you need to create an Azure Database for MySQL, you can use the [Azure portal](../mysql/quickstart-create-mysql-server-database-using-azure-portal.md) or [Azure CLI](../mysql/quickstart-create-mysql-server-database-using-azure-cli.md)
## Create a resource group
az keyvault secret set \
## Set up your Azure Database for MySQL
-To create an Azure Database for MySQL, use the [Azure portal](/azure/mysql/quickstart-create-mysql-server-database-using-azure-portal) or [Azure CLI](/azure/mysql/quickstart-create-mysql-server-database-using-azure-cli)
+To create an Azure Database for MySQL, use the [Azure portal](../mysql/quickstart-create-mysql-server-database-using-azure-portal.md) or [Azure CLI](../mysql/quickstart-create-mysql-server-database-using-azure-cli.md)
Create a database named *demo* for later use.
This [sample](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/tree/m
## Next Steps * [Managed identity to connect Key Vault](tutorial-managed-identities-key-vault.md)
-* [Managed identity to invoke Azure functions](tutorial-managed-identities-functions.md)
-
+* [Managed identity to invoke Azure functions](tutorial-managed-identities-functions.md)
static-web-apps Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/private-endpoint.md
If your app has a private endpoint enabled, the server will respond with a `403`
The default DNS resolution of the static web app still exists and routes to a public IP address. The private endpoint will expose 2 IP Addresses within your VNet, one for the production environment and one for any staging environments. To ensure your client is able to reach the app correctly, make sure your client resolves the hostname of the app to the appropriate IP address of the private endpoint. This is required for the default hostname as well as any custom domains configured for the static web app. This resolution is done automatically if you select a private DNS zone when creating the private endpoint (see example below) and is the recommended solution.
-If you are connecting from on-prem or do not wish to use a private DNS zone, manually configure the DNS records for your application so that requests are routed to the appropriate IP address of the private endpoint. You can find more information on private endpoint DNS resolution [here](https://docs.microsoft.com/azure/private-link/private-endpoint-dns).
+If you are connecting from on-prem or do not wish to use a private DNS zone, manually configure the DNS records for your application so that requests are routed to the appropriate IP address of the private endpoint. You can find more information on private endpoint DNS resolution [here](../private-link/private-endpoint-dns.md).
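As a quick client-side sanity check (a sketch of our own, not part of the official guidance; the helper names are illustrative), you can confirm that an address returned by your resolver is VNet-internal rather than public:

```python
import ipaddress
import socket

def resolved_addresses(hostname: str, port: int = 443) -> list[str]:
    """Return the IPv4 addresses the local resolver returns for a hostname."""
    return sorted({info[4][0] for info in socket.getaddrinfo(hostname, port, socket.AF_INET)})

def is_private(ip: str) -> bool:
    """True if the address is in a private (RFC 1918) range, as VNet addresses are."""
    return ipaddress.ip_address(ip).is_private

# A private endpoint IP typically falls in a VNet range such as 10.x.x.x:
print(is_private("10.0.0.4"))   # True
print(is_private("20.42.1.1"))  # False: a public address means DNS is not using the private zone
```

If `is_private` returns `False` for the app's hostname, the client is still resolving to the public endpoint and your DNS records need to be corrected.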
## Prerequisites
Since your application is no longer publicly available, the only way to access i
## Next steps

> [!div class="nextstepaction"]
-> [Learn more about private endpoints](../private-link/private-endpoint-overview.md)
+> [Learn more about private endpoints](../private-link/private-endpoint-overview.md)
storage Blob Upload Function Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-upload-function-trigger.md
Last updated 3/11/2022
In this tutorial, you'll learn how to upload an image to Azure Blob Storage and process it using Azure Functions and Computer Vision. You'll also learn how to implement Azure Function triggers and bindings as part of this process. Together, these services will analyze an uploaded image that contains text, extract the text out of it, and then store the text in a database row for later analysis or other purposes.
-Azure Blob Storage is Microsoft's massively scalable object storage solution for the cloud. Blob Storage is designed for storing images and documents, streaming media files, managing backup and archive data, and much more. You can read more about Blob Storage on the [overview page](/azure/storage/blobs/storage-blobs-introduction).
+Azure Blob Storage is Microsoft's massively scalable object storage solution for the cloud. Blob Storage is designed for storing images and documents, streaming media files, managing backup and archive data, and much more. You can read more about Blob Storage on the [overview page](./storage-blobs-introduction.md).
-Azure Functions is a serverless computer solution that allows you to write and run small blocks of code as highly scalable, serverless, event driven functions. You can read more about Azure Functions on the [overview page](/azure/azure-functions/functions-overview).
+Azure Functions is a serverless compute solution that allows you to write and run small blocks of code as highly scalable, serverless, event-driven functions. You can read more about Azure Functions on the [overview page](../../azure-functions/functions-overview.md).
In this tutorial, you will learn how to:
If you're not going to continue to use this application, you can delete the reso
2) Select the **Delete resource group** button at the top of the resource group overview page.
3) Enter the resource group name *msdocs-storage-function* in the confirmation dialog.
4) Select **Delete**.
-The process to delete the resource group may take a few minutes to complete.
+The process to delete the resource group may take a few minutes to complete.
storage Secure File Transfer Protocol Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-known-issues.md
For performance issues and considerations, see [SSH File Transfer Protocol (SFTP
- SSH commands that are not SFTP are not supported.
-- West Europe will temporarily still require registration of the SFTP preview feature.
-
## Troubleshooting

- To resolve the `Failed to update SFTP settings for account 'accountname'. Error: The value 'True' is not allowed for property isSftpEnabled.` error, ensure that the following prerequisites are met at the storage account level:
For performance issues and considerations, see [SSH File Transfer Protocol (SFTP
- The account needs to be a general-purpose v2 or premium block blob account.
- The account needs to have hierarchical namespace enabled on it.
-
- - Accounts in West Europe will temporarily require the customer's subscription to be signed up for the preview. Request to join via 'Preview features' in the Azure portal. Requests are automatically approved.
- To resolve the `Home Directory not accessible error.` error, check that:
storage Storage Blob Static Website How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-static-website-how-to.md
Previously updated : 03/04/2020 Last updated : 04/19/2022
Static website hosting is a feature that you have to enable on the storage accou
### [Portal](#tab/azure-portal)
-1. Sign in to the [Azure portal](https://portal.azure.com/) to get started.
+1. Sign in to the [Azure portal](https://portal.azure.com/) to get started.
-2. Locate your storage account and display the account overview.
+2. Locate your storage account and select it to display the account's **Overview** pane.
-3. Select **Static website** to display the configuration page for static websites.
+3. In the **Overview** pane, select the **Capabilities** tab. Next, select **Static website** to display the configuration page for the static website.
+
+ :::image type="content" source="media/storage-blob-static-website-how-to/select-website-configuration-sml.png" alt-text="Image showing how to access the Static website configuration page within the Azure portal" lightbox="media/storage-blob-static-website-how-to/select-website-configuration-lrg.png":::
4. Select **Enabled** to enable static website hosting for the storage account.
Static website hosting is a feature that you have to enable on the storage accou
The default error page is displayed when a user attempts to navigate to a page that does not exist in your static website.
-7. Click **Save**. The Azure portal now displays your static website endpoint.
+7. Click **Save** to finish the static site configuration.
+
+ :::image type="content" source="media/storage-blob-static-website-how-to/select-website-properties-sml.png" alt-text="Image showing how to set the Static website properties within the Azure portal" lightbox="media/storage-blob-static-website-how-to/select-website-properties-lrg.png":::
- ![Enable static website hosting for a storage account](media/storage-blob-static-website-host/enable-static-website-hosting.png)
+8. A confirmation message is displayed. Your static website endpoints and other configuration information are shown within the **Overview** pane.
+
+ :::image type="content" source="media/storage-blob-static-website-how-to/website-properties-sml.png" alt-text="Image showing the Static website properties within the Azure portal" lightbox="media/storage-blob-static-website-how-to/website-properties-lrg.png":::
### [Azure CLI](#tab/azure-cli)
You can enable static website hosting by using the Azure PowerShell module.
### [Portal](#tab/azure-portal)
-These instructions show you how to upload files by using the version of Storage Explorer that appears in the Azure portal. However, you can also use the version of [Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) that runs outside of the Azure portal. You could use [AzCopy](../common/storage-use-azcopy-v10.md), PowerShell, CLI, or any custom application that can upload files to the **$web** container of your account. For a step-by-step tutorial that uploads files by using Visual Studio code, see [Tutorial: Host a static website on Blob Storage](./storage-blob-static-website-host.md).
+The following instructions show you how to upload files by using the Azure portal. You could also use [AzCopy](../common/storage-use-azcopy-v10.md), PowerShell, CLI, or any custom application that can upload files to the **$web** container of your account. For a step-by-step tutorial that uploads files by using Visual Studio Code, see [Tutorial: Host a static website on Blob Storage](./storage-blob-static-website-host.md).
+
+1. In the Azure portal, navigate to the storage account containing your static website. Select **Containers** in the left navigation pane to display the list of containers.
-1. Select **Storage Explorer (preview)**.
+2. In the **Containers** pane, select the **$web** container to open the container's **Overview** pane.
-2. Expand the **BLOB CONTAINERS** node, and then select the **$web** container.
+ :::image type="content" source="media/storage-blob-static-website-how-to/web-containers-sml.png" alt-text="Image showing where to locate the website storage container in Azure portal" lightbox="media/storage-blob-static-website-how-to/web-containers-lrg.png":::
-3. Choose the **Upload** button to upload files.
+3. In the **Overview** pane, select the **Upload** icon to open the **Upload blob** pane. Next, select the **Files** field within the **Upload blob** pane to open the file browser. Navigate to the file you want to upload, select it, and then select **Open** to populate the **Files** field. Optionally, select the **Overwrite if files already exist** checkbox.
- ![Upload files](media/storage-blob-static-website/storage-blob-static-website-upload.png)
+ :::image type="content" source="media/storage-blob-static-website-how-to/file-upload-sml.png" alt-text="Image showing how to upload files to the static website storage container" lightbox="media/storage-blob-static-website-how-to/file-upload-lrg.png":::
-4. If you intend for the browser to display the contents of file, make sure that the content type of that file is set to `text/html`.
+4. If you intend for the browser to display the contents of the file, make sure that the content type of that file is set to `text/html`. To verify this, select the name of the blob you uploaded in the previous step to open its **Overview** pane. Ensure that the value is set within the **CONTENT-TYPE** property field.
- ![Check content types](media/storage-blob-static-website/storage-blob-static-website-content-type.png)
+ :::image type="content" source="media/storage-blob-static-website-how-to/blob-content-type-sml.png" alt-text="Image showing how to verify blob content types" lightbox="media/storage-blob-static-website-how-to/blob-content-type-lrg.png":::
> [!NOTE]
- > Storage Explorer automatically sets this property to `text/html` for commonly recognized extensions such as `.html`. However, in some cases, you'll have to set this yourself. If you don't set this property to `text/html`, the browser will prompt users to download the file instead of rendering the contents. To set this property, right-click the file, and then click **Properties**.
+ > This property is automatically set to `text/html` for commonly recognized extensions such as `.html`. However, in some cases, you'll have to set this yourself. If you don't set this property to `text/html`, the browser will prompt users to download the file instead of rendering the contents. This property can be set in the previous step.
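As a rough illustration of which extensions are "commonly recognized," here is a Python sketch using the standard library's `mimetypes` table. This table approximates, but does not exactly match, the mapping the portal applies:

```python
import mimetypes

# Map a few file names to the MIME type a typical extension table infers.
for name in ("index.html", "error.html", "readme.txt"):
    content_type, _encoding = mimetypes.guess_type(name)
    print(f"{name}: {content_type}")
# index.html and error.html map to text/html, so browsers render them;
# an unrecognized extension yields None and the file would be served as a download.
```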
### [Azure CLI](#tab/azure-cli)
storage Storage Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-redundancy.md
Last updated 04/19/2022
+
# Azure Storage redundancy
Azure Storage always stores multiple copies of your data so that it is protected
When deciding which redundancy option is best for your scenario, consider the tradeoffs between lower costs and higher availability. The factors that help determine which redundancy option you should choose include:
-- How your data is replicated in the primary region
-- Whether your data is replicated to a second region that is geographically distant to the primary region, to protect against regional disasters
-- Whether your application requires read access to the replicated data in the secondary region if the primary region becomes unavailable for any reason
+- How your data is replicated in the primary region.
+- Whether your data is replicated to a second region that is geographically distant from the primary region, to protect against regional disasters (geo-replication).
+- Whether your application requires read access to the replicated data in the secondary region if the primary region becomes unavailable for any reason (geo-replication with read access).
> [!NOTE]
-> The features and regional availability described in this article are also available to accounts that have a hierarchical namespace.
+> The features and regional availability described in this article are also available to accounts that have a hierarchical namespace (Azure Blob storage).
+
+The services that comprise Azure Storage are managed through a common Azure resource called a *storage account*. The storage account represents a shared pool of storage that can be used to deploy storage resources such as blob containers (Blob Storage), file shares (Azure Files), tables (Table Storage), or queues (Queue Storage). For more information about Azure Storage accounts, see [Storage account overview](storage-account-overview.md).
+
+The redundancy setting for a storage account is shared for all storage services exposed by that account. All storage resources deployed in the same storage account have the same redundancy setting. You may want to isolate different types of resources in separate storage accounts if they have different redundancy requirements.
## Redundancy in the primary region
Data in an Azure Storage account is always replicated three times in the primary
- **Locally redundant storage (LRS)** copies your data synchronously three times within a single physical location in the primary region. LRS is the least expensive replication option, but is not recommended for applications requiring high availability or durability.
- **Zone-redundant storage (ZRS)** copies your data synchronously across three Azure availability zones in the primary region. For applications requiring high availability, Microsoft recommends using ZRS in the primary region, and also replicating to a secondary region.
-> [!NOTE]
+> [!NOTE]
> Microsoft recommends using ZRS in the primary region for Azure Data Lake Storage Gen2 workloads.

### Locally-redundant storage
-Locally redundant storage (LRS) replicates your data three times within a single data center in the primary region. LRS provides at least 99.999999999% (11 nines) durability of objects over a given year.
+Locally redundant storage (LRS) replicates your storage account three times within a single data center in the primary region. LRS provides at least 99.999999999% (11 nines) durability of objects over a given year.
LRS is the lowest-cost redundancy option and offers the least durability compared to other options. LRS protects your data against server rack and drive failures. However, if a disaster such as fire or flooding occurs within the data center, all replicas of a storage account using LRS may be lost or unrecoverable. To mitigate this risk, Microsoft recommends using [zone-redundant storage](#zone-redundant-storage) (ZRS), [geo-redundant storage](#geo-redundant-storage) (GRS), or [geo-zone-redundant storage](#geo-zone-redundant-storage) (GZRS).
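The durability figure can be read as a bound on annual loss probability. For example, with 11 nines the chance of losing any given object within a year is at most:

```python
# 11 nines of durability (LRS): annual object-loss probability is at most 1 - 0.99999999999.
durability = 0.99999999999
annual_loss_probability = 1 - durability
print(f"{annual_loss_probability:.0e}")  # on the order of 1e-11 per object per year
```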
LRS is a good choice for the following scenarios:
- If your application stores data that can be easily reconstructed if data loss occurs, you may opt for LRS.
- If your application is restricted to replicating data only within a country or region due to data governance requirements, you may opt for LRS. In some cases, the paired regions across which the data is geo-replicated may be in another country or region. For more information on paired regions, see [Azure regions](https://azure.microsoft.com/regions/).
+- If your scenario is using Azure unmanaged disks. While it is possible to create a storage account for Azure unmanaged disks that uses GRS, it is not recommended due to potential issues with consistency over asynchronous geo-replication.
### Zone-redundant storage
-Zone-redundant storage (ZRS) replicates your Azure Storage data synchronously across three Azure availability zones in the primary region. Each availability zone is a separate physical location with independent power, cooling, and networking. ZRS offers durability for Azure Storage data objects of at least 99.9999999999% (12 9's) over a given year.
+Zone-redundant storage (ZRS) replicates your storage account synchronously across three Azure availability zones in the primary region. Each availability zone is a separate physical location with independent power, cooling, and networking. ZRS offers durability for storage resources of at least 99.9999999999% (12 9's) over a given year.
-With ZRS, your data is still accessible for both read and write operations even if a zone becomes unavailable. No remounting of Azure file shares from the connected clients is required. If a zone becomes unavailable, Azure undertakes networking updates, such as DNS re-pointing. These updates may affect your application if you access data before the updates have completed. When designing applications for ZRS, follow practices for transient fault handling, including implementing retry policies with exponential back-off.
+With ZRS, your data is still accessible for both read and write operations even if a zone becomes unavailable. If a zone becomes unavailable, Azure undertakes networking updates, such as DNS re-pointing. These updates may affect your application if you access data before the updates have completed. When designing applications for ZRS, follow practices for transient fault handling, including implementing retry policies with exponential back-off.
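The transient-fault-handling practice mentioned above can be sketched as follows. This is a minimal, illustrative retry helper of our own (the exception class is a stand-in for whatever retryable error your storage client surfaces), not an official SDK API:

```python
import random
import time

class TransientError(Exception):
    """Stand-in for a retryable error (for example, a failed request during DNS re-pointing)."""

def with_retries(operation, max_attempts=5, base_delay=0.5, max_delay=8.0):
    """Call `operation`, retrying transient failures with exponential back-off and jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except TransientError:
            if attempt == max_attempts:
                raise  # give up after the final attempt
            delay = min(max_delay, base_delay * 2 ** (attempt - 1))
            time.sleep(delay * random.uniform(0.5, 1.0))  # jitter avoids synchronized retries
```

In practice, the Azure SDKs ship with configurable retry policies, so you would typically tune those rather than hand-roll a loop like this.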
A write request to a storage account that is using ZRS happens synchronously. The write operation returns successfully only after the data is written to all replicas across the three availability zones. Microsoft recommends using ZRS in the primary region for scenarios that require high availability. ZRS is also recommended for restricting replication of data to within a country or region to meet data governance requirements.
+Microsoft recommends using ZRS for Azure Files workloads. If a zone becomes unavailable, no remounting of Azure file shares from the connected clients is required.
+
The following diagram shows how your data is replicated across availability zones in the primary region with ZRS:

:::image type="content" source="media/storage-redundancy/zone-redundant-storage.png" alt-text="Diagram showing how data is replicated in the primary region with ZRS":::

ZRS provides excellent performance, low latency, and resiliency for your data if it becomes temporarily unavailable. However, ZRS by itself may not protect your data against a regional disaster where multiple zones are permanently affected. For protection against regional disasters, Microsoft recommends using [geo-zone-redundant storage](#geo-zone-redundant-storage) (GZRS), which uses ZRS in the primary region and also geo-replicates your data to a secondary region.
-The following table shows which types of storage accounts support ZRS in which regions:
+The Archive tier for Blob Storage is not currently supported for ZRS accounts. Unmanaged disks don't support ZRS or GZRS.
+
+For more information about which regions support ZRS, see [Azure regions with availability zones](../../availability-zones/az-overview.md#azure-regions-with-availability-zones).
+
+#### Standard storage accounts
+
+ZRS is supported for all Azure Storage services through standard general-purpose v2 storage accounts, including:
+
+- Azure Blob storage (hot and cool block blobs, non-disk page blobs)
+- Azure Files (all standard tiers: transaction optimized, hot, and cool)
+- Azure Table storage
+- Azure Queue storage
++
+#### Premium block blob accounts
+
+ZRS is supported for premium block blob accounts. For more information about premium block blobs, see [Premium block blob storage accounts](../blobs/storage-blob-block-blob-premium.md).
+
+Premium block blobs are available in a subset of Azure regions:
+
+- (Asia Pacific) Australia East
+- (Asia Pacific) East Asia
+- (Asia Pacific) Japan East
+- (Asia Pacific) Southeast Asia
+- (Europe) France Central
+- (Europe) North Europe
+- (Europe) West Europe
+- (Europe) UK South
+- (North America) East US
+- (North America) East US 2
+- (North America) West US 2
+- (South America) Brazil South
-| Storage account type | Supported regions | Supported services |
-|--|--|--|
-| General-purpose v2<sup>1</sup> | (Africa) South Africa North<br /> (Asia Pacific) Southeast Asia<br /> (Asia Pacific) Australia East<br /> (Asia Pacific) Japan East<br /> (Asia Pacific) Central India<br /> (Canada) Canada Central<br /> (Europe) North Europe<br /> (Europe) West Europe<br /> (Europe) France Central<br /> (Europe) Germany West Central<br /> (Europe) UK South<br /> (South America) Brazil South<br /> (US) Central US<br /> (US) East US<br /> (US) East US 2<br /> (US) South Central US<br /> (US) West US 2 | Block blobs<br /> Page blobs<sup>2</sup><br /> File shares (standard)<br /> Tables<br /> Queues<br /> |
-| Premium block blobs<sup>1</sup> | (Asia) Southeast Asia<br />(Asia Pacific) Australia East<br /> Brazil South<br /> Europe North<br /> Europe West<br /> France Central <br /> Japan East<br /> UK South <br /> US East <br /> US East 2 <br /> US West 2| Premium block blobs only |
-| Premium file shares | Asia Southeast<br /> Australia East<br /> Brazil South<br /> Europe North<br /> Europe West<br /> France Central <br /> Japan East<br /> UK South <br /> US East <br /> US East 2 <br /> US West 2 | Premium files shares only |
+#### Premium file share accounts
-<sup>1</sup> The archive tier is not currently supported for ZRS accounts.<br />
-<sup>2</sup> Azure unmanaged disks should also use LRS. It is possible to create a storage account for Azure unmanaged disks that uses GRS, but it is not recommended due to potential issues with consistency over asynchronous geo-replication. Unmanaged disks don't support ZRS or GZRS.
+ZRS is supported for premium file shares (Azure Files) through the `FileStorage` storage account kind.
-For information about which regions support ZRS, see **Services support by region** in [What are Azure Availability Zones?](../../availability-zones/az-overview.md).
## Redundancy in a secondary region
With GRS or GZRS, the data in the secondary region isn't available for read or w
If the primary region becomes unavailable, you can choose to fail over to the secondary region. After the failover has completed, the secondary region becomes the primary region, and you can again read and write data. For more information on disaster recovery and to learn how to fail over to the secondary region, see [Disaster recovery and storage account failover](storage-disaster-recovery-guidance.md). > [!IMPORTANT]
-> Because data is replicated to the secondary region asynchronously, a failure that affects the primary region may result in data loss if the primary region cannot be recovered. The interval between the most recent writes to the primary region and the last write to the secondary region is known as the recovery point objective (RPO). The RPO indicates the point in time to which data can be recovered. Azure Storage typically has an RPO of less than 15 minutes, although there's currently no SLA on how long it takes to replicate data to the secondary region.
+> Because data is replicated to the secondary region asynchronously, a failure that affects the primary region may result in data loss if the primary region cannot be recovered. The interval between the most recent writes to the primary region and the last write to the secondary region is known as the recovery point objective (RPO). The RPO indicates the point in time to which data can be recovered. The Azure Storage platform typically has an RPO of less than 15 minutes, although there's currently no SLA on how long it takes to replicate data to the secondary region.
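To reason about RPO exposure, you can compare the current time against the account's **Last Sync Time** property (retrieved separately, for example with PowerShell or the REST API). A minimal sketch using hypothetical timestamps:

```python
from datetime import datetime, timedelta, timezone

def replication_lag(last_sync_time, now=None):
    """How far writes to the secondary may lag the primary (the effective RPO exposure)."""
    now = now or datetime.now(timezone.utc)
    return now - last_sync_time

# Hypothetical timestamps for illustration only:
last_sync = datetime(2022, 4, 19, 12, 0, tzinfo=timezone.utc)
now = datetime(2022, 4, 19, 12, 9, tzinfo=timezone.utc)
print(replication_lag(last_sync, now))  # 0:09:00, within the typical 15-minute window
```

Any write committed to the primary after `last_sync` may be lost if a failover occurs before the next sync.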
### Geo-redundant storage
-Geo-redundant storage (GRS) copies your data synchronously three times within a single physical location in the primary region using LRS. It then copies your data asynchronously to a single physical location in a secondary region that is hundreds of miles away from the primary region. GRS offers durability for Azure Storage data objects of at least 99.99999999999999% (16 9's) over a given year.
+Geo-redundant storage (GRS) copies your data synchronously three times within a single physical location in the primary region using LRS. It then copies your data asynchronously to a single physical location in a secondary region that is hundreds of miles away from the primary region. GRS offers durability for storage resources of at least 99.99999999999999% (16 9's) over a given year.
A write operation is first committed to the primary location and replicated using LRS. The update is then replicated asynchronously to the secondary region. When data is written to the secondary location, it's also replicated within that location using LRS.
The following diagram shows how your data is replicated with GZRS or RA-GZRS:
:::image type="content" source="media/storage-redundancy/geo-zone-redundant-storage.png" alt-text="Diagram showing how data is replicated with GZRS or RA-GZRS":::
-Only general-purpose v2 storage accounts support GZRS and RA-GZRS. For more information about storage account types, see [Azure storage account overview](storage-account-overview.md). GZRS and RA-GZRS support block blobs, page blobs (except for VHD disks), files, tables, and queues.
+Only standard general-purpose v2 storage accounts support GZRS. GZRS is supported by all of the Azure Storage services, including:
-GZRS and RA-GZRS are supported in the following regions:
+- Azure Blob storage (hot and cool block blobs, non-disk page blobs)
+- Azure Files (all standard tiers: transaction optimized, hot, and cool)
+- Azure Table storage
+- Azure Queue storage
-- (Asia Pacific) Asia East
-- (Asia Pacific) Asia Southeast
-- (Asia Pacific) Australia East
-- (Asia Pacific) Japan East
-- (Canada) Canada Central
-- (Europe) North Europe
-- (Europe) West Europe
-- (Europe) France Central
-- (Europe) Norway East
-- (Europe) UK South
-- (South America) Brazil South
-- (US) US Central
-- (US) US East
-- (US) US East 2
-- (US) US Government East
-- (US) US South Central
-- (US) US West 2
-- (US) US West 3
-
-For information on pricing, see pricing details for [Blobs](https://azure.microsoft.com/pricing/details/storage/blobs), [Files](https://azure.microsoft.com/pricing/details/storage/files/), [Queues](https://azure.microsoft.com/pricing/details/storage/queues/), and [Tables](https://azure.microsoft.com/pricing/details/storage/tables/).
## Read access to data in the secondary region

Geo-redundant storage (with GRS or GZRS) replicates your data to another physical location in the secondary region to protect against regional outages. However, that data is available to be read only if the customer or Microsoft initiates a failover from the primary to secondary region. When you enable read access to the secondary region, your data is available to be read at all times, including in a situation where the primary region becomes unavailable. For read access to the secondary region, enable read-access geo-redundant storage (RA-GRS) or read-access geo-zone-redundant storage (RA-GZRS).

> [!NOTE]
-> Azure Files does not support read-access geo-redundant storage (RA-GRS) and read-access geo-zone-redundant storage (RA-GZRS).
+> Azure Files does not support read-access geo-redundant storage (RA-GRS) or read-access geo-zone-redundant storage (RA-GZRS).
### Design your applications for read access to the secondary
You can query the value of the **Last Sync Time** property using Azure PowerShel
## Summary of redundancy options
-The tables in the following sections summarize the redundancy options available for Azure Storage
+The tables in the following sections summarize the redundancy options available for Azure Storage.
### Durability and availability parameters
The following table describes key parameters for each redundancy option:
| Parameter | LRS | ZRS | GRS/RA-GRS | GZRS/RA-GZRS |
|:-|:-|:-|:-|:-|
| Percent durability of objects over a given year | at least 99.999999999% (11 9's) | at least 99.9999999999% (12 9's) | at least 99.99999999999999% (16 9's) | at least 99.99999999999999% (16 9's) |
-| Availability for read requests | At least 99.9% (99% for Cool or Archive access tiers) | At least 99.9% (99% for Cool or Archive access tiers) | At least 99.9% (99% for Cool or Archive access tiers) for GRS<br /><br />At least 99.99% (99.9% for Cool or Archive access tiers) for RA-GRS | At least 99.9% (99% for Cool or Archive access tiers) for GZRS<br /><br />At least 99.99% (99.9% for Cool or Archive access tiers) for RA-GZRS |
+| Availability for read requests | At least 99.9% (99% for Cool or Archive access tiers) | At least 99.9% (99% for Cool or Archive access tiers) | At least 99.9% (99% for Cool or Archive access tiers) for GRS<br/><br/>At least 99.99% (99.9% for Cool or Archive access tiers) for RA-GRS | At least 99.9% (99% for Cool or Archive access tiers) for GZRS<br/><br/>At least 99.99% (99.9% for Cool or Archive access tiers) for RA-GZRS |
| Availability for write requests | At least 99.9% (99% for Cool or Archive access tiers) | At least 99.9% (99% for Cool or Archive access tiers) | At least 99.9% (99% for Cool or Archive access tiers) | At least 99.9% (99% for Cool or Archive access tiers) |
| Number of copies of data maintained on separate nodes | Three copies within a single region | Three copies across separate availability zones within a single region | Six copies total, including three in the primary region and three in the secondary region | Six copies total, including three across separate availability zones in the primary region and three locally redundant copies in the secondary region |
The following table shows which redundancy options are supported by each Azure S
| LRS | ZRS | GRS | RA-GRS | GZRS | RA-GZRS |
|||||||
-| Blob storage <br />Queue storage <br />Table storage <br />Azure Files<sup>1,</sup><sup>2</sup> <br />Azure managed disks | Blob storage <br />Queue storage <br />Table storage <br />Azure Files<sup>1,</sup><sup>2</sup> <br />Azure managed disks<sup>3</sup> | Blob storage <br />Queue storage <br />Table storage <br />Azure Files<sup>1</sup> | Blob storage <br />Queue storage <br />Table storage <br /> | Blob storage <br />Queue storage <br />Table storage <br />Azure Files<sup>1</sup> | Blob storage <br />Queue storage <br />Table storage <br /> |
+| Blob storage <br/>Queue storage <br/>Table storage <br/>Azure Files<sup>1,</sup><sup>2</sup> <br/>Azure managed disks | Blob storage <br/>Queue storage <br/>Table storage <br/>Azure Files<sup>1,</sup><sup>2</sup> <br/>Azure managed disks<sup>3</sup> | Blob storage <br/>Queue storage <br/>Table storage <br/>Azure Files<sup>1</sup> | Blob storage <br/>Queue storage <br/>Table storage <br/> | Blob storage <br/>Queue storage <br/>Table storage <br/>Azure Files<sup>1</sup> | Blob storage <br/>Queue storage <br/>Table storage <br/> |
-<sup>1</sup> Standard file shares are supported on LRS and ZRS. Standard file shares are supported on GRS and GZRS as long as they are less than or equal to five TiB in size.<br />
-<sup>2</sup> Premium file shares are supported on LRS and ZRS.<br />
-<sup>3</sup> ZRS managed disks have some limitations, see the [Limitations](../../virtual-machines/disks-redundancy.md#limitations) section of the redundancy options for managed disks article for details.<br />
+<sup>1</sup> Standard file shares are supported on LRS and ZRS. Standard file shares are supported on GRS and GZRS as long as they are less than or equal to five TiB in size.<br/>
+<sup>2</sup> Premium file shares are supported on LRS and ZRS.<br/>
+<sup>3</sup> ZRS managed disks have certain limitations. See the [Limitations](../../virtual-machines/disks-redundancy.md#limitations) section of the redundancy options for managed disks article for details.<br/>
### Supported storage account types
-The following table shows which redundancy options are supported by each type of storage account. For information for storage account types, see [Storage account overview](storage-account-overview.md).
+The following table shows which redundancy options are supported for each type of storage account. For more information about storage account types, see [Storage account overview](storage-account-overview.md).
-| LRS | ZRS | GRS/RA-GRS | GZRS/RA-GZRS |
-|:-|:-|:-|:-|
-| General-purpose v2<sup>1</sup><br /> General-purpose v1<br /> Premium block blob<sup>1</sup><br /> Legacy blob<br /> Premium file shares | General-purpose v2<sup>1</sup><br /> Premium block blobs<sup>1</sup><br /> Premium file shares | General-purpose v2<sup>1</sup><br /> General-purpose v1<br /> Legacy blob | General-purpose v2<sup>1</sup> |
+| Storage account types | LRS | ZRS | GRS/RA-GRS | GZRS/RA-GZRS |
+|:-|:-|:-|:-|:-|
+| **Recommended** | Standard general-purpose v2 (`StorageV2`)<sup>1</sup><br/><br/> Premium block blobs (`BlockBlobStorage`)<sup>1</sup><br/><br/> Premium file shares (`FileStorage`) | Standard general-purpose v2 (`StorageV2`)<sup>1</sup><br/><br/> Premium block blobs (`BlockBlobStorage`)<sup>1</sup><br/><br/> Premium file shares (`FileStorage`) | Standard general-purpose v2 (`StorageV2`)<sup>1</sup> | Standard general-purpose v2 (`StorageV2`)<sup>1</sup> |
+| **Legacy** | Standard general-purpose v1 (`Storage`)<br/><br/> Legacy blob (`BlobStorage`) | N/A | Standard general-purpose v1 (`Storage`)<br/><br/> Legacy blob (`BlobStorage`) | N/A |
<sup>1</sup> Accounts of this type with a hierarchical namespace enabled also support the specified redundancy option.
-All data for all storage accounts is copied according to the redundancy option for the storage account. Objects including block blobs, append blobs, page blobs, queues, tables, and files are copied. Data in all tiers, including the archive tier, is copied. For more information about blob tiers, see [Hot, Cool, and Archive access tiers for blob data](../blobs/access-tiers-overview.md).
+All data for all storage accounts is copied according to the redundancy option for the storage account. Objects including block blobs, append blobs, page blobs, queues, tables, and files are copied.
+
+Data in all tiers, including the Archive tier, is copied. For more information about blob tiers, see [Hot, Cool, and Archive access tiers for blob data](../blobs/access-tiers-overview.md).
For pricing information for each redundancy option, see [Azure Storage pricing](https://azure.microsoft.com/pricing/details/storage/).
Azure Storage regularly verifies the integrity of data stored using cyclic redundancy checks (CRCs).
## See also
-- [Check the Last Sync Time property for a storage account](last-sync-time-get.md)
- [Change the redundancy option for a storage account](redundancy-migration.md)
-- [Use geo-redundancy to design highly available applications](geo-redundant-design.md)
-- [Disaster recovery and storage account failover](storage-disaster-recovery-guidance.md)
+- Geo replication (GRS/GZRS/RA-GRS/RA-GZRS)
+ - [Check the Last Sync Time property for a storage account](last-sync-time-get.md)
+ - [Disaster recovery and storage account failover](storage-disaster-recovery-guidance.md)
+- Pricing
+ - [Blob Storage](https://azure.microsoft.com/pricing/details/storage/blobs)
+ - [Azure Files](https://azure.microsoft.com/pricing/details/storage/files/)
+ - [Table Storage](https://azure.microsoft.com/pricing/details/storage/tables/)
+ - [Queue Storage](https://azure.microsoft.com/pricing/details/storage/queues/)
storage Storage Use Azcopy Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-configure.md
Use the `azcopy env` command to check the current value of this variable. If the value i
By default, the AzCopy log level is set to `INFO`. To reduce log verbosity and save disk space, override this setting by using the `--log-level` option.
-Available log levels are: `NONE`, `DEBUG`, `INFO`, `WARNING`, `ERROR`, `PANIC`, and `FATAL`.
+Available log levels are: `DEBUG`, `INFO`, `WARNING`, `ERROR`, and `NONE`.
## Remove plan and log files
storage Storage Use Azcopy Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-files.md
You can synchronize the contents of a local file system with a file share or syn
> Currently, this scenario is supported for accounts that have enabled hierarchical namespace via the blob endpoint.

> [!Warning]
-> AzCopy sync is supported but not fully recommended for Azure Files. AzCopy sync doesn't support differential copies at scale, and some file fidelity might be lost. To learn more, see [Migrate to Azure file shares](https://docs.microsoft.com/azure/storage/files/storage-files-migration-overview#file-copy-tools).
+> AzCopy sync is supported but not fully recommended for Azure Files. AzCopy sync doesn't support differential copies at scale, and some file fidelity might be lost. To learn more, see [Migrate to Azure file shares](../files/storage-files-migration-overview.md#file-copy-tools).
### Guidelines
See these articles to configure settings, optimize performance, and troubleshoot
- [AzCopy configuration settings](storage-ref-azcopy-configuration-settings.md)
- [Optimize the performance of AzCopy](storage-use-azcopy-optimize.md)
-- [Troubleshoot AzCopy V10 issues in Azure Storage by using log files](storage-use-azcopy-configure.md)
+- [Troubleshoot AzCopy V10 issues in Azure Storage by using log files](storage-use-azcopy-configure.md)
storage Files Smb Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-smb-protocol.md
Azure Files exposes the following settings:
- **SMB versions**: Which versions of SMB are allowed. Supported protocol versions are SMB 3.1.1, SMB 3.0, and SMB 2.1. By default, all SMB versions are allowed, although SMB 2.1 is disallowed if "require secure transit" is enabled, since SMB 2.1 does not support encryption in transit.
- **Authentication methods**: Which SMB authentication methods are allowed. Supported authentication methods are NTLMv2 and Kerberos. By default, all authentication methods are allowed. Removing NTLMv2 disallows using the storage account key to mount the Azure file share.
-- **Kerberos ticket encryption**: Which encryption algorithms are allowed. Supported encryption algorithms are RC4-HMAC and AES-256.
+- **Kerberos ticket encryption**: Which encryption algorithms are allowed. Supported encryption algorithms are AES-256 (recommended) and RC4-HMAC.
- **SMB channel encryption**: Which SMB channel encryption algorithms are allowed. Supported encryption algorithms are AES-256-GCM, AES-128-GCM, and AES-128-CCM.

# [Portal](#tab/azure-portal)
storage Storage Files Identity Ad Ds Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-enable.md
Set-ExecutionPolicy -ExecutionPolicy Unrestricted -Scope CurrentUser
# Import AzFilesHybrid module
Import-Module -Name AzFilesHybrid
-# Login with an Azure AD credential that has either storage account owner or contributer Azure role assignment
+# Login with an Azure AD credential that has either storage account owner or contributor Azure role assignment
# If you are logging into an Azure environment other than Public (ex. AzureUSGovernment) you will need to specify that.
# See https://docs.microsoft.com/azure/azure-government/documentation-government-get-started-connect-with-ps
# for more information.
$StorageAccountName = "<storage-account-name-here>"
$DomainAccountType = "<ComputerAccount|ServiceLogonAccount>" # Default is set as ComputerAccount
# If you don't provide the OU name as an input parameter, the AD identity that represents the storage account is created under the root directory.
$OuDistinguishedName = "<ou-distinguishedname-here>"
-# Specify the encryption agorithm used for Kerberos authentication. Default is configured as "'RC4','AES256'" which supports both 'RC4' and 'AES256' encryption.
+# Specify the encryption algorithm used for Kerberos authentication. AES256 is recommended. Default is configured as "'RC4','AES256'" which supports both 'RC4' and 'AES256' encryption.
$EncryptionType = "<AES256|RC4|AES256,RC4>"

# Select the target subscription for the current session
Join-AzStorageAccount `
    -OrganizationalUnitDistinguishedName $OuDistinguishedName `
    -EncryptionType $EncryptionType
-#Run the command below if you want to enable AES 256 authentication. If you plan to use RC4, you can skip this step.
+#Run the command below to enable AES256 encryption. If you plan to use RC4, you can skip this step.
Update-AzStorageAccountAuthForAES256 -ResourceGroupName $ResourceGroupName -StorageAccountName $StorageAccountName

#You can run the Debug-AzStorageAccountAuth cmdlet to conduct a set of basic checks on your AD configuration with the logged on AD user. This cmdlet is supported on AzFilesHybrid v0.1.2+ version. For more details on the checks performed in this cmdlet, see Azure Files Windows troubleshooting guide.
storage Storage Files Identity Auth Active Directory Domain Service Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-active-directory-domain-service-enable.md
Title: Use Azure AD Domain Services to authorize access to file data over SMB description: Learn how to enable identity-based authentication over Server Message Block (SMB) for Azure Files through Azure Active Directory Domain Services. Your domain-joined Windows virtual machines (VMs) can then access Azure file shares by using Azure AD credentials. - Previously updated : 01/14/2022 Last updated : 04/08/2022
If you are new to Azure file shares, we recommend reading our [planning guide](storage-files-planning.md) before reading the following series of articles.

> [!NOTE]
-> Azure Files supports Kerberos authentication with Azure AD DS with RC4-HMAC and AES-256 encryption.
+> Azure Files supports Kerberos authentication with Azure AD DS with RC4-HMAC and AES-256 encryption. We recommend using AES-256.
>
> Azure Files supports authentication for Azure AD DS with full synchronization with Azure AD. If you have enabled scoped synchronization in Azure AD DS, which syncs only a limited set of identities from Azure AD, authentication and authorization are not supported.
The following diagram illustrates the end-to-end workflow for enabling Azure AD
![Diagram showing Azure AD over SMB for Azure Files workflow](media/storage-files-active-directory-enable/azure-active-directory-over-smb-workflow.png)
-## (Optional) Use AES 256 encryption
+## Recommended: Use AES-256 encryption
-By default, Azure AD DS authentication uses Kerberos RC4 encryption. To use Kerberos AES256 instead, follow these steps:
+By default, Azure AD DS authentication uses Kerberos RC4 encryption. We recommend configuring it to use Kerberos AES-256 encryption instead by following these steps:
As an Azure AD DS user with the required permissions (typically, members of the **AAD DC Administrators** group will have the necessary permissions), open the Azure cloud shell.
storage Storage Files Identity Auth Active Directory Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-active-directory-enable.md
If you are new to Azure file shares, we recommend reading our [planning guide](s
- AD DS Identities used for Azure Files on-premises AD DS authentication must be synced to Azure AD or use a default share-level permission. Password hash synchronization is optional.
- Supports Azure file shares managed by Azure File Sync.
-- Supports Kerberos authentication with AD with RC4-HMAC and [AES 256 encryption](./storage-troubleshoot-windows-file-connection-problems.md#azure-files-on-premises-ad-ds-authentication-support-for-aes-256-kerberos-encryption). AES 256 encryption support is currently limited to storage accounts with names <= 15 characters in length. AES 128 Kerberos encryption is not yet supported.
+- Supports Kerberos authentication with AD with [AES 256 encryption](./storage-troubleshoot-windows-file-connection-problems.md#azure-files-on-premises-ad-ds-authentication-support-for-aes-256-kerberos-encryption) (recommended) and RC4-HMAC. AES 256 encryption support is currently limited to storage accounts with names <= 15 characters in length. AES 128 Kerberos encryption is not yet supported.
- Supports single sign-on experience.
- Only supported on clients running on OS versions newer than Windows 7 or Windows Server 2008 R2.
- Only supported against the AD forest that the storage account is registered to. You can only access Azure file shares with the AD DS credentials from a single forest by default. If you need to access your Azure file share from a different forest, make sure that you have the proper forest trust configured. See the [FAQ](storage-files-faq.md#ad-ds--azure-ad-ds-authentication) for details.
storage Storage Files Netapp Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-netapp-comparison.md
Most workloads that require cloud file storage work well on either Azure Files or Azure NetApp Files.
| Redundancy | Premium<br><ul><li>LRS</li><li>ZRS</li></ul><br>Standard<br><ul><li>LRS</li><li>ZRS</li><li>GRS</li><li>GZRS</li></ul><br> To learn more, see [redundancy](./storage-files-planning.md#redundancy). | All tiers<br><ul><li>Built-in local HA</li><li>[Cross-region replication](../../azure-netapp-files/cross-region-replication-introduction.md)</li></ul> |
| Service-Level Agreement (SLA)<br><br> Note that SLAs for Azure Files and Azure NetApp Files are calculated differently. | [SLA for Azure Files](https://azure.microsoft.com/support/legal/sla/storage/) | [SLA for Azure NetApp Files](https://azure.microsoft.com/support/legal/sla/netapp) |
| Identity-Based Authentication and Authorization | SMB<br><ul><li>Active Directory Domain Services (AD DS)</li><li>Azure Active Directory Domain Services (Azure AD DS)</li></ul><br> Note that identity-based authentication is only supported when using SMB protocol. To learn more, see [FAQ](./storage-files-faq.md#security-authentication-and-access-control). | SMB<br><ul><li>Active Directory Domain Services (AD DS)</li><li>Azure Active Directory Domain Services (Azure AD DS)</li></ul><br> NFS/SMB dual protocol<ul><li>ADDS/LDAP integration</li></ul><br>NFSv3/NFSv4.1<ul><li>ADDS/LDAP integration with NFS extended groups [(preview)](../../azure-netapp-files/configure-ldap-extended-groups.md)</li></ul><br> To learn more, see [Azure NetApp Files NFS FAQ](../../azure-netapp-files/faq-nfs.md) and [Azure NetApp Files SMB FAQ](../../azure-netapp-files/faq-smb.md). |
-| Encryption | All protocols<br><ul><li>Encryption at rest (AES 256) with customer or Microsoft-managed keys</li></ul><br>SMB<br><ul><li>Kerberos encryption using AES 256 or RC4-HMAC</li><li>Encryption in transit</li></ul><br>REST<br><ul><li>Encryption in transit</li></ul><br> To learn more, see [Security and networking](files-nfs-protocol.md#security-and-networking). | All protocols<br><ul><li>Encryption at rest (AES 256) with Microsoft-managed keys </li></ul><br>SMB<ul><li>Encryption in transit using AES-CCM (SMB 3.0) and AES-GCM (SMB 3.1.1)</li></ul><br>NFS 4.1<ul><li>Encryption in transit using Kerberos with AES 256</li></ul><br> To learn more, see [security FAQ](../../azure-netapp-files/faq-security.md). |
+| Encryption | All protocols<br><ul><li>Encryption at rest (AES-256) with customer or Microsoft-managed keys</li></ul><br>SMB<br><ul><li>Kerberos encryption using AES-256 (recommended) or RC4-HMAC</li><li>Encryption in transit</li></ul><br>REST<br><ul><li>Encryption in transit</li></ul><br> To learn more, see [Security and networking](files-nfs-protocol.md#security-and-networking). | All protocols<br><ul><li>Encryption at rest (AES-256) with Microsoft-managed keys </li></ul><br>SMB<ul><li>Encryption in transit using AES-CCM (SMB 3.0) and AES-GCM (SMB 3.1.1)</li></ul><br>NFS 4.1<ul><li>Encryption in transit using Kerberos with AES-256</li></ul><br> To learn more, see [security FAQ](../../azure-netapp-files/faq-security.md). |
| Access Options | <ul><li>Internet</li><li>Secure VNet access</li><li>VPN Gateway</li><li>ExpressRoute</li><li>Azure File Sync</li></ul><br> To learn more, see [network considerations](./storage-files-networking-overview.md). | <ul><li>Secure VNet access</li><li>VPN Gateway</li><li>ExpressRoute</li><li>[Global File Cache](https://cloud.netapp.com/global-file-cache/azure)</li><li>[HPC Cache](../../hpc-cache/hpc-cache-overview.md)</li></ul><br> To learn more, see [network considerations](../../azure-netapp-files/azure-netapp-files-network-topologies.md). |
| Data Protection | <ul><li>Incremental snapshots</li><li>File/directory user self-restore</li><li>Restore to new location</li><li>In-place revert</li><li>Share-level soft delete</li><li>Azure Backup integration</li></ul><br> To learn more, see [Azure Files enhances data protection capabilities](https://azure.microsoft.com/blog/azure-files-enhances-data-protection-capabilities/). | <ul><li>Snapshots (255/volume)</li><li>File/directory user self-restore</li><li>Restore to new volume</li><li>In-place revert</li><li>[Cross-Region Replication](../../azure-netapp-files/cross-region-replication-introduction.md)</li></ul><br> To learn more, see [How Azure NetApp Files snapshots work](../../azure-netapp-files/snapshots-introduction.md). |
| Migration Tools | <ul><li>Azure Data Box</li><li>Azure File Sync</li><li>Storage Migration Service</li><li>AzCopy</li><li>Robocopy</li></ul><br> To learn more, see [Migrate to Azure file shares](./storage-files-migration-overview.md). | <ul><li>[Global File Cache](https://cloud.netapp.com/global-file-cache/azure)</li><li>[CloudSync](https://cloud.netapp.com/cloud-sync-service), [XCP](https://xcp.netapp.com/)</li><li>Storage Migration Service</li><li>AzCopy</li><li>Robocopy</li><li>Application-based (for example, HSR, Data Guard, AOAG)</li></ul> |
storage Storage Files Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-planning.md
For more information, see [Introduction to Microsoft Defender for Storage](../..
## Redundancy
[!INCLUDE [storage-files-redundancy-overview](../../../includes/storage-files-redundancy-overview.md)]
+
+### Standard ZRS availability
+
+### Premium ZRS availability
+
+### Standard GZRS availability
+
## Migration
In many cases, you will not be establishing a net new file share for your organization, but instead migrating an existing file share from an on-premises file server or NAS device to Azure Files. Picking the right migration strategy and tool for your scenario is important for the success of your migration.
storage Storage Troubleshoot Windows File Connection Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-troubleshoot-windows-file-connection-problems.md
This error may occur if a domain controller that holds the RID Master FSMO role
This error is most likely triggered by a syntax error in the Join-AzStorageAccountforAuth command. Check the command for misspellings or syntax errors and verify that the latest version of the AzFilesHybrid module (https://github.com/Azure-Samples/azure-files-samples/releases) is installed.
-## Azure Files on-premises AD DS Authentication support for AES 256 Kerberos encryption
+## Azure Files on-premises AD DS Authentication support for AES-256 Kerberos encryption
-Azure Files supports AES 256 Kerberos encryption support for AD DS authentication with the [AzFilesHybrid module v0.2.2](https://github.com/Azure-Samples/azure-files-samples/releases). If you have enabled AD DS authentication with a module version lower than v0.2.2, you will need to download the latest AzFilesHybrid module (v0.2.2+) and run the PowerShell below. If you have not enabled AD DS authentication on your storage account yet, you can follow this [guidance](./storage-files-identity-ad-ds-enable.md#option-one-recommended-use-azfileshybrid-powershell-module) for enablement.
+Azure Files supports AES-256 Kerberos encryption for AD DS authentication with the [AzFilesHybrid module v0.2.2](https://github.com/Azure-Samples/azure-files-samples/releases). AES-256 is the recommended authentication method. If you have enabled AD DS authentication with a module version lower than v0.2.2, you will need to download the latest AzFilesHybrid module (v0.2.2+) and run the PowerShell below. If you have not enabled AD DS authentication on your storage account yet, you can follow this [guidance](./storage-files-identity-ad-ds-enable.md#option-one-recommended-use-azfileshybrid-powershell-module) for enablement.
```PowerShell $ResourceGroupName = "<resource-group-name-here>"
synapse-analytics Apache Spark Cdm Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/data-sources/apache-spark-cdm-connector.md
There are three modes of authentication that can be used with the Spark CDM Connector.
### Credential pass-through
-In Synapse, the Spark CDM Connector supports use of [Managed identities for Azure resource](/azure/active-directory/managed-identities-azure-resources/overview) to mediate access to the Azure datalake storage account containing the CDM folder. A managed identity is [automatically created for every Synapse workspace](/cli/azure/synapse/workspace/managed-identity). The connector uses the managed identity of the workspace that contains the notebook in which the connector is called to authenticate to the storage accounts being addressed.
+In Synapse, the Spark CDM Connector supports use of [Managed identities for Azure resource](../../../active-directory/managed-identities-azure-resources/overview.md) to mediate access to the Azure datalake storage account containing the CDM folder. A managed identity is [automatically created for every Synapse workspace](/cli/azure/synapse/workspace/managed-identity). The connector uses the managed identity of the workspace that contains the notebook in which the connector is called to authenticate to the storage accounts being addressed.
You must ensure the identity used is granted access to the appropriate storage accounts. Grant **Storage Blob Data Contributor** to allow the library to write to CDM folders, or **Storage Blob Data Reader** to allow only read access. In both cases, no extra connector options are required.
SAS token credential authentication to storage accounts is an extra option for a
### Credential-based access control options
-As an alternative to using a managed identity or a user identity, explicit credentials can be provided to enable the Spark CDM connector to access data. In Azure Active Directory, [create an App Registration](/azure/active-directory/develop/quickstart-register-app) and then grant this App Registration access to the storage account using either of the following roles: **Storage Blob Data Contributor** to allow the library to write to CDM folders, or **Storage Blob Data Reader** to allow only read.
+As an alternative to using a managed identity or a user identity, explicit credentials can be provided to enable the Spark CDM connector to access data. In Azure Active Directory, [create an App Registration](../../../active-directory/develop/quickstart-register-app.md) and then grant this App Registration access to the storage account using either of the following roles: **Storage Blob Data Contributor** to allow the library to write to CDM folders, or **Storage Blob Data Reader** to allow only read.
Once permissions are created, you can pass the app ID, app key, and tenant ID to the connector on each call to it using the options below. It's recommended to use Azure Key Vault to secure these values to ensure they aren't stored in clear text in your notebook file.
The following features aren't yet supported:
You can now look at the other Apache Spark connectors:

* [Apache Spark Kusto connector](apache-spark-kusto-connector.md)
-* [Apache Spark SQL connector](apache-spark-sql-connector.md)
+* [Apache Spark SQL connector](apache-spark-sql-connector.md)
synapse-analytics Resources Self Help Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md
The error *The query references an object that is not supported in distributed p
### `WaitIOCompletion` call failed
-The error message *WaitIOCompletion call failed* indicates that the query failed while waiting to complete IO operation that reads data from the remote storage (Azure Data Lake).
-Make sure that your storage is placed in the same region as serverless SQL pool, and that you are not using `archive access` storage that is paused by default. Check the storage metrics and verify that there are no other workloads on the storage layer (uploading new files) that could saturate IO requests.
+The error message `WaitIOCompletion call failed` indicates that the query failed while waiting for an I/O operation that reads data from the remote storage (Azure Data Lake) to complete.
+
+The error message has the following pattern:
+
+```
+Error handling external file: 'WaitIOCompletion call failed. HRESULT = ???'. File/External table name...
+```
+
+Make sure that your storage is placed in the same region as the serverless SQL pool. Check the storage metrics and verify that there are no other workloads on the storage layer (such as uploading new files) that could saturate I/O requests.
+
+The HRESULT field contains the result code. The following are the most common error codes and potential solutions:
+
+### [0x80070002](#tab/x80070002)
+
+This error code means the source file is not in storage.
+
+There are several reasons why this can happen:
+
+- The file was deleted by another application.
+ - A common scenario: the query execution starts, it enumerates the files and the files are found. Later, during the query execution, a file is deleted (for example by Databricks, Spark or ADF). The query fails because the file is not found.
+ - This issue can also occur with delta format. The query might succeed on retry because there is a new version of the table and the deleted file is not queried again.
+
+- Invalid execution plan cached
+ - As a temporary mitigation, run the command `DBCC FREEPROCCACHE`. If the problem persists create a support ticket.
++
+### [0x80070005](#tab/x80070005)
+
+This error can occur when the authentication method is User Identity, also known as "Azure AD pass-through", and the Azure AD access token expires.
+
+The error message might also resemble:
+
+```
+File {path} cannot be opened because it does not exist or it is used by another process.
+```
+
+- If an Azure AD login has a connection open for more than 1 hour during query execution, any query that relies on Azure AD fails. This includes querying storage using Azure AD pass-through and statements that interact with Azure AD (like CREATE EXTERNAL PROVIDER). This affects tools that keep connections open, like the query editor in SSMS and ADS. Tools that open new connections to execute a query, like Synapse Studio, are not affected.
+
+- The Azure AD authentication token might be cached by client applications. For example, Power BI caches an Azure Active Directory token and reuses the same token for one hour. Long-running queries might fail if the token expires during execution.
+
+Consider the following mitigations:
+
+- Restart the client application to obtain a new Azure Active Directory token.
+- Consider switching to:
+ - [Service Principal](develop-storage-files-storage-access-control.md?tabs=service-principal#supported-storage-authorization-types)
+ - [Managed identity](develop-storage-files-storage-access-control.md?tabs=managed-identity#supported-storage-authorization-types)
+ - or [Shared access signature](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#supported-storage-authorization-types)
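Client applications that cache a token can avoid mid-query expiry by checking the token's remaining lifetime before starting a long-running query. A minimal, language-agnostic sketch (the function name and the safety margin are illustrative, not part of any Azure SDK):

```python
import time

def should_refresh(expires_on: float, expected_runtime_s: float, margin_s: float = 300) -> bool:
    """Illustrative helper: return True if a cached token would expire before a
    query of the given expected duration (plus a safety margin) completes.
    `expires_on` is the token expiry as a Unix timestamp."""
    return time.time() + expected_runtime_s + margin_s >= expires_on
```

A caller would refresh the token (for example, by reauthenticating or opening a new connection) whenever this returns True, instead of starting a query that is likely to outlive the token.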
++
+### [0x80070008](#tab/x80070008)
+
+This error message can occur when the serverless SQL pool is experiencing resource constraints, or if there was a transient platform issue.
+
+- Transient issues:
+ - This error can occur when Azure detects a potential platform issue that results in a change in topology to keep the service in a healthy state.
+ - This type of issue happens infrequently and is transient. Retry the query.
+
+- High concurrency or query complexity:
+  - Serverless SQL doesn't impose a maximum limit on query concurrency; the effective limit depends on the query complexity and the amount of data scanned.
+ - One serverless SQL pool can concurrently handle 1000 active sessions that are executing lightweight queries, but the numbers will drop if the queries are more complex or scan a larger amount of data. For more information, see [Concurrency limits for Serverless SQL Pool](resources-self-help-sql-on-demand.md#constraints).
+ - Try reducing the number of queries executing simultaneously or the query complexity.
+
+If the issue is non-transient or you confirmed the problem is not related to high concurrency or query complexity, create a support ticket.
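Because this failure is transient, the usual client-side mitigation is a bounded retry with backoff. A minimal sketch, assuming a hypothetical driver-specific exception type (`TransientQueryError` is illustrative, not a Synapse API):

```python
import time

class TransientQueryError(Exception):
    """Placeholder for whatever transient error your client driver raises."""

def run_with_retry(execute_query, attempts=3, base_delay_s=1.0):
    """Run `execute_query` (a zero-argument callable), retrying transient
    failures with exponential backoff. Re-raises after the final attempt."""
    for attempt in range(attempts):
        try:
            return execute_query()
        except TransientQueryError:
            if attempt == attempts - 1:
                raise
            # Back off: 1s, 2s, 4s, ... before the next attempt.
            time.sleep(base_delay_s * (2 ** attempt))
```

In practice `execute_query` would submit the T-SQL statement through your client library; the retry wrapper itself is generic.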
++
+### [0x8007000C](#tab/x8007000C)
+
+This error code occurs when a query is executing and the source files are modified at the same time.
+The default behavior is to terminate the query execution with an error message.
+
+The error message returned can also have the following format:
+
+```
+"Cannot bulk load because the file 'https://????.dfs.core.windows.net/????' could not be opened. Operating system error code 12 (The access code is invalid.)."
+```
+
+If the source files are updated while the query is executing, it can cause inconsistent reads. For example, half of a row might be read from the old version of the data and the other half from the newer version.
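The inconsistent read can be pictured as a reader that consumes part of a file before a writer replaces it mid-read; a self-contained sketch (the row contents are made up for illustration):

```python
import io

# Illustrative only: a reader that starts against one version of a file and
# finishes against another sees a "torn" row mixing both versions.
old = b"100,OLD_VALUE_A\n"   # file contents when the read starts
new = b"999,NEW_VALUE_B\n"   # file contents after an overwrite mid-read

half = len(old) // 2
first_half = io.BytesIO(old).read(half)   # read begins against the old version

reader = io.BytesIO(new)                  # the file has been overwritten
reader.seek(half)
second_half = reader.read()               # read finishes against the new version

torn_row = first_half + second_half       # row mixing both versions
```

The reassembled `torn_row` matches neither the old nor the new file contents, which is exactly the inconsistency that terminating the query (or opting into ALLOW_INCONSISTENT_READS for CSV) guards against.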
++
+### CSV files
+
+If the problem occurs when reading CSV files, you can allow appendable files to be queried and updated at the same time by using the ALLOW_INCONSISTENT_READS option.
+
+More information about syntax and usage:
+
+ - [OPENROWSET syntax](query-single-csv-file.md#querying-appendable-files)
+ ROWSET_OPTIONS = '{"READ_OPTIONS":["ALLOW_INCONSISTENT_READS"]}'
+
+ - [External Tables syntax](create-use-external-tables.md#external-table-on-appendable-files)
+ TABLE_OPTIONS = N'{"READ_OPTIONS":["ALLOW_INCONSISTENT_READS"]}'
+
+### Parquet files
+
+When reading Parquet files, the query will not recover automatically. It needs to be retried by the client application.
+
+### Synapse Link for Dataverse
+
+This error can occur when reading data from Synapse Link for Dataverse while Synapse Link is syncing data to the lake and the data is being queried at the same time. The product group aims to improve this behavior.
++
+### [0x800700A1](#tab/x800700A1)
+
+Check whether the storage account being accessed uses the Archive access tier.
+
+The Archive access tier is an offline tier. While a blob is in the Archive tier, it can't be read or modified.
+
+To read or download a blob in the Archive tier, rehydrate it to an online tier: [Archive access tier](/azure/storage/blobs/access-tiers-overview#archive-access-tier)
++
+### [0x80070057](#tab/x80070057)
+
+This error can occur when the authentication method is user identity, also known as "Azure AD pass-through", and the Azure Active Directory access token expires.
+
+The error message might also resemble the following:
+
+```
+File {path} cannot be opened because it does not exist or it is used by another process.
+```
+
+- If an Azure AD login has a connection open for more than one hour during query execution, any query that relies on Azure AD fails. This includes querying storage using Azure AD pass-through and statements that interact with Azure AD (like CREATE EXTERNAL PROVIDER). This affects tools that keep connections open, like the query editor in SQL Server Management Studio (SSMS) and Azure Data Studio (ADS). Tools that open new connections to execute a query, like Synapse Studio, are not affected.
+
+- The Azure AD authentication token might be cached by client applications. For example, Power BI caches an Azure AD token and reuses it for one hour. Long-running queries might fail if the token expires in the middle of execution.
+
+Consider the following mitigations to resolve the issue:
+
+- Restart the client application to obtain a new Azure Active Directory token.
+- Consider switching to:
+ - [Service Principal](develop-storage-files-storage-access-control.md?tabs=service-principal#supported-storage-authorization-types)
+ - [Managed identity](develop-storage-files-storage-access-control.md?tabs=managed-identity#supported-storage-authorization-types)
+ - or [Shared access signature](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#supported-storage-authorization-types)
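+
+For example, switching to a shared access signature could be sketched as follows with a server-scoped credential. The storage URL and SAS token are placeholders; substitute your own values:
+
+```sql
+-- The credential name must match the storage URL being queried.
+-- The SECRET value is a placeholder SAS token, not a real signature.
+CREATE CREDENTIAL [https://myaccount.dfs.core.windows.net/mycontainer]
+WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
+SECRET = 'sv=2022-11-02&ss=b&srt=sco&sp=rl&sig=<signature>';
+```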
+
+
+### [0x80072EE7](#tab/x80072EE7)
+
+This error code can occur when there is a transient issue in the serverless SQL pool.
+It happens infrequently and is temporary by nature. Retry the query.
+
+If the issue persists, create a support ticket.
+++
### Incorrect syntax near 'NOT'
The error *Incorrect syntax near 'NOT'* indicates that there are some external t
If your query returns `NULL` values instead of partitioning columns or cannot find the partition columns, you have a few possible troubleshooting steps:
- If you are using tables to query partitioned data set, note that tables do not support partitioning. Replace the table with the [partitioned views](create-use-views.md#partitioned-views).
-- If you are using the [partitioned views](create-use-views.md#partitioned-views) with the OPENROWSET that [queries partitioned files using the FILEPATH() function](query-specific-files.md), make sure that you have correctly specified wildcard pattern in the location that that you have used the proper index for referencing the wildcard.
+- If you are using the [partitioned views](create-use-views.md#partitioned-views) with the OPENROWSET that [queries partitioned files using the FILEPATH() function](query-specific-files.md), make sure that you have correctly specified wildcard pattern in the location and that you have used the proper index for referencing the wildcard.
- If you are querying the files directly in the partitioned folder, note that the partitioning columns are not part of the file columns. The partitioning values are placed in the folder paths and not the files. Therefore, the files do not contain the partitioning values.

### Inserting value to batch for column type DATETIME2 failed
-The error *Inserting value to batch for column type DATETIME2 failed* indicates that the serverless pool cannot read the date values form the underlying files. The datetime value stored in Parquet/Delta Lake file cannot be represented as `DATETIME2` column. Inspect the minimum value in the file using spark and check are there some dates less than 0001-01-03. If you stored the files using the Spark 2.4, the date time values before are written using the Julian calendar that is not aligned with the Gregorian Proleptic calendar used in serverless SQL pools. There might be a 2-days difference between Julian calendar user to write the values in Parquet (in some Spark versions) and Gregorian Proleptic calendar used in serverless SQL pool, which might cause conversion to invalid (negative) date value.
+The error *Inserting value to batch for column type DATETIME2 failed* indicates that the serverless pool cannot read the date values from the underlying files. The datetime value stored in the Parquet/Delta Lake file cannot be represented as a `DATETIME2` column. Inspect the minimum value in the file using Spark, and check whether there are dates earlier than 0001-01-03. If you stored the files using Spark 2.4, the datetime values before that date are written using the Julian calendar, which is not aligned with the proleptic Gregorian calendar used in serverless SQL pools. There might be a two-day difference between the Julian calendar used to write the values in Parquet (in some Spark versions) and the proleptic Gregorian calendar used in serverless SQL pool, which might cause conversion to an invalid (negative) date value.
Try to use Spark to update these values because they are treated as invalid date values in SQL. The following sample shows how to update the values that are out of SQL date ranges to `NULL` in Delta Lake:
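As a hedged sketch of such an update, a Spark SQL statement along these lines could null out the out-of-range values in a Delta Lake table. The table path, column name, and cutoff date are placeholders, not values from this article:

```sql
-- Placeholder Delta table path and column name; run in Spark, not in serverless SQL pool.
UPDATE delta.`abfss://mycontainer@myaccount.dfs.core.windows.net/my-delta-table`
SET MyDateTimeColumn = NULL
WHERE MyDateTimeColumn < '0001-02-02';
```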
If you are getting the error '*CREATE DATABASE failed. User database limit has b
### Please create a master key in the database or open the master key in the session before performing this operation.
-If your query fails with the error message *Please create a master key in the database or open the master key in the session before performing this operation*, it means that your user database has no access to a master key at the moment.
+If your query fails with the error message '*Please create a master key in the database or open the master key in the session before performing this operation*', it means that your user database has no access to a master key at the moment.
Most likely, you just created a new user database and did not create a master key yet.
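In that case, a master key can be created with a statement like the following, assuming you substitute your own strong password:

```sql
-- Create a database master key so that credentials in this database can be protected
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password placeholder>';
```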
See the [Synapse Studio section](#synapse-studio).
### Cannot connect to Synapse pool from a tool

Some tools might not have an explicit option that enables you to connect to the Synapse serverless SQL pool.
-Use an option that you would use to connect to SQL Server or Azure SQL database. The connection dialog do not need to be branded as "Synapse" because the serverless SQL pool use the same protocol as SQL Server or Azure SQL database.
+Use an option that you would use to connect to SQL Server or Azure SQL Database. The connection dialog does not need to be branded as "Synapse" because the serverless SQL pool uses the same protocol as SQL Server or Azure SQL Database.
Even if a tool enables you to enter only a logical server name and predefines the `database.windows.net` domain, enter the Synapse workspace name followed by the `-ondemand` suffix and the `database.windows.net` domain.
If a user cannot access a lake house or Spark database, it might not have permis
Dataverse tables access storage using the caller's Azure AD identity. A SQL user with high permissions might try to select data from a table, but the table would not be able to access Dataverse data. This scenario is not supported.

### Azure AD service principal login failures when SPI is creating a role assignment
Dataverse tables are accessing storage using the callers Azure AD identity. SQL user with high permissions might try to select data from a table, but the table would not be able to access Dataverse data. This scenario is not supported. ### Azure AD service principal login failures when SPI is creating a role assignment
-If you want to create role assignment for Service Principal Identifier/Azure AD app using another SPI, or have already created one and it fails to login, you're probably receiving following error:
+If you want to create a role assignment for a Service Principal Identifier/Azure AD app using another SPI, or have already created one and it fails to log in, you're probably receiving the following error:
```
Login error: Login failed for user '<token-identified principal>'.
```
go
**Solution #3**
-You can also setup service principal Synapse Admin using PowerShell. You need to have [Az.Synapse module](/powershell/module/az.synapse) installed.
+You can also set up service principal Synapse Admin using PowerShell. You need to have [Az.Synapse module](/powershell/module/az.synapse) installed.
The solution is to use the New-AzSynapseRoleAssignment cmdlet with the `-ObjectId` parameter, and in that parameter field provide the Application ID (instead of the Object ID), using workspace admin Azure service principal credentials. PowerShell script:
```azurepowershell
$spAppId = "<app_id_which_is_already_an_admin_on_the_workspace>"
Connect to serverless SQL endpoint and verify that the external login with SID `
select name, convert(uniqueidentifier, sid) as sid, create_date from sys.server_principals where type in ('E', 'X')
```
-or just try to login on serverless SQL endpoint using the just set admin app.
+or just try to log in on serverless SQL endpoint using the just set admin app.
## Constraints
virtual-desktop Deploy Windows Server Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/deploy-windows-server-virtual-machine.md
For more information, refer to [Operating systems and licenses](prerequisites.md)
Use the following information to learn about how licensing works in Remote Desktop Services and to deploy and manage your licenses.
-[License your RDS deployment with client access licenses](https://docs.microsoft.com/windows-server/remote/remote-desktop-services/rds-client-access-license)
+[License your RDS deployment with client access licenses](/windows-server/remote/remote-desktop-services/rds-client-access-license)
If you're already using Windows Server-based Remote Desktop Services, you likely have a licensing server set up in your environment. You can continue using the same license server, provided that the Azure Virtual Desktop hosts have line of sight to it.
Now that you've deployed Windows Server based Host VMs, you can sign in to a sup
- [Connect with the Windows Desktop client](user-documentation/connect-windows-7-10.md) - [Connect with the web client](user-documentation/connect-web.md)
-
virtual-desktop Safe Url List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/safe-url-list.md
To use the Required URL Check tool:
## Virtual machines
-The Azure virtual machines you create for Azure Virtual Desktop must have access to the following URLs in the Azure commercial cloud:
+You'll need to make sure that the Azure virtual machines you create for Azure Virtual Desktop have access to the URLs in one of the following sections based on which cloud you're using.
+
+### Azure public cloud
+The Azure virtual machines you create for Azure Virtual Desktop must have access to the following URLs in the Azure public cloud:
|Address|Outbound TCP port|Purpose|Service Tag|
|--|--|--|--|
|*.wvd.microsoft.com|443|Service traffic|WindowsVirtualDesktop|
-|gcs.prod.monitoring.core.windows.net|443|Agent traffic|AzureCloud|
-|production.diagnostics.monitoring.core.windows.net|443|Agent traffic|AzureCloud|
-|*xt.blob.core.windows.net|443|Agent traffic|AzureCloud|
-|*eh.servicebus.windows.net|443|Agent traffic|AzureCloud|
-|*xt.table.core.windows.net|443|Agent traffic|AzureCloud|
-|*xt.queue.core.windows.net|443|Agent traffic|AzureCloud|
|*.prod.warm.ingest.monitor.core.windows.net|443|Agent traffic|AzureMonitor|
|catalogartifact.azureedge.net|443|Azure Marketplace|AzureFrontDoor.Frontend|
|kms.core.windows.net|1688|Windows activation|Internet|
The Azure virtual machines you create for Azure Virtual Desktop must have access
|wvdportalstorageblob.blob.core.windows.net|443|Azure portal support|Az