Updates from: 06/28/2022 01:07:08
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Enable Authentication Web Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-web-api.md
npm install passport-azure-ad
npm install morgan
```
-The [morgen package](https://www.npmjs.com/package/morgan) is an HTTP request logger middleware for Node.js.
+The [morgan package](https://www.npmjs.com/package/morgan) is an HTTP request logger middleware for Node.js.
active-directory-b2c Microsoft Graph Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/microsoft-graph-operations.md
The initialized *GraphServiceClient* is then used in _UserService.cs_ to perform
[Make API calls using the Microsoft Graph SDKs](/graph/sdks/create-requests) includes information on how to read and write information from Microsoft Graph, use `$select` to control the properties returned, provide custom query parameters, and use the `$filter` and `$orderBy` query parameters.
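As an illustration of what those query parameters look like on the wire (the SDK's request builders generate them for you; the property names and filter below are illustrative, not a definitive Graph request):

```javascript
// Compose OData query parameters by hand to show what $select,
// $filter, and $orderBy look like in the request URL.
const params = new URLSearchParams({
  $select: 'displayName,identities',
  $filter: "startswith(displayName, 'A')",
  $orderBy: 'displayName',
});

const url = `https://graph.microsoft.com/v1.0/users?${params}`;
console.log(url);
```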
+## Next steps
+
+For code samples in JavaScript and Node.js, please see: [Manage B2C user accounts with MSAL.js and Microsoft Graph SDK](https://github.com/Azure-Samples/ms-identity-b2c-javascript-nodejs-management)
+ <!-- LINK --> [graph-objectIdentity]: /graph/api/resources/objectidentity
active-directory Application Proxy Configure Single Sign On With Headers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-single-sign-on-with-headers.md
When you've completed all these steps, your app should be running and available.
## Considerations

- Application Proxy is used to provide remote access to apps on-premises or on private cloud. Application Proxy is not recommended to handle traffic originating internally from the corporate network.
-- Access to header-based authentication applications should be restricted to only traffic from the connector or other permitted header-based authentication solution. This is commonly done through restricting network access to the application using a firewall or IP restriction on the application server.
+- **Access to header-based authentication applications should be restricted to only traffic from the connector or other permitted header-based authentication solution**. This is commonly done through restricting network access to the application using a firewall or IP restriction on the application server to avoid exposing the application to attackers.
## Next steps
active-directory Howto Mfa Nps Extension Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension-errors.md
If you encounter errors with the NPS extension for Azure AD Multi-Factor Authent
| **CONTACT_SUPPORT** | [Contact support](#contact-microsoft-support), and mention the list of steps for collecting logs. Provide as much information as you can about what happened before the error, including tenant ID, and user principal name (UPN). |
| **CLIENT_CERT_INSTALL_ERROR** | There may be an issue with how the client certificate was installed or associated with your tenant. Follow the instructions in [Troubleshooting the MFA NPS extension](howto-mfa-nps-extension.md#troubleshooting) to investigate client cert problems. |
| **ESTS_TOKEN_ERROR** | Follow the instructions in [Troubleshooting the MFA NPS extension](howto-mfa-nps-extension.md#troubleshooting) to investigate client cert and security token problems. |
-| **HTTPS_COMMUNICATION_ERROR** | The NPS server is unable to receive responses from Azure AD MFA. Verify that your firewalls are open bidirectionally for traffic to and from `https://adnotifications.windowsazure.com` and that TLS 1.2 is enabled (default). If TLS 1.2 is disabled, user authentication will fail and event ID 36871 with source SChannel is entered in the System log in Event Viewer. To verify TLS 1.2 is enabled, see [TLS registry settings](/windows-server/security/tls/tls-registry-settings.md#tls-dtls-and-ssl-protocol-version-settings). |
+| **HTTPS_COMMUNICATION_ERROR** | The NPS server is unable to receive responses from Azure AD MFA. Verify that your firewalls are open bidirectionally for traffic to and from `https://adnotifications.windowsazure.com` and that TLS 1.2 is enabled (default). If TLS 1.2 is disabled, user authentication will fail and event ID 36871 with source SChannel is entered in the System log in Event Viewer. To verify TLS 1.2 is enabled, see [TLS registry settings](/windows-server/security/tls/tls-registry-settings#tls-dtls-and-ssl-protocol-version-settings). |
| **HTTP_CONNECT_ERROR** | On the server that runs the NPS extension, verify that you can reach `https://adnotifications.windowsazure.com` and `https://login.microsoftonline.com/`. If those sites don't load, troubleshoot connectivity on that server. |
| **NPS Extension for Azure AD MFA:** <br> NPS Extension for Azure AD MFA only performs Secondary Auth for Radius requests in AccessAccept State. Request received for User username with response state AccessReject, ignoring request. | This error usually reflects an authentication failure in AD or that the NPS server is unable to receive responses from Azure AD. Verify that your firewalls are open bidirectionally for traffic to and from `https://adnotifications.windowsazure.com` and `https://login.microsoftonline.com` using ports 80 and 443. It is also important to check that on the DIAL-IN tab of Network Access Permissions, the setting is set to "control access through NPS Network Policy". This error can also trigger if the user is not assigned a license. |
| **REGISTRY_CONFIG_ERROR** | A key is missing in the registry for the application, which may be because the [PowerShell script](howto-mfa-nps-extension.md#install-the-nps-extension) wasn't run after installation. The error message should include the missing key. Make sure you have the key under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureMfa. |
active-directory Concept Conditional Access Grant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-grant.md
Previously updated : 01/27/2022 Last updated : 06/27/2022
The following client apps have been confirmed to support this setting:
- Microsoft Invoicing
- Microsoft Kaizala
- Microsoft Launcher
-- Microsoft Lists (iOS)
+- Microsoft Lists
- Microsoft Office
- Microsoft OneDrive
- Microsoft OneNote
active-directory Active Directory Groups Membership Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-groups-membership-azure-portal.md
Previously updated : 6/22/2022 Last updated : 10/19/2018
# Add or remove a group from another group using Azure Active Directory
-This article helps you to add and remove a group from another group using Azure Active Directory. When a group is added to another group, it creates a nested group.
+This article helps you to add and remove a group from another group using Azure Active Directory.
>[!Note]
>If you're trying to delete the parent group, see [How to update or delete a group and its members](active-directory-groups-delete-group.md).

## Add a group to another group
-You can add an existing Security group to another existing Security group (also known as nested groups), which creates a member group (subgroup) and a parent group. The member group inherits the attributes and properties of the parent group, saving you configuration time.
+You can add an existing Security group to another existing Security group (also known as nested groups), creating a member group (subgroup) and a parent group. The member group inherits the attributes and properties of the parent group, saving you configuration time.
>[!Important]
->We don't currently support:<br>
->- Adding groups to a group synced with on-premises Active Directory.<br>
->- Adding Security groups to Microsoft 365 groups.<br>
->- Adding Microsoft 365 groups to Security groups or other Microsoft 365 groups.<br>
->- Assigning apps to nested groups.<br>
->- Applying licenses to nested groups.<br>
->- Adding distribution groups in nesting scenarios.<br>
->- Adding security groups as members of mail-enabled security groups.
-
+>We don't currently support:<ul><li>Adding groups to a group synced with on-premises Active Directory.</li><li>Adding Security groups to Microsoft 365 groups.</li><li>Adding Microsoft 365 groups to Security groups or other Microsoft 365 groups.</li><li>Assigning apps to nested groups.</li><li>Applying licenses to nested groups.</li><li>Adding distribution groups in nesting scenarios.</li><li>Adding security groups as members of mail-enabled security groups</li><li> Adding groups as members of a role-assignable group.</li></ul>
### To add a group as a member of another group
active-directory Create Access Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-access-review.md
na Previously updated : 06/22/2022 Last updated : 03/22/2022
This article describes how to create one or more access reviews for group member
For more information, see [License requirements](access-reviews-overview.md#license-requirements).
-If you're reviewing access to an application, then before creating the review, see the article on how to [prepare for an access review of users' access to an application](access-reviews-application-preparation.md) to ensure the application is integrated with Azure AD.
+If you are reviewing access to an application, then before creating the review, see the article on how to [prepare for an access review of users' access to an application](access-reviews-application-preparation.md) to ensure the application is integrated with Azure AD.
## Create a single-stage access review
If you're reviewing access to an application, then before creating the review, s
> [!NOTE]
> If you selected **All Microsoft 365 groups with guest users**, your only option is to review **Guest users only**.
+1. Or if you are conducting group membership review, you can create access reviews for only the inactive users in the group (preview). In the *Users scope* section, check the box next to **Inactive users (on tenant level)**. If you check the box, the scope of the review will focus on inactive users only, those who have not signed in either interactively or non-interactively to the tenant. Then, specify **Days inactive** with a number of days inactive up to 730 days (two years). Users in the group inactive for the specified number of days will be the only users in the review.
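The inactivity rule above can be sketched with a hypothetical helper (not part of the product): a user is in scope when their last sign-in is at least the specified number of days ago, capped at 730 days.

```javascript
// Hypothetical helper illustrating the "days inactive" scope rule.
const MAX_DAYS_INACTIVE = 730;

function isInScope(lastSignIn, daysInactive, now = new Date()) {
  const threshold = Math.min(daysInactive, MAX_DAYS_INACTIVE);
  const elapsedDays = (now - lastSignIn) / (24 * 60 * 60 * 1000);
  return elapsedDays >= threshold;
}

console.log(isInScope(new Date('2020-01-01'), 365, new Date('2022-01-01'))); // true
```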
-1. After you select the scope of the review, you can determine how nested group membership is reviewed (Preview). On the **Nested groups** setting, select:
- - **Review all users assignments, including assignment from nested group membership** if you want to include indirect members in your review. Deny decisions won't be applied to indirect users.
- - Or, **Review only direct assignments, including direct users and unexpanded nested groups** if you want to only review direct members and groups. Indirect members and groups won't be included in the review and decisions are applied to direct users and groups only. For more information about access reviews of nested group memberships see [Review access of a nested group (preview)](manage-access-review.md#review-access-of-nested-group-membership-preview).
-1. If you scoped the review to **All users and groups** and chose **Review only direct assignments, including direct users and unexpanded nested groups**, when you select a reviewer, your selection options are limited:
- - If you select **Managers of users** as the reviewer, a fallback reviewer must be selected to review the groups with access to the nested group.
- - If you select **Users review their own access** as the reviewer, the nested groups won't be included in the review. To have the groups reviewed, you must select a different reviewer and not a self-review.
-1. Or if you are conducting group membership review, you can create access reviews for only the inactive users in the group (preview). In the *Users scope* section, check the box next to **Inactive users (on tenant level)**. If you check the box, the scope of the review will focus on inactive users only, those who haven't signed in either interactively or non-interactively to the tenant. Then, specify **Days inactive** with a number of days inactive up to 730 days (two years). Users in the group inactive for the specified number of days will be the only users in the review.
1. Select **Next: Reviews**.

### Next: Reviews
-1. You can create a single-stage or multi-stage review (preview). For a single stage review, continue here. To create a multi-stage access review (preview), follow the steps in [Create a multi-stage access review (preview)](#create-a-multi-stage-access-review-preview).
+1. You can create a single-stage or multi-stage review (preview). For a single stage review continue here. To create a multi-stage access review (preview), follow the steps in [Create a multi-stage access review (preview)](#create-a-multi-stage-access-review-preview)
1. In the **Specify reviewers** section, in the **Select reviewers** box, select either one or more people to make decisions in the access reviews. You can choose from:
A multi-stage review allows the administrator to define two or three sets of rev
> [!WARNING]
> Data of users included in multi-stage access reviews are a part of the audit record at the start of the review. Administrators may delete the data at any time by deleting the multi-stage access review series. For general information about GDPR and protecting user data, see the [GDPR section of the Microsoft Trust Center](https://www.microsoft.com/trust-center/privacy/gdpr-overview) and the [GDPR section of the Service Trust portal](https://servicetrust.microsoft.com/ViewPage/GDPRGetStarted).
-1. After you've selected the resource and scope of your review, move on to the **Reviews** tab.
+1. After you have selected the resource and scope of your review, move on to the **Reviews** tab.
-1. Select the checkbox next to **(Preview) Multi-stage review**.
+1. Click the checkbox next to **(Preview) Multi-stage review**.
1. Under **First stage review**, select the reviewers from the dropdown menu next to **Select reviewers**.
A multi-stage review allows the administrator to define two or three sets of rev
1. Add the duration for the second stage.
-1. By default, you'll see two stages when you create a multi-stage review. However, you can add up to three stages. If you want to add a third stage, select **+ Add a stage** and complete the required fields.
+1. By default, you will see two stages when you create a multi-stage review. However, you can add up to three stages. If you want to add a third stage, click **+ Add a stage** and complete the required fields.
-1. You can decide to allow 2nd and 3rd stage reviewers to see the decisions made in the previous stage(s). If you want to allow them to see the decisions made prior, select the box next to **Show previous stage(s) decisions to later stage reviewers** under **Reveal review results**. Leave the box unchecked to disable this setting if you'd like your reviewers to review independently.
+1. You can decide to allow 2nd and 3rd stage reviewers to see the decisions made in the previous stage(s). If you want to allow them to see the decisions made prior, click the box next to **Show previous stage(s) decisions to later stage reviewers** under **Reveal review results**. Leave the box unchecked to disable this setting if you'd like your reviewers to review independently.
![Screenshot that shows duration and show previous stages setting enabled for multi-stage review.](./media/create-access-review/reveal-multi-stage-results-and-duration.png)

1. The duration of each recurrence will be set to the sum of the duration day(s) you specified in each stage.
-1. Specify the **Review recurrence**, the **Start date**, and **End date** for the review. The recurrence type must be at least as long as the total duration of the recurrence (for example, the max duration for a weekly review recurrence is seven days).
+1. Specify the **Review recurrence**, the **Start date**, and **End date** for the review. The recurrence type must be at least as long as the total duration of the recurrence (i.e., the max duration for a weekly review recurrence is 7 days).
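The recurrence constraint described above can be sketched with a hypothetical helper (the interval lengths below are illustrative assumptions):

```javascript
// The summed stage durations may not exceed the recurrence interval,
// e.g. a weekly recurrence allows at most 7 days in total.
const recurrenceMaxDays = { weekly: 7, monthly: 30, quarterly: 91 };

function durationsFitRecurrence(recurrence, stageDurations) {
  const totalDays = stageDurations.reduce((sum, d) => sum + d, 0);
  return totalDays <= recurrenceMaxDays[recurrence];
}

console.log(durationsFitRecurrence('weekly', [3, 3])); // true: 6 <= 7
console.log(durationsFitRecurrence('weekly', [4, 4])); // false: 8 > 7
```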
1. To specify which reviewees will continue from stage to stage, select one or multiple of the following options next to **Specify reviewees to go to next stage**:

![Screenshot that shows specify reviewees setting and options for multi-stage review.](./media/create-access-review/next-stage-reviewees-setting.png)
Use the following instructions to create an access review on a team with shared
1. Select **+ New access review**.
-1. Select **Teams + Groups** and then click **Select teams + groups** to set the **Review scope**. B2B direct connect users and teams aren't included in reviews of **All Microsoft 365 groups with guest users**.
+1. Select **Teams + Groups** and then click **Select teams + groups** to set the **Review scope**. B2B direct connect users and teams are not included in reviews of **All Microsoft 365 groups with guest users**.
1. Select a Team that has shared channels shared with 1 or more B2B direct connect users or Teams.
Use the following instructions to create an access review on a team with shared
> - If you set **Select reviewers** to **Users review their own access** or **Managers of users**, B2B direct connect users and Teams won't be able to review their own access in your tenant. The owner of the Team under review will get an email that asks the owner to review the B2B direct connect user and Teams.
> - If you select **Managers of users**, a selected fallback reviewer will review any user without a manager in the home tenant. This includes B2B direct connect users and Teams without a manager.
-1. Go on to the **Settings** tab and configure extra settings. Then go to the **Review and Create** tab to start your access review. For more detailed information about creating a review and configuration settings, see our [Create a single-stage access review](#create-a-single-stage-access-review).
+1. Go on to the **Settings** tab and configure additional settings. Then go to the **Review and Create** tab to start your access review. For more detailed information about creating a review and configuration settings, see our [Create a single-stage access review](#create-a-single-stage-access-review).
## Allow group owners to create and manage access reviews of their groups (preview)
active-directory Manage Access Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/manage-access-review.md
na Previously updated : 04/29/2022 Last updated : 08/20/2021
When reviewing guest user access to Microsoft 365 groups, you can either create
You can then decide whether to ask each guest to review their own access or to ask one or more users to review every guest's access. These scenarios are covered in the following sections.-
-### Review access of nested group membership (Preview)
-For some scenarios, access to resources such as security groups, enterprise applications, and privileged roles can be granted through a security group assigned access to the resource. To learn more, go to [Add or remove a group from another group](../fundamentals/active-directory-groups-membership-azure-portal.md).
-
-Administrators can perform an access review of members of nested groups. When the administrator creates the review, they can choose whether their reviewers can make decisions on indirect members or only on direct members. An example of an indirect user is a user that has access to a security group that has access to another security group, application or role.
-
-![Diagram showing example of nested group membership.](media/manage-access-review/nested-group-membership-access-review.png)
-
-If the administrator decides to only allow reviews on direct members, reviewers can approve and deny access for nested groups or role-assignable groups as an entity. If denied, the nested group or role-assignable group will lose access to the resource.
-
-1. To create an access review of a nested group, go to [Create an access review of groups or applications](create-access-review.md#scope) and follow the guidance on nested groups.
-
-2. To review access of a nested group, go to [Review access for nested group memberships (preview)](perform-access-review.md#review-access-for-nested-group-memberships-preview).
### Ask guests to review their own membership in a group
active-directory Perform Access Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/perform-access-review.md
na Previously updated : 6/22/2022 Last updated : 2/18/2022
# Review access to groups and applications in Azure AD access reviews
-Azure Active Directory (Azure AD) simplifies how enterprises manage access to groups and applications in Azure AD, and other Microsoft Online Services with a feature called Azure AD access reviews. This article will go over how a designated reviewer performs an access review for members of a group or users with access to an application. If you would like to review access to an access package, read [Review access of an access package in Azure AD entitlement management](entitlement-management-access-reviews-review-access.md)
+Azure Active Directory (Azure AD) simplifies how enterprises manage access to groups and applications in Azure AD and other Microsoft Online Services with a feature called Azure AD access reviews. This article will go over how a designated reviewer performs an access review for members of a group or users with access to an application. If you would like to review access to an access package read [Review access of an access package in Azure AD entitlement management](entitlement-management-access-reviews-review-access.md)
## Perform access review using My Access

You can review access to groups and applications via My Access, an end-user friendly portal for granting, approving, and reviewing access needs.
You can review access to groups and applications via My Access, an end-user frie
![Example email from Microsoft to review access to a group](./media/perform-access-review/access-review-email-preview.png)
-1. Select the **Start review** link to open the access review.
+1. Click the **Start review** link to open the access review.
### Navigate directly to My Access
You can also view your pending access reviews by using your browser to open My A
## Review access for one or more users
-After you open My Access under Groups and Apps, you can see:
+After you open My Access under Groups and Apps you can see:
- **Name** The name of the access review.
-- **Due** The due date for the review. After this date, denied users could be removed from the group or app being reviewed.
+- **Due** The due date for the review. After this date denied users could be removed from the group or app being reviewed.
- **Resource** The name of the resource under review.
- **Progress** The number of users reviewed over the total number of users part of this access review.
-Select on the name of an access review to get started.
+Click on the name of an access review to get started.
![Pending access reviews list for apps and groups](./media/perform-access-review/access-reviews-list-preview.png)
-Once that it opens, you'll see the list of users in scope for the access review.
+Once that it opens, you will see the list of users in scope for the access review.
> [!NOTE]
> If the request is to review your own access, the page will look different. For more information, see [Review access for yourself to groups or applications](review-your-access.md).
There are two ways that you can approve or deny access:
1. Select one or more users by clicking the circle next to their names.
1. Select **Approve** or **Deny** on the bar above.
- - If you're unsure if a user should continue to have access or not, you can select **Don't know**. The user gets to keep their access and your choice is recorded in the audit logs. It's important that you keep in mind that any information you provide will be available to other reviewers. They can read your comments and take them into account when they review the request.
+ - If you are unsure if a user should continue to have access or not, you can click **Don't know**. The user gets to keep their access and your choice is recorded in the audit logs. It is important that you keep in mind that any information you provide will be available to other reviewers. They can read your comments and take them into account when they review the request.
![Open access review listing the users who need review](./media/perform-access-review/user-list-preview.png)
-1. The administrator of the access review may require that you supply a reason in the **Reason** box for your decision. Even when a reason isn't required, you can still provide a reason for your decision and the information that you include will be available to other approvers for review.
+1. The administrator of the access review may require that you supply a reason in the **Reason** box for your decision. Even when a reason is not required, you can still provide a reason for your decision and the information that you include will be available to other approvers for review.
-1. Select **Submit**.
+1. Click **Submit**.
- You can change your response at any time until the access review has ended. If you want to change your response, select the row and update the response. For example, you can approve a previously denied user or deny a previously approved user.

> [!IMPORTANT]
There are two ways that you can approve or deny access:
To make access reviews easier and faster for you, we also provide recommendations that you can accept with a single click. The recommendations are generated based on the user's sign-in activity.
-1. Select one or more users and then select **Accept recommendations**.
+1. Select one or more users and then click **Accept recommendations**.
![Open access review listing showing the Accept recommendations button](./media/perform-access-review/accept-recommendations-preview.png)

1. Or to accept recommendations for all unreviewed users, make sure that no users are selected and click on the **Accept recommendations** button on the top bar.
-1. Select **Submit** to accept the recommendations.
+1. Click **Submit** to accept the recommendations.
> [!NOTE]
To make access reviews easier and faster for you, we also provide recommendation
If multi-stage access reviews have been enabled by the administrator, there will be 2 or 3 total stages of review. Each stage of review will have a specified reviewer.
-You'll review access either manually or accept the recommendations based on sign-in activity for the stage you're assigned as the reviewer.
+You will review access either manually or accept the recommendations based on sign-in activity for the stage you are assigned as the reviewer.
-If you're the 2nd stage or 3rd stage reviewer, you'll also see the decisions made by the reviewers in the prior stage(s) if the administrator enabled this setting when creating the access review. The decision made by a 2nd or 3rd stage reviewer will overwrite the previous stage. So, the decision the 2nd stage reviewer makes will overwrite the first stage, and the 3rd stage reviewer's decision will overwrite the second stage.
+If you are the 2nd stage or 3rd stage reviewer, you will also see the decisions made by the reviewers in the prior stage(s) if the administrator enabled this setting when creating the access review. The decision made by a 2nd or 3rd stage reviewer will overwrite the previous stage. So, the decision the 2nd stage reviewer makes will overwrite the first stage, and the 3rd stage reviewer's decision will overwrite the second stage.
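The overwrite behavior can be sketched with a hypothetical helper (decision values are illustrative):

```javascript
// Iterate stages in order and keep the most recent decision per user:
// a later stage's decision replaces the earlier one.
function effectiveDecisions(stages) {
  const result = new Map();
  for (const stage of stages) {
    for (const [user, decision] of Object.entries(stage)) {
      result.set(user, decision); // later stage overwrites earlier stage
    }
  }
  return result;
}

const final = effectiveDecisions([
  { alice: 'Approve', bob: 'Approve' }, // stage 1
  { alice: 'Deny' },                    // stage 2 overwrites stage 1 for alice
]);
console.log(final.get('alice')); // Deny
console.log(final.get('bob'));   // Approve
```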
![Select user to show the multi-stage access review results](./media/perform-access-review/multi-stage-access-review.png)
Approve or deny access as outlined in [Review access for one or more users](#rev
To review access of B2B direct connect users, use the following instructions:
-1. As the reviewer, you should receive an email that requests you to review access for the team or group. Select the link in the email, or navigate directly to https://myaccess.microsoft.com/.
+1. As the reviewer, you should receive an email that requests you to review access for the team or group. Click the link in the email, or navigate directly to https://myaccess.microsoft.com/.
1. Follow the instructions in [Review access for one or more users](#review-access-for-one-or-more-users) to make decisions to approve or deny the users access to the Teams. > [!NOTE] > Unlike internal users and B2B Collaboration users, B2B direct connect users and Teams **don't** have recommendations based on last sign-in activity to make decisions when you perform the review.
-If a Team you review has shared channels, all B2B direct connect users and teams that access those shared channels are part of the review. B2B collaboration users and internal users are included in this review. When a B2B direct connect user or team is denied access in an access review, the user will lose access to every shared channel in the Team. To learn more about B2B direct connect users, read [B2B direct connect](../external-identities/b2b-direct-connect-overview.md).
-
-## Review access for nested group memberships (preview)
-To review access of nested group members:
-
-1. Follow the link in the notification email or go directly to
-https://myaccess.microsoft.com/ to complete the review.
-
-1. If the review creator chooses to include groups in the review, you'll see them listed in the
-review as either a user or a group within the resource.
-
-Resources include:
-- security groups
-- applications
-- Azure roles
-- Azure AD roles
-
-> [!Note]
-> M365 groups and access packages don't support nested groups, so you can't review access for these resource types in a nested group scenario.
+If a Team you review has shared channels, all B2B direct connect users and teams that access those shared channels are part of the review. This includes B2B collaboration users and internal users. When a B2B direct connect user or team is denied access in an access review, the user will lose access to every shared channel in the Team. To learn more about B2B direct connect users, read [B2B direct connect](../external-identities/b2b-direct-connect-overview.md).
## If no action is taken on access review
-When the access review is set up, the administrator can use advanced settings to determine what will happen in the event a reviewer doesn't respond to an access review request.
+When the access review is set up, the administrator has the option to use advanced settings to determine what will happen in the event a reviewer doesn't respond to an access review request.
-The administrator can set up the review so that if reviewers don't respond at the end of the review period, all unreviewed users can have an automatic decision made on their access. This decision can include the loss of access to the group or application under review.
+The administrator can set up the review so that if reviewers do not respond at the end of the review period, all unreviewed users can have an automatic decision made on their access. This includes the loss of access to the group or application under review.
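The auto-decision behavior can be sketched with a hypothetical helper:

```javascript
// Any user without a recorded decision at the end of the review period
// receives the administrator's configured auto-decision.
function applyAutoDecision(users, decisions, autoDecision) {
  const out = { ...decisions };
  for (const user of users) {
    if (!(user in out)) out[user] = autoDecision;
  }
  return out;
}

const result = applyAutoDecision(['alice', 'bob'], { alice: 'Approve' }, 'Deny');
console.log(result); // { alice: 'Approve', bob: 'Deny' }
```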
## Next steps
active-directory How To Connect Password Hash Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-password-hash-synchronization.md
The following section describes, in-depth, how password hash synchronization wor
3. After the password hash synchronization agent has the encrypted envelope, it uses [MD5CryptoServiceProvider](/dotnet/api/system.security.cryptography.md5cryptoserviceprovider) and the salt to generate a key to decrypt the received data back to its original MD4 format. The password hash synchronization agent never has access to the clear text password. The password hash synchronization agent's use of MD5 is strictly for replication protocol compatibility with the DC, and it is only used on-premises between the DC and the password hash synchronization agent.
4. The password hash synchronization agent expands the 16-byte binary password hash to 64 bytes by first converting the hash to a 32-byte hexadecimal string, then converting this string back into binary with UTF-16 encoding.
5. The password hash synchronization agent adds a per user salt, consisting of a 10-byte length salt, to the 64-byte binary to further protect the original hash.
-6. The password hash synchronization agent then combines the MD4 hash plus the per user salt, and inputs it into the [PBKDF2](https://www.ietf.org/rfc/rfc2898.txt) function. 1000 iterations of the [HMAC-SHA256](/dotnet/api/system.security.cryptography.hmacsha256) keyed hashing algorithm are used.
+6. The password hash synchronization agent then combines the MD4 hash plus the per user salt, and inputs it into the [PBKDF2](https://www.ietf.org/rfc/rfc2898.txt) function. 1000 iterations of the [HMAC-SHA256](/dotnet/api/system.security.cryptography.hmacsha256) keyed hashing algorithm are used. For additional details, refer to the [Azure AD Whitepaper](https://aka.ms/aaddatawhitepaper).
7. The password hash synchronization agent takes the resulting 32-byte hash, concatenates both the per user salt and the number of SHA256 iterations to it (for use by Azure AD), then transmits the string from Azure AD Connect to Azure AD over TLS.</br> 8. When a user attempts to sign in to Azure AD and enters their password, the password is run through the same MD4+salt+PBKDF2+HMAC-SHA256 process. If the resulting hash matches the hash stored in Azure AD, the user has entered the correct password and is authenticated.
active-directory Qs Configure Powershell Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-powershell-windows-vm.md
Title: Configure managed identities on an Azure VM using PowerShell - Azure AD
description: Step-by-step instructions for configuring managed identities for Azure resources on an Azure VM using PowerShell. -+ na Previously updated : 01/11/2022 Last updated : 06/24/2022
In this article, using PowerShell, you learn how to perform the following manage
## System-assigned managed identity
-In this section, you will learn how to enable and disable the system-assigned managed identity using Azure PowerShell.
+In this section, you'll learn how to enable and disable the system-assigned managed identity using Azure PowerShell.
### Enable system-assigned managed identity during creation of an Azure VM
To assign a user-assigned identity to a VM, your account needs the [Virtual Mach
To assign a user-assigned identity to a VM, your account needs the [Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor) and [Managed Identity Operator](../../role-based-access-control/built-in-roles.md#managed-identity-operator) role assignments. No other Azure AD directory role assignments are required.
-1. Create a user-assigned managed identity using the [New-AzUserAssignedIdentity](/powershell/module/az.managedserviceidentity/new-azuserassignedidentity) cmdlet. Note the `Id` in the output because you will need this in the next step.
+1. Create a user-assigned managed identity using the [New-AzUserAssignedIdentity](/powershell/module/az.managedserviceidentity/new-azuserassignedidentity) cmdlet. Note the `Id` in the output because you'll need this information in the next step.
> [!IMPORTANT] > Creating user-assigned managed identities only supports alphanumeric, underscore and hyphen (0-9 or a-z or A-Z, \_ or -) characters. Additionally, name should be limited from 3 to 128 character length for the assignment to VM/VMSS to work properly. For more information, see [FAQs and known issues](known-issues.md)
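The naming constraint quoted in the note above can be expressed as a simple regex check. The helper name is hypothetical; only the character set and the 3-128 length rule come from the note.

```python
import re

# Hypothetical helper mirroring the rule above: 3 to 128 characters,
# each one of 0-9, a-z, A-Z, underscore, or hyphen.
_IDENTITY_NAME = re.compile(r"[0-9A-Za-z_-]{3,128}")

def is_valid_identity_name(name: str) -> bool:
    return _IDENTITY_NAME.fullmatch(name) is not None

print(is_valid_identity_name("my-vm-identity_01"))  # True
print(is_valid_identity_name("ab"))                 # False (too short)
```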
If your VM has multiple user-assigned managed identities, you can remove all but
$vm = Get-AzVm -ResourceGroupName myResourceGroup -Name myVm Update-AzVm -ResourceGroupName myResourceGroup -VirtualMachine $vm -IdentityType UserAssigned -IdentityID <USER ASSIGNED IDENTITY NAME> ```
-If your VM does not have a system-assigned managed identity and you want to remove all user-assigned managed identities from it, use the following command:
+If your VM doesn't have a system-assigned managed identity and you want to remove all user-assigned managed identities from it, use the following command:
```azurepowershell-interactive $vm = Get-AzVm -ResourceGroupName myResourceGroup -Name myVm
active-directory Azure Ad Pim Approval Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/azure-ad-pim-approval-workflow.md
na Previously updated : 10/07/2021 Last updated : 06/24/2022
# Approve or deny requests for Azure AD roles in Privileged Identity Management
-With Azure Active Directory (Azure AD) Privileged Identity Management (PIM), you can configure roles to require approval for activation, and choose one or multiple users or groups as delegated approvers. Delegated approvers have 24 hours to approve requests. If a request is not approved within 24 hours, then the eligible user must re-submit a new request. The 24 hour approval time window is not configurable.
+With Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra, you can configure roles to require approval for activation, and choose one or multiple users or groups as delegated approvers. Delegated approvers have 24 hours to approve requests. If a request is not approved within 24 hours, then the eligible user must re-submit a new request. The 24 hour approval time window is not configurable.
## View pending requests
active-directory Azure Pim Resource Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/azure-pim-resource-rbac.md
Previously updated : 04/20/2021 Last updated : 06/24/2022 # View activity and audit history for Azure resource roles in Privileged Identity Management
-With Azure Active Directory (Azure AD) Privileged Identity Management (PIM), you can view activity, activations, and audit history for Azure resources roles within your organization. This includes subscriptions, resource groups, and even virtual machines. Any resource within the Azure portal that leverages the Azure role-based access control functionality can take advantage of the security and lifecycle management capabilities in Privileged Identity Management. If you want to retain audit data for longer than the default retention period, you can use Azure Monitor to route it to an Azure storage account. For more information, see [Archive Azure AD logs to an Azure storage account](../reports-monitoring/quickstart-azure-monitor-route-logs-to-storage-account.md).
+With Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra, you can view activity, activations, and audit history for Azure resources roles within your organization. This includes subscriptions, resource groups, and even virtual machines. Any resource within the Azure portal that leverages the Azure role-based access control functionality can take advantage of the security and lifecycle management capabilities in Privileged Identity Management. If you want to retain audit data for longer than the default retention period, you can use Azure Monitor to route it to an Azure storage account. For more information, see [Archive Azure AD logs to an Azure storage account](../reports-monitoring/quickstart-azure-monitor-route-logs-to-storage-account.md).
> [!NOTE] > If your organization has outsourced management functions to a service provider who uses [Azure Lighthouse](../../lighthouse/overview.md), role assignments authorized by that service provider won't be shown here.
active-directory Concept Privileged Access Versus Role Assignable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/concept-privileged-access-versus-role-assignable.md
na Previously updated : 05/18/2022 Last updated : 06/24/2022
Privileged Identity Management (PIM) supports the ability to enable privileged a
## What are Azure AD role-assignable groups?
-Azure Active Directory (Azure AD) lets you assign a cloud Azure AD security group to an Azure AD role. A Global Administrator or Privileged Role Administrator must create a new security group and make the group role-assignable at creation time. Only the Global Administrator, Privileged Role Administrator, or the group Owner role assignments can change the membership of the group. Also, no other users can reset the password of the users who are members of the group. This feature helps prevent an admin from elevating to a higher privileged role without going through a request and approval procedure.
+Azure Active Directory (Azure AD), part of Microsoft Entra, lets you assign a cloud Azure AD security group to an Azure AD role. A Global Administrator or Privileged Role Administrator must create a new security group and make the group role-assignable at creation time. Only the Global Administrator, Privileged Role Administrator, or the group Owner role assignments can change the membership of the group. Also, no other users can reset the password of the users who are members of the group. This feature helps prevent an admin from elevating to a higher privileged role without going through a request and approval procedure.
## What are Privileged Access groups?
active-directory Groups Activate Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-activate-roles.md
na Previously updated : 02/02/2022 Last updated : 02/24/2022
# Activate my privileged access group roles in Privileged Identity Management
-Use Privileged Identity Management (PIM) to allow eligible role members for privileged access groups to schedule role activation for a specified date and time. They can also select a activation duration up to the maximum duration configured by administrators.
+Use Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra, to allow eligible role members for privileged access groups to schedule role activation for a specified date and time. They can also select an activation duration up to the maximum duration configured by administrators.
This article is for eligible members who want to activate their privileged access group role in Privileged Identity Management.
active-directory Groups Approval Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-approval-workflow.md
na Previously updated : 10/07/2021 Last updated : 06/24/2022
# Approve activation requests for privileged access group members and owners (preview)
-With Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), you can configure privileged access group members and owners to require approval for activation, and choose users or groups from your Azure AD organization as delegated approvers. We recommend selecting two or more approvers for each group to reduce workload for the privileged role administrator. Delegated approvers have 24 hours to approve requests. If a request is not approved within 24 hours, then the eligible user must re-submit a new request. The 24 hour approval time window is not configurable.
+With Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra, you can configure privileged access group members and owners to require approval for activation, and choose users or groups from your Azure AD organization as delegated approvers. We recommend selecting two or more approvers for each group to reduce workload for the privileged role administrator. Delegated approvers have 24 hours to approve requests. If a request is not approved within 24 hours, then the eligible user must re-submit a new request. The 24 hour approval time window is not configurable.
Follow the steps in this article to approve or deny requests for Azure resource roles.
active-directory Groups Assign Member Owner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-assign-member-owner.md
na Previously updated : 02/02/2022 Last updated : 06/24/2022
# Assign eligibility for a privileged access group (preview) in Privileged Identity Management
-Azure Active Directory (Azure AD) Privileged Identity Management (PIM) can help you manage the eligibility and activation of assignments to privileged access groups in Azure AD. You can assign eligibility to members or owners of the group.
+Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra, can help you manage the eligibility and activation of assignments to privileged access groups in Azure AD. You can assign eligibility to members or owners of the group.
When a role is assigned, the assignment: - Can't be assigned for a duration of less than five minutes
active-directory Groups Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-audit.md
Previously updated : 10/07/2021 Last updated : 06/24/2022 # Audit activity history for privileged access group assignments (preview) in Privileged Identity Management
-With Privileged Identity Management (PIM), you can view activity, activations, and audit history for Azure privileged access group members and owners within your Azure Active Directory (Azure AD) organization.
+With Privileged Identity Management (PIM), you can view activity, activations, and audit history for Azure privileged access group members and owners within your organization in Azure Active Directory (Azure AD), part of Microsoft Entra.
> [!NOTE] > If your organization has outsourced management functions to a service provider who uses [Azure Lighthouse](../../lighthouse/overview.md), role assignments authorized by that service provider won't be shown here.
active-directory Groups Discover Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-discover-groups.md
na Previously updated : 12/02/2020 Last updated : 06/24/2022
# Bring privileged access groups (preview) into Privileged Identity Management
-In Azure Active Directory (Azure AD), you can assign Azure AD built-in roles to cloud groups to simplify how you manage role assignments. To protect Azure AD roles and to secure access, you can now use Privileged Identity Management (PIM) to manage just-in-time access for members or owners of these groups. To manage an Azure AD role-assignable group as a privileged access group in Privileged Identity Management, you must bring it under management in PIM.
+In Azure Active Directory (Azure AD), part of Microsoft Entra, you can assign Azure AD built-in roles to cloud groups to simplify how you manage role assignments. To protect Azure AD roles and to secure access, you can now use Privileged Identity Management (PIM) to manage just-in-time access for members or owners of these groups. To manage an Azure AD role-assignable group as a privileged access group in Privileged Identity Management, you must bring it under management in PIM.
## Identify groups to manage
active-directory Groups Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-features.md
na Previously updated : 04/18/2022 Last updated : 06/24/2022
# Management capabilities for Privileged Access groups (preview)
-In Privileged Identity Management (PIM), you can now assign eligibility for membership or ownership of privileged access groups. Starting with this preview, you can assign Azure Active Directory (Azure AD) built-in roles to cloud groups and use PIM to manage group member and owner eligibility and activation. For more information about role-assignable groups in Azure AD, see [Use Azure AD groups to manage role assignments](../roles/groups-concept.md).
+In Privileged Identity Management (PIM), you can now assign eligibility for membership or ownership of privileged access groups. Starting with this preview, you can assign built-in roles in Azure Active Directory (Azure AD), part of Microsoft Entra, to cloud groups and use PIM to manage group member and owner eligibility and activation. For more information about role-assignable groups in Azure AD, see [Use Azure AD groups to manage role assignments](../roles/groups-concept.md).
-> [!Important]
+> [!IMPORTANT]
> To provide a group of users with just-in-time access to roles with permissions in SharePoint, Exchange, or Security & Compliance Center, be sure to make permanent assignments of users to the group, and then assign the group to a role as eligible for activation. If instead you assign a role permanently to a group and assign users to be eligible for group membership, it might take significant time to have all permissions of the role activated and ready to use. > [!NOTE]
active-directory Groups Renew Extend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-renew-extend.md
na Previously updated : 10/07/2021 Last updated : 06/24/2022
# Extend or renew privileged access group assignments (preview) in Privileged Identity Management
-Azure Active Directory (Azure AD) Privileged Identity Management (PIM) provides controls to manage the access and assignment lifecycle for privileged access groups. Administrators can assign roles using start and end date-time properties. When the assignment end approaches, Privileged Identity Management sends email notifications to the affected users or groups. It also sends email notifications to administrators of the resource to ensure that appropriate access is maintained. Assignments might be renewed and remain visible in an expired state for up to 30 days, even if access is not extended.
+Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra, provides controls to manage the access and assignment lifecycle for privileged access groups. Administrators can assign roles using start and end date-time properties. When the assignment end approaches, Privileged Identity Management sends email notifications to the affected users or groups. It also sends email notifications to administrators of the resource to ensure that appropriate access is maintained. Assignments might be renewed and remain visible in an expired state for up to 30 days, even if access is not extended.
## Who can extend and renew
active-directory Groups Role Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-role-settings.md
na Previously updated : 11/12/2021 Last updated : 06/24/2022
# Configure privileged access group settings (preview) in Privileged Identity Management
-Role settings are the default settings that are applied to group owner and group member privileged access assignments in Privileged Identity Management (PIM). Use the following steps to set up the approval workflow to specify who can approve or deny requests to elevate privilege.
+Role settings are the default settings that are applied to group owner and group member privileged access assignments in Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra. Use the following steps to set up the approval workflow to specify who can approve or deny requests to elevate privilege.
## Open role settings
active-directory Pim Create Azure Ad Roles And Resource Roles Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-create-azure-ad-roles-and-resource-roles-review.md
Previously updated : 6/2/2022 Last updated : 10/07/2021
The need for access to privileged Azure resource and Azure AD roles by employees
## Prerequisites To create access reviews for Azure resources, you must be assigned to the [Owner](../../role-based-access-control/built-in-roles.md#owner) or the [User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator) role for the Azure resources. To create access reviews for Azure AD roles, you must be assigned to the [Global Administrator](../roles/permissions-reference.md#global-administrator) or the [Privileged Role Administrator](../roles/permissions-reference.md#privileged-role-administrator) role.
The need for access to privileged Azure resource and Azure AD roles by employees
3. For **Azure AD roles**, select **Azure AD roles** under **Privileged Identity Management**. For **Azure resources**, select **Azure resources** under **Privileged Identity Management**.
- :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/identity-governance.png" alt-text="Screenshot of select Identity Governance button in Azure portal." lightbox="./media/pim-create-azure-ad-roles-and-resource-roles-review/identity-governance.png":::
+ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/identity-governance.png" alt-text="Select Identity Governance in Azure Portal screenshot." lightbox="./media/pim-create-azure-ad-roles-and-resource-roles-review/identity-governance.png":::
4. For **Azure AD roles**, select **Azure AD roles** again under **Manage**. For **Azure resources**, select the subscription you want to manage. 5. Under Manage, select **Access reviews**, and then select **New** to create a new access review.
- :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/access-reviews.png" alt-text="Screenshot of access reviews list showing the status of all reviews.":::
+ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/access-reviews.png" alt-text="Azure AD roles - Access reviews list showing the status of all reviews screenshot.":::
6. Name the access review. Optionally, give the review a description. The name and description are shown to the reviewers.
- :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/name-description.png" alt-text="Screenshot of review name and description.":::
+ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/name-description.png" alt-text="Create an access review - Review name and description screenshot.":::
7. Set the **Start date**. By default, an access review occurs once, starts the same time it's created, and it ends in one month. You can change the start and end dates to have an access review start in the future and last however many days you want.
- :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/start-end-dates.png" alt-text="Screenshot of Start date, frequency, duration, end, number of times, and end date fields.":::
+ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/start-end-dates.png" alt-text="Start date, frequency, duration, end, number of times, and end date screenshot.":::
8. To make the access review recurring, change the **Frequency** setting from **One time** to **Weekly**, **Monthly**, **Quarterly**, **Annually**, or **Semi-annually**. Use the **Duration** slider or text box to define how many days each review of the recurring series will be open for input from reviewers. For example, the maximum duration that you can set for a monthly review is 27 days, to avoid overlapping reviews.
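The overlap rule described in the step above can be sketched as a small lookup: the duration must be at least one day shorter than the shortest possible gap between occurrences. The interval values are illustrative (28 days stands in for the shortest month); only the 27-day monthly maximum is stated by the docs.

```python
# Shortest gap between consecutive reviews, per frequency (illustrative values).
MIN_INTERVAL_DAYS = {
    "Weekly": 7,
    "Monthly": 28,
    "Quarterly": 90,
    "Semi-annually": 180,
    "Annually": 365,
}

def max_review_duration(frequency: str) -> int:
    """One day less than the shortest interval, so consecutive reviews never overlap."""
    return MIN_INTERVAL_DAYS[frequency] - 1

print(max_review_duration("Monthly"))  # 27, matching the maximum quoted above
```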
The need for access to privileged Azure resource and Azure AD roles by employees
10. In the **Users Scope** section, select the scope of the review. For **Azure AD roles**, the first scope option is Users and Groups. Directly assigned users and [role-assignable groups](../roles/groups-concept.md) will be included in this selection. For **Azure resource roles**, the first scope will be Users. Groups assigned to Azure resource roles are expanded to display transitive user assignments in the review with this selection. You may also select **Service Principals** to review the machine accounts with direct access to either the Azure resource or Azure AD role.
- :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/users.png" alt-text="Screenshot of Users scope to review role membership section.":::
+ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/users.png" alt-text="Users scope to review role membership screenshot.":::
11. Or, you can create access reviews only for inactive users (preview). In the *Users scope* section, set the **Inactive users (on tenant level) only** to **true**. If the toggle is set to *true*, the scope of the review will focus on inactive users only. Then, specify **Days inactive** with a number of days inactive up to 730 days (two years). Users inactive for the specified number of days will be the only users in the review.
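The inactivity filter described in the step above amounts to comparing each user's last sign-in against a cutoff. A minimal sketch, assuming a simple name-to-datetime mapping for the users (the function and data shape are hypothetical; the 730-day ceiling is from the docs):

```python
from datetime import datetime, timedelta, timezone

def select_inactive_users(last_sign_ins: dict, days_inactive: int, now=None) -> list:
    """Keep only users whose last sign-in is older than the inactivity threshold."""
    if not 1 <= days_inactive <= 730:   # the review supports up to 730 days (two years)
        raise ValueError("days_inactive must be between 1 and 730")
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=days_inactive)
    return [user for user, seen in last_sign_ins.items() if seen < cutoff]

now = datetime(2022, 6, 28, tzinfo=timezone.utc)
users = {"alice": now - timedelta(days=400), "bob": now - timedelta(days=5)}
print(select_inactive_users(users, 90, now=now))  # ['alice']
```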
The need for access to privileged Azure resource and Azure AD roles by employees
> [!NOTE] > Selecting more than one role will create multiple access reviews. For example, selecting five roles will create five separate access reviews.
- :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/review-role-membership.png" alt-text="Screenshot of review role memberships option.":::
+ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/review-role-membership.png" alt-text="Review role memberships screenshot.":::
13. In **assignment type**, scope the review by how the principal was assigned to the role. Choose **eligible assignments only** to review eligible assignments (regardless of activation status when the review is created) or **active assignments only** to review active assignments. Choose **all active and eligible assignments** to review all assignments regardless of type.
- :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/assignment-type-select.png" alt-text="Screenshot of reviewers list of assignment types.":::
+ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/assignment-type-select.png" alt-text="Reviewers list of assignment types screenshot.":::
14. In the **Reviewers** section, select one or more people to review all the users. Or you can select to have the members review their own access.
- :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/reviewers.png" alt-text="Screenshot of reviewers list of selected users or members (self) button.":::
+ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/reviewers.png" alt-text="Reviewers list of selected users or members (self)":::
- **Selected users** - Use this option to designate a specific user to complete the review. This option is available regardless of the scope of the review, and the selected reviewers can review users, groups and service principals.
- - **Members (self)** - Use this option to have the users review their own role assignments. This option is only available if the review is scoped to **Users and Groups** or **Users**. For **Azure AD roles**, role-assignable groups won't be a part of the review when this option is selected.
- - **Manager** ΓÇô Use this option to have the userΓÇÖs manager review their role assignment. This option is only available if the review is scoped to **Users and Groups** or **Users**. Upon selecting Manager, you also can specify a fallback reviewer. Fallback reviewers are asked to review a user when the user has no manager specified in the directory. For **Azure AD roles**, role-assignable groups will be reviewed by the fallback reviewer if one is selected.
+ - **Members (self)** - Use this option to have the users review their own role assignments. This option is only available if the review is scoped to **Users and Groups** or **Users**. For **Azure AD roles**, role-assignable groups will not be a part of the review when this option is selected.
+ - **Manager** ΓÇô Use this option to have the userΓÇÖs manager review their role assignment. This option is only available if the review is scoped to **Users and Groups** or **Users**. Upon selecting Manager, you will also have the option to specify a fallback reviewer. Fallback reviewers are asked to review a user when the user has no manager specified in the directory. For **Azure AD roles**, role-assignable groups will be reviewed by the fallback reviewer if one is selected.
### Upon completion settings 1. To specify what happens after a review completes, expand the **Upon completion settings** section.
- :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/upon-completion-settings.png" alt-text="Screenshot of Upon completion settings section to auto apply and should reviewer not respond.":::
+ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/upon-completion-settings.png" alt-text="Upon completion settings to auto apply and should review not respond screenshot.":::
2. If you want to automatically remove access for users that were denied, set **Auto apply results to resource** to **Enable**. If you want to manually apply the results when the review completes, set the switch to **Disable**.
-3. Use the **If reviewer don't respond** list to specify what happens for users that aren't reviewed by the reviewer within the review period. This setting doesn't impact users who were reviewed by the reviewers.
+3. Use the **If reviewers don't respond** list to specify what happens for users that are not reviewed by the reviewer within the review period. This setting does not impact users who were reviewed by the reviewers.
- **No change** - Leave user's access unchanged - **Remove access** - Remove user's access - **Approve access** - Approve user's access - **Take recommendations** - Take the system's recommendation on denying or approving the user's continued access
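The four non-response outcomes above reduce to a simple lookup, where only **Take recommendations** depends on another input (the system's approve/deny suggestion). A hypothetical sketch, not PIM's implementation:

```python
def decide_unreviewed(policy: str, recommendation: str = "Remove access") -> str:
    """Map the non-response setting to the decision applied to an unreviewed user.
    `recommendation` is the system's suggestion for that user (an assumption here)."""
    decisions = {
        "No change": "No change",
        "Remove access": "Remove access",
        "Approve access": "Approve access",
        "Take recommendations": recommendation,
    }
    return decisions[policy]

print(decide_unreviewed("Take recommendations", recommendation="Approve access"))  # Approve access
```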
-4. Use the **Action to apply on denied guest users** list to specify what happens for guest users that are denied. This setting isn't editable for Azure AD and Azure resource role reviews at this time; guest users, like all users, will always lose access to the resource if denied.
+4. Use the **Action to apply on denied guest users** list to specify what happens for guest users that are denied. This setting is not editable for Azure AD and Azure resource role reviews at this time; guest users, like all users, will always lose access to the resource if denied.
- :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/action-to-apply-on-denied-guest-users.png" alt-text="Screenshot of Action to apply on denied guest users selected.":::
+ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/action-to-apply-on-denied-guest-users.png" alt-text="Upon completion settings - Action to apply on denied guest users screenshot.":::
-5. You can send notifications to other users or groups to receive review completion updates. This feature allows for stakeholders other than the review creator to be updated on the progress of the review. To use this feature, select **Select User(s) or Group(s)** and add an additional user or group upon you want to receive the status of completion.
+5. You can send notifications to additional users or groups to receive review completion updates. This feature allows for stakeholders other than the review creator to be updated on the progress of the review. To use this feature, select **Select User(s) or Group(s)** and add an additional user or group that you want to receive the status of completion.
- :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/upon-completion-settings-additional-receivers.png" alt-text="Screenshot of Add additional users to receive notifications selected.":::
+ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/upon-completion-settings-additional-receivers.png" alt-text="Upon completion settings - Add additional users to receive notifications screenshot.":::
### Advanced settings
-1. To specify extra settings, expand the **Advanced settings** section.
+1. To specify additional settings, expand the **Advanced settings** section.
- :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/advanced-settings.png" alt-text="Screenshot of Advanced settings for show recommendations, require reason on approval, mail notifications, and reminders option.":::
+ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/advanced-settings.png" alt-text="Advanced settings for show recommendations, require reason on approval, mail notifications, and reminders screenshot.":::
1. Set **Show recommendations** to **Enable** to show the reviewers the system recommendations based on the user's access information. Recommendations are based on a 30-day interval period: users who have signed in within the past 30 days are recommended access, while users who have not are recommended denial of access. These sign-ins count regardless of whether they were interactive. The last sign-in of the user is also displayed along with the recommendation.
The need for access to privileged Azure resource and Azure AD roles by employees
1. Set **Mail notifications** to **Enable** to have Azure AD send email notifications to reviewers when an access review starts, and to administrators when a review completes.
-1. Set **Reminders** to **Enable** to have Azure AD send reminders of access reviews in progress to reviewers who haven't completed their review.
-1. The content of the email sent to reviewers is auto-generated based on the review details, such as review name, resource name, due date, etc. If you need a way to communicate additional information such as other instructions or contact information, you can specify these details in the **Additional content for reviewer email** which will be included in the invitation and reminder emails sent to assigned reviewers. The highlighted section below is where this information will be displayed.
+1. Set **Reminders** to **Enable** to have Azure AD send reminders of access reviews in progress to reviewers who have not completed their review.
+1. The content of the email sent to reviewers is auto-generated based on the review details, such as review name, resource name, due date, etc. If you need a way to communicate additional information such as additional instructions or contact information, you can specify these details in the **Additional content for reviewer email** which will be included in the invitation and reminder emails sent to assigned reviewers. The highlighted section below is where this information will be displayed.
- :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/email-info.png" alt-text="Screenshot of the content of the email sent to reviewers with highlights.":::
+ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/email-info.png" alt-text="Content of the email sent to reviewers with highlights":::
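The 30-day recommendation rule described above can be sketched as a simple decision function. This is a minimal illustration of the stated behavior, not the actual PIM service logic; the function and parameter names are hypothetical:

```python
# Sketch of the 30-day access-review recommendation rule (illustrative only).
from datetime import datetime, timedelta
from typing import Optional

def recommend_access(last_sign_in: Optional[datetime], now: datetime,
                     window_days: int = 30) -> str:
    """Recommend access for users seen within the review window, denial
    otherwise. Interactive and non-interactive sign-ins both count."""
    if last_sign_in is not None and (now - last_sign_in) <= timedelta(days=window_days):
        return "Approve"
    return "Deny"
```

A user with no recorded sign-in, or whose last sign-in is older than 30 days, would be recommended for denial.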
## Manage the access review You can track the progress as the reviewers complete their reviews on the **Overview** page of the access review. No access rights are changed in the directory until the review is completed. Below is a screenshot showing the overview page for **Azure resources** and **Azure AD roles** access reviews. If this is a one-time review, then after the access review period is over or the administrator stops the access review, follow the steps in [Complete an access review of Azure resource and Azure AD roles](pim-complete-azure-ad-roles-and-resource-roles-review.md) to see and apply the results.
-To manage a series of access reviews, navigate to the access review, and you'll find upcoming occurrences in Scheduled reviews, and edit the end date or add/remove reviewers accordingly.
+To manage a series of access reviews, navigate to the access review; you will find upcoming occurrences under Scheduled reviews, where you can edit the end date or add and remove reviewers.
Based on your selections in **Upon completion settings**, auto-apply will be executed after the review's end date or when you manually stop the review. The status of the review will change from **Completed** through intermediate states such as **Applying** and finally to state **Applied**. You should expect to see denied users, if any, being removed from roles in a few minutes. ## Impact of groups assigned to Azure AD roles and Azure resource roles in access reviews -- For **Azure AD roles**, role-assignable groups can be assigned to the role using [role-assignable groups](../roles/groups-concept.md). When a review is created on an Azure AD role with role-assignable groups assigned, by default, the group name shows up in the review without expanding the group membership. The reviewer can approve or deny access of the entire group to the role. Denied groups will lose their assignment to the role when review results are applied.
+- For **Azure AD roles**, role-assignable groups can be assigned to the role using [role-assignable groups](../roles/groups-concept.md). When a review is created on an Azure AD role with role-assignable groups assigned, the group name shows up in the review without expanding the group membership. The reviewer can approve or deny access of the entire group to the role. Denied groups will lose their assignment to the role when review results are applied.
-- For **Azure resource roles**, any security group can be assigned to the role. When a review is created on an Azure resource role with a security group assigned, by default, the users assigned to that security group will be fully expanded and shown to the reviewer of the role. When a reviewer denies a user that was assigned to the role via the security group, the user won't be removed from the group, and therefore the apply of the deny result will be unsuccessful.
+- For **Azure resource roles**, any security group can be assigned to the role. When a review is created on an Azure resource role with a security group assigned, the users assigned to that security group will be fully expanded and shown to the reviewer of the role. When a reviewer denies a user that was assigned to the role via the security group, the user will not be removed from the group, and therefore the apply of the deny result will be unsuccessful.
> [!NOTE]
-> It's possible for a security group to have other groups assigned to it. In this case, only the users assigned directly to the security group assigned to the role will appear in the review of the role.
-
-These default applications will change if the administrator specifies settings for access reviews of nested groups.
+> It is possible for a security group to have other groups assigned to it. In this case, only the users assigned directly to the security group assigned to the role will appear in the review of the role.
## Update the access review After one or more access reviews have been started, you may want to modify or update the settings of your existing access reviews. Here are some common scenarios that you might want to consider: -- **Adding and removing reviewers** - When updating access reviews, you may choose to add a fallback reviewer in addition to the primary reviewer. Primary reviewers may be removed when updating an access review. However, fallback reviewers aren't removable by design.
+- **Adding and removing reviewers** - When updating access reviews, you may choose to add a fallback reviewer in addition to the primary reviewer. Primary reviewers may be removed when updating an access review. However, fallback reviewers are not removable by design.
> [!Note] > Fallback reviewers can only be added when reviewer type is manager. Primary reviewers can be added when reviewer type is selected user. -- **Reminding the reviewers** - When updating access reviews, you may choose to enable the reminder option under Advanced Settings. Once enabled, users will receive an email notification at the midpoint of the review period, regardless of whether they've completed the review or not.
+- **Reminding the reviewers** - When updating access reviews, you may choose to enable the reminder option under Advanced Settings. Once enabled, users will receive an email notification at the midpoint of the review period, regardless of whether they have completed the review or not.
:::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/reminder-setting.png" alt-text="Screenshot of the reminder option under access reviews settings.":::
active-directory Pim How To Configure Security Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-configure-security-alerts.md
Previously updated : 04/18/2022 Last updated : 06/24/2022
# Configure security alerts for Azure AD roles in Privileged Identity Management
-Privileged Identity Management (PIM) generates alerts when there is suspicious or unsafe activity in your Azure Active Directory (Azure AD) organization. When an alert is triggered, it shows up on the Privileged Identity Management dashboard. Select the alert to see a report that lists the users or roles that triggered the alert.
+Privileged Identity Management (PIM) generates alerts when there is suspicious or unsafe activity in your organization in Azure Active Directory (Azure AD), part of Microsoft Entra. When an alert is triggered, it shows up on the Privileged Identity Management dashboard. Select the alert to see a report that lists the users or roles that triggered the alert.
![Screenshot that shows the "Alerts" page with a list of alerts and their severity.](./media/pim-how-to-configure-security-alerts/view-alerts.png)
active-directory Pim How To Renew Extend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-renew-extend.md
na Previously updated : 10/19/2021 Last updated : 06/24/2022 -- # Extend or renew Azure AD role assignments in Privileged Identity Management
-Azure Active Directory (Azure AD) Privileged Identity Management (PIM) provides controls to manage the access and assignment lifecycle for Azure AD roles. Administrators can assign roles using start and end date-time properties. When the assignment end approaches, Privileged Identity Management sends email notifications to the affected users or groups. It also sends email notifications to Azure AD administrators to ensure that appropriate access is maintained. Assignments might be renewed and remain visible in an expired state for up to 30 days, even if access is not extended.
+Privileged Identity Management (PIM) provides controls to manage the access and assignment lifecycle for roles in Azure Active Directory (Azure AD), part of Microsoft Entra. Administrators can assign roles using start and end date-time properties. When the assignment end approaches, Privileged Identity Management sends email notifications to the affected users or groups. It also sends email notifications to Azure AD administrators to ensure that appropriate access is maintained. Assignments might be renewed and remain visible in an expired state for up to 30 days, even if access is not extended.
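The assignment lifecycle described above (an assignment past its end time stays visible in an expired state for up to 30 days) can be sketched as a small state check. The state names here are illustrative, not values returned by any Azure AD API:

```python
# Illustrative sketch of the assignment-expiry visibility window (not PIM code).
from datetime import datetime, timedelta

def assignment_visibility(end: datetime, now: datetime,
                          grace_days: int = 30) -> str:
    """Classify an assignment relative to its end time; expired assignments
    remain visible for a grace window before disappearing from view."""
    if now <= end:
        return "active"
    if (now - end) <= timedelta(days=grace_days):
        return "expired-but-visible"
    return "no-longer-shown"
```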
## Who can extend and renew?
active-directory Pim How To Require Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-require-mfa.md
Previously updated : 10/07/2021 Last updated : 06/24/2022
We recommend that you require multifactor authentication (MFA or 2FA) for all your administrators. Multifactor authentication reduces the risk of an attack using a compromised password.
-You can require that users complete a multifactor authentication challenge when they sign in. You can also require that users complete a multifactor authentication challenge when they activate a role in Azure Active Directory (Azure AD) Privileged Identity Management (PIM). This way, even if the user didn't complete multifactor authentication when they signed in, they'll be asked to do it by Privileged Identity Management.
+You can require that users complete a multifactor authentication challenge when they sign in. You can also require that users complete a multifactor authentication challenge when they activate a role in Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra. This way, even if the user didn't complete multifactor authentication when they signed in, they'll be asked to do it by Privileged Identity Management.
> [!IMPORTANT] > Right now, Azure AD Multi-Factor Authentication only works with work or school accounts, not Microsoft personal accounts (usually a personal account that's used to sign in to Microsoft services such as Skype, Xbox, or Outlook.com). Because of this, anyone using a personal account can't be an eligible administrator because they can't use multifactor authentication to activate their roles. If these users need to continue managing workloads using a Microsoft account, elevate them to permanent administrators for now.
active-directory Pim How To Use Audit Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-use-audit-log.md
Previously updated : 10/07/2021 Last updated : 06/24/2022
# View audit history for Azure AD roles in Privileged Identity Management
-You can use the Privileged Identity Management (PIM) audit history to see all role assignments and activations within the past 30 days for all privileged roles. If you want to retain audit data for longer than the default retention period, you can use Azure Monitor to route it to an Azure storage account. For more information, see [Archive Azure AD logs to an Azure storage account](../reports-monitoring/quickstart-azure-monitor-route-logs-to-storage-account.md). If you want to see the full audit history of activity in your Azure Active Directory (Azure AD) organization, including administrator, end user, and synchronization activity, you can use the [Azure Active Directory security and activity reports](../reports-monitoring/overview-reports.md).
+You can use the Privileged Identity Management (PIM) audit history to see all role assignments and activations within the past 30 days for all privileged roles. If you want to retain audit data for longer than the default retention period, you can use Azure Monitor to route it to an Azure storage account. For more information, see [Archive Azure AD logs to an Azure storage account](../reports-monitoring/quickstart-azure-monitor-route-logs-to-storage-account.md). If you want to see the full audit history of activity in your organization in Azure Active Directory (Azure AD), part of Microsoft Entra, including administrator, end user, and synchronization activity, you can use the [Azure Active Directory security and activity reports](../reports-monitoring/overview-reports.md).
Follow these steps to view the audit history for Azure AD roles.
active-directory Pim Perform Azure Ad Roles And Resource Roles Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-perform-azure-ad-roles-and-resource-roles-review.md
# Perform an access review of Azure resource and Azure AD roles in PIM
-Privileged Identity Management (PIM) simplifies how enterprises manage privileged access to resources in Azure Active Directory (AD) and other Microsoft online services like Microsoft 365 or Microsoft Intune. Follow the steps in this article to perform reviews of access to roles.
+Privileged Identity Management (PIM) simplifies how enterprises manage privileged access to resources in Azure Active Directory (AD), part of Microsoft Entra, and other Microsoft online services like Microsoft 365 or Microsoft Intune. Follow the steps in this article to perform reviews of access to roles.
If you are assigned to an administrative role, your organization's privileged role administrator may ask you to regularly confirm that you still need that role for your job. You might get an email that includes a link, or you can go straight to the [Azure portal](https://portal.azure.com) and begin.
active-directory Pim Resource Roles Activate Your Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-activate-your-roles.md
na Previously updated : 02/02/2022 Last updated : 06/24/2022
# Activate my Azure resource roles in Privileged Identity Management
-Use Privileged Identity Management (PIM) to allow eligible role members for Azure resources to schedule activation for a future date and time. They can also select a specific activation duration within the maximum (configured by administrators).
+Use Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra, to allow eligible role members for Azure resources to schedule activation for a future date and time. They can also select a specific activation duration within the maximum (configured by administrators).
This article is for members who need to activate their Azure resource role in Privileged Identity Management.
active-directory Pim Resource Roles Approval Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-approval-workflow.md
na Previously updated : 10/07/2021 Last updated : 06/24/2022
# Approve or deny requests for Azure resource roles in Privileged Identity Management
-With Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), you can configure roles to require approval for activation, and choose users or groups from your Azure AD organization as delegated approvers. We recommend selecting two or more approvers for each role to reduce workload for the privileged role administrator. Delegated approvers have 24 hours to approve requests. If a request is not approved within 24 hours, then the eligible user must re-submit a new request. The 24 hour approval time window is not configurable.
+With Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra, you can configure roles to require approval for activation, and choose users or groups from your Azure AD organization as delegated approvers. We recommend selecting two or more approvers for each role to reduce workload for the privileged role administrator. Delegated approvers have 24 hours to approve requests. If a request is not approved within 24 hours, then the eligible user must re-submit a new request. The 24-hour approval time window is not configurable.
Follow the steps in this article to approve or deny requests for Azure resource roles.
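The fixed 24-hour approval window described above behaves like a simple request timeout. A sketch under that reading (hypothetical helper, not a PIM API):

```python
# Sketch of the fixed 24-hour approval window (illustrative, not a PIM API).
from datetime import datetime, timedelta

APPROVAL_WINDOW = timedelta(hours=24)  # fixed; not configurable, per the text

def must_resubmit(submitted: datetime, now: datetime) -> bool:
    """A pending activation request not approved within 24 hours lapses,
    and the eligible user must submit a new request."""
    return (now - submitted) > APPROVAL_WINDOW
```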
active-directory Pim Resource Roles Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-assign-roles.md
na Previously updated : 04/18/2022 Last updated : 06/24/2022
# Assign Azure resource roles in Privileged Identity Management
-Azure Active Directory (Azure AD) Privileged Identity Management (PIM) can manage the built-in Azure resource roles, as well as custom roles, including (but not limited to):
+With Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra, you can manage the built-in Azure resource roles, as well as custom roles, including (but not limited to):
- Owner - User Access Administrator
active-directory Pim Resource Roles Configure Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-configure-alerts.md
na Previously updated : 06/03/2022 Last updated : 06/24/2022
# Configure security alerts for Azure roles in Privileged Identity Management
-Privileged Identity Management (PIM) generates alerts when there is suspicious or unsafe activity in your Azure Active Directory (Azure AD) organization. When an alert is triggered, it shows up on the Alerts page.
+Privileged Identity Management (PIM) generates alerts when there is suspicious or unsafe activity in your organization in Azure Active Directory (Azure AD), part of Microsoft Entra. When an alert is triggered, it shows up on the Alerts page.
![Azure resources - Alerts page listing alert, risk level, and count](media/pim-resource-roles-configure-alerts/rbac-alerts-page.png)
active-directory Pim Resource Roles Configure Role Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-configure-role-settings.md
na Previously updated : 12/06/2021 Last updated : 06/24/2022
# Configure Azure resource role settings in Privileged Identity Management
-When you configure Azure resource role settings, you define the default settings that are applied to Azure resource role assignments in Azure Active Directory (Azure AD) Privileged Identity Management (PIM). Use the following procedures to configure the approval workflow and specify who can approve or deny requests.
+When you configure Azure resource role settings, you define the default settings that are applied to Azure role assignments in Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra. Use the following procedures to configure the approval workflow and specify who can approve or deny requests.
## Open role settings
active-directory Pim Resource Roles Custom Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-custom-role-policy.md
na Previously updated : 10/07/2021 Last updated : 06/27/2022
# Use Azure custom roles in Privileged Identity Management
-You might need to apply strict Privileged Identity Management (PIM) settings to some users in a privileged role in your Azure Active Directory (Azure AD) organization, while providing greater autonomy for others. Consider for example a scenario in which your organization hires several contract associates to assist in the development of an application that will run in an Azure subscription.
+You might need to apply stricter just-in-time settings to some users in a privileged role in your organization in Azure Active Directory (Azure AD), part of Microsoft Entra, while providing greater autonomy for others. For example, suppose your organization hires several contract associates to help develop an application that will run in an Azure subscription.
As a resource administrator, you want employees to be eligible for access without requiring approval. However, all contract associates must be approved when they request access to the organization's resources.
-Follow the steps outlined in the next section to set up targeted Privileged Identity Management settings for Azure resource roles.
+Follow the steps outlined in the next section to set up targeted Privileged Identity Management (PIM) settings for Azure resource roles.
## Create the custom role
active-directory Pim Resource Roles Discover Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-discover-resources.md
na Previously updated : 12/07/2021 Last updated : 06/27/2022
# Discover Azure resources to manage in Privileged Identity Management
-Using Azure Active Directory (Azure AD) Privileged Identity Management (PIM), you can improve the protection of your Azure resources. This is helpful to:
+You can use Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra, to improve the protection of your Azure resources. This helps:
- Organizations that already use Privileged Identity Management to protect Azure AD roles - Management group and subscription owners who are trying to secure production resources
active-directory Pim Resource Roles Overview Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-overview-dashboards.md
na Previously updated : 10/07/2021 Last updated : 06/27/2022
# Use a resource dashboard to perform an access review in Privileged Identity Management
-You can use a resource dashboard to perform an access review in Privileged Identity Management (PIM). The Admin View dashboard in Azure Active Directory (Azure AD) has three primary components:
+You can use a resource dashboard to perform an access review in Privileged Identity Management (PIM). The Admin View dashboard in Azure Active Directory (Azure AD), part of Microsoft Entra, has three primary components:
- A graphical representation of resource role activations - Charts that display the distribution of role assignments by assignment type
active-directory Pim Resource Roles Renew Extend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-renew-extend.md
# Extend or renew Azure resource role assignments in Privileged Identity Management
-Azure Active Directory (Azure AD) Privileged Identity Management (PIM) provides controls to manage the access and assignment lifecycle for Azure resources. Administrators can assign roles using start and end date-time properties. When the assignment end approaches, Privileged Identity Management sends email notifications to the affected users or groups. It also sends email notifications to administrators of the resource to ensure that appropriate access is maintained. Assignments might be renewed and remain visible in an expired state for up to 30 days, even if access is not extended.
+Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra, provides controls to manage the access and assignment lifecycle for Azure resources. Administrators can assign roles using start and end date-time properties. When the assignment end approaches, Privileged Identity Management sends email notifications to the affected users or groups. It also sends email notifications to administrators of the resource to ensure that appropriate access is maintained. Assignments might be renewed and remain visible in an expired state for up to 30 days, even if access is not extended.
## Who can extend and renew?
active-directory Pim Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-roles.md
Previously updated : 10/07/2021 Last updated : 06/27/2022
# Roles you can't manage in Privileged Identity Management
-Azure Active Directory (Azure AD) Privileged Identity Management (PIM) enables you to manage all [Azure AD roles](../roles/permissions-reference.md) and all [Azure roles](../../role-based-access-control/built-in-roles.md). Azure roles can also include your custom roles attached to your management groups, subscriptions, resource groups, and resources. However, there are few roles that you cannot manage. This article describes the roles you can't manage in Privileged Identity Management.
+You can manage just-in-time assignments to all [Azure AD roles](../roles/permissions-reference.md) and all [Azure roles](../../role-based-access-control/built-in-roles.md) using Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra. Azure roles include built-in and custom roles attached to your management groups, subscriptions, resource groups, and resources. However, there are a few roles that you can't manage. This article describes the roles you can't manage in Privileged Identity Management.
## Classic subscription administrator roles
active-directory Pim Security Wizard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-security-wizard.md
Previously updated : 10/07/2021 Last updated : 06/27/2022
# Discovery and Insights (preview) for Azure AD roles (formerly Security Wizard)
-If you're starting out with Privileged Identity Management (PIM) in your Azure Active Directory (Azure AD) organization, you can use the **Discovery and insights (preview)** page to get started. This feature shows you who is assigned to privileged roles in your organization and how to use PIM to quickly change permanent role assignments into just-in-time assignments. You can view or make changes to your permanent privileged role assignments in **Discovery and Insights (preview)**. It's an analysis tool and an action tool.
+If you're starting out using Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra, to manage role assignments in your organization, you can use the **Discovery and insights (preview)** page to get started. This feature shows you who is assigned to privileged roles in your organization and how to use PIM to quickly change permanent role assignments into just-in-time assignments. You can view or make changes to your permanent privileged role assignments in **Discovery and Insights (preview)**. It's an analysis tool and an action tool.
## Discovery and insights (preview)
active-directory Pim Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-troubleshoot.md
# Troubleshoot access to Azure resources denied in Privileged Identity Management
-Are you having a problem with Privileged Identity Management (PIM) in Azure Active Directory (Azure AD)? The information that follows can help you to get things working again.
+Are you having a problem with Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra? The information that follows can help you get things working again.
## Access to Azure resources denied
active-directory Powershell For Azure Ad Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/powershell-for-azure-ad-roles.md
# PowerShell for Azure AD roles in Privileged Identity Management
-This article contains instructions for using Azure Active Directory (Azure AD) PowerShell cmdlets to manage Azure AD roles in Privileged Identity Management (PIM). It also tells you how to get set up with the Azure AD PowerShell module.
+This article tells you how to use PowerShell cmdlets to manage Azure AD roles using Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra. It also tells you how to get set up with the Azure AD PowerShell module.
## Installation and Setup
active-directory Subscription Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/subscription-requirements.md
na Previously updated : 10/07/2021 Last updated : 06/27/2022
# License requirements to use Privileged Identity Management
-To use Azure Active Directory (Azure AD) Privileged Identity Management (PIM), a directory must have a valid license. Furthermore, licenses must be assigned to the administrators and relevant users. This article describes the license requirements to use Privileged Identity Management.
+To use Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra, a tenant must have a valid license. Licenses must also be assigned to the administrators and relevant users. This article describes the license requirements to use Privileged Identity Management.
## Valid licenses
active-directory Alexishr Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/alexishr-tutorial.md
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the **New identity provider** section, perform the following steps:
- ![Screenshot shows the Account Settings.](./media/alexishr-tutorial/account.png " Settings")
+ ![Screenshot shows the Account Settings.](./media/alexishr-tutorial/account.png "Settings")
1. In the **Identity provider SSO URL** textbox, paste the **Login URL** value which you have copied from the Azure portal.
active-directory Amazon Managed Grafana Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/amazon-managed-grafana-tutorial.md
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the **Review and create** page, verify all the workspace details and click **Create workspace**.
- ![Screenshot shows review and create page.](./media/amazon-managed-grafana-tutorial/review-workspace.png " Create Workspace")
+ ![Screenshot shows review and create page.](./media/amazon-managed-grafana-tutorial/review-workspace.png "Create Workspace")
1. After creating the workspace, click **Complete setup** to complete the SAML configuration.
active-directory Hiretual Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hiretual-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
1. Click the **Properties** tab on the left menu bar, copy the value of **User access URL**, and save it on your computer.
- ![Screenshot shows the User access URL.](./media/hiretual-tutorial/access-url.png " SSO Configuration")
+ ![Screenshot shows the User access URL.](./media/hiretual-tutorial/access-url.png "SSO Configuration")
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the **SAML2.0 Authentication** page, perform the following steps:
- ![Screenshot shows the SSO Configuration.](./media/hiretual-tutorial/configuration.png " SSO Configuration")
+ ![Screenshot shows the SSO Configuration.](./media/hiretual-tutorial/configuration.png "SSO Configuration")
1. In the **SAML2.0 SSO URL** textbox, paste the **User access URL** which you have copied from the Azure portal.
active-directory Tableau Online Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tableau-online-provisioning-tutorial.md
Title: 'Tutorial: Configure Tableau Online for automatic user provisioning with Azure Active Directory | Microsoft Docs'
-description: Learn how to configure Azure Active Directory to automatically provision and deprovision user accounts to Tableau Online.
+ Title: 'Tutorial: Configure Tableau Cloud for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Tableau Cloud.
+ writer: twimmers-+
+ms.assetid: b4038c18-2bfd-47cb-8e74-3873dc85a796
Last updated 03/27/2019-+
-# Tutorial: Configure Tableau Online for automatic user provisioning
+# Tutorial: Configure Tableau Cloud for automatic user provisioning
++
+This tutorial describes the steps you need to perform in both Tableau Cloud and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Tableau Cloud](https://www.tableau.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
-This tutorial demonstrates the steps to perform in Tableau Online and Azure Active Directory (Azure AD) to configure Azure AD to automatically provision and deprovision users and groups to Tableau Online.
-> [!NOTE]
-> This tutorial describes a connector that's built on top of the Azure AD user provisioning service. For information on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to software-as-a-service (SaaS) applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
+++
+## Capabilities supported
+> [!div class="checklist"]
+> * Create users in Tableau Cloud.
+> * Remove users in Tableau Cloud when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and Tableau Cloud.
+> * Provision groups and group memberships in Tableau Cloud.
+> * [Single sign-on](tableauonline-tutorial.md) to Tableau Cloud (recommended).
## Prerequisites
-The scenario outlined in this tutorial assumes that you have:
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
-* An Azure AD tenant.
-* A [Tableau Online tenant](https://www.tableau.com/).
-* A user account in Tableau Online with admin permissions.
-> [!NOTE]
-> The Azure AD provisioning integration relies on the [Tableau Online REST API](https://onlinehelp.tableau.com/current/api/rest_api/en-us/help.htm). This API is available to Tableau Online developers.
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application Administrator, Application Owner, or Global Administrator).
+* A [Tableau Cloud tenant](https://www.tableau.com/).
+* A user account in Tableau Cloud with Admin permissions.
-## Add Tableau Online from the Azure Marketplace
-Before you configure Tableau Online for automatic user provisioning with Azure AD, add Tableau Online from the Azure Marketplace to your list of managed SaaS applications.
+> [!NOTE]
+> The Azure AD provisioning integration relies on the [Tableau Cloud REST API](https://onlinehelp.tableau.com/current/api/rest_api/en-us/help.htm). This API is available to Tableau Cloud developers.
-To add Tableau Online from the Marketplace, follow these steps.
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Tableau Cloud](../app-provisioning/customize-application-attributes.md).
-1. In the [Azure portal](https://portal.azure.com), in the navigation pane on the left, select **Azure Active Directory**.
+## Step 2. Configure Tableau Cloud to support provisioning with Azure AD
- ![The Azure Active Directory icon](common/select-azuread.png)
+Use the following steps to enable SCIM support with Azure Active Directory:
+1. The SCIM functionality requires that you configure your site to support SAML single sign-on. If you have not done this yet, complete the following sections in [Configure SAML with Azure Active Directory](https://help.tableau.com/current/online/en-us/saml_config_azure_ad.htm):
+ * Step 1: [Open the Tableau Cloud SAML Settings](https://help.tableau.com/current/online/en-us/saml_config_azure_ad.htm#open-the-tableau-online-saml-settings).
+ * Step 2: [Add Tableau Cloud to your Azure Active Directory applications](https://help.tableau.com/current/online/en-us/saml_config_azure_ad.htm#add-tableau-online-to-your-azure-ad-applications).
+
+ > [!NOTE]
   > If you don't set up SAML single sign-on, your user will be unable to sign into Tableau Cloud after they have been provisioned unless you manually change the user's authentication method from SAML to Tableau or Tableau MFA in Tableau Cloud.
-2. Go to **Enterprise applications**, and then select **All applications**.
+1. In Tableau Cloud, navigate to the **Settings > Authentication** page. Then, under **Automatic Provisioning and Group Synchronization (SCIM)**, select the **Enable SCIM** check box. This populates the **Base URL** and **Secret** boxes with values you will use in the SCIM configuration of your IdP.
+ > [!NOTE]
   > The secret token is displayed only immediately after it is generated. If you lose it before you can apply it to Azure Active Directory, you can select **Generate New Secret**. In addition, the secret token is tied to the Tableau Cloud user account of the site administrator who enables SCIM support. If that user's site role changes or the user is removed from the site, the secret token becomes invalid, and another site administrator must generate a new secret token and apply it to Azure Active Directory.
- ![The Enterprise applications blade](common/enterprise-applications.png)
-3. To add a new application, select **New application** at the top of the dialog box.
- ![The New application button](common/add-new-app.png)
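Once SCIM is enabled, you can sanity-check the **Base URL** and **Secret** values from the previous step with a standard SCIM 2.0 request. The sketch below is an unofficial illustration, not part of the official docs: the `/Users` path follows the SCIM 2.0 convention, and Bearer authentication is used because Basic authentication does not work with the SCIM 2.0 endpoint.

```python
import json
import urllib.request

def scim_headers(secret_token: str) -> dict:
    """SCIM 2.0 requires Bearer authentication; Basic auth will not work."""
    return {
        "Authorization": f"Bearer {secret_token}",
        "Content-Type": "application/scim+json",
    }

def list_users(base_url: str, secret_token: str) -> dict:
    """GET <Base URL>/Users -- the standard SCIM 2.0 path for listing users."""
    req = urllib.request.Request(
        f"{base_url.rstrip('/')}/Users", headers=scim_headers(secret_token)
    )
    with urllib.request.urlopen(req) as resp:  # live network call to the SCIM endpoint
        return json.loads(resp.read())

# Usage (placeholders, not real values):
# print(list_users("<Base URL from Tableau Cloud settings>", "<Secret>"))
```

If the call returns an HTTP 401, re-check that you copied the secret immediately after generating it, since it is shown only once.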
+## Step 3. Add Tableau Cloud from the Azure AD application gallery
-4. In the search box, enter **Tableau Online** and select **Tableau Online** from the result panel. To add the application, select **Add**.
+Add Tableau Cloud from the Azure AD application gallery to start managing provisioning to Tableau Cloud. If you have previously set up Tableau Cloud for SSO, you can use the same application. However, it's recommended you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
- ![Tableau Online in the results list](common/search-new-app.png)
-## Assign users to Tableau Online
+## Step 4. Define who will be in scope for provisioning
-Azure Active Directory uses a concept called *assignments* to determine which users should receive access to selected apps. In the context of automatic user provisioning, only the users or groups that were assigned to an application in Azure AD are synchronized.
-Before you configure and enable automatic user provisioning, decide which users or groups in Azure AD need access to Tableau Online. To assign these users or groups to Tableau Online, follow the instructions in [Assign a user or group to an enterprise app](../manage-apps/assign-user-or-group-access-portal.md).
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or on attributes of the user and group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-### Important tips for assigning users to Tableau Online
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* We recommend that you assign a single Azure AD user to Tableau Online to test the automatic user provisioning configuration. You can assign additional users or groups later.
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
-* When you assign a user to Tableau Online, select any valid application-specific role, if available, in the assignment dialog box. Users with the **Default Access** role are excluded from provisioning.
-## Configure automatic user provisioning to Tableau Online
+## Step 5. Configure automatic user provisioning to Tableau Cloud
-This section guides you through the steps to configure the Azure AD provisioning service. Use it to create, update, and disable users or groups in Tableau Online based on user or group assignments in Azure AD.
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and groups in Tableau Cloud based on user and group assignments in Azure AD.
> [!TIP]
-> You also can enable SAML-based Single Sign-On for Tableau Online. Follow the instructions in the [Tableau Online single sign-on tutorial](tableauonline-tutorial.md). Single sign-on can be configured independently of automatic user provisioning, although these two features complement each other.
+> You must enable SAML-based single sign-on for Tableau Cloud. Follow the instructions in the [Tableau Cloud single sign-on tutorial](tableauonline-tutorial.md). If SAML isn't enabled, then the user that is provisioned will not be able to sign in.
-### Configure automatic user provisioning for Tableau Online in Azure AD
+### To configure automatic user provisioning for Tableau Cloud in Azure AD:
-1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise applications** > **All applications** > **Tableau Online**.
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
![Enterprise applications blade](common/enterprise-applications.png)
-2. In the applications list, select **Tableau Online**.
+1. In the applications list, select **Tableau Cloud**.
+
+ ![The Tableau Cloud link in the Applications list](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
++
+ ![Provisioning tab](common/provisioning.png)
++
+1. Set the **Provisioning Mode** to **Automatic**.
+
- ![The Tableau Online link in the applications list](common/all-applications.png)
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
-3. Select the **Provisioning** tab.
+1. In the **Admin Credentials** section, input your Tableau Cloud Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Tableau Cloud. If the connection fails, ensure your Tableau Cloud account has Admin permissions and try again.
- ![Tableau Online Provisioning](./media/tableau-online-provisioning-tutorial/ProvisioningTab.png)
-4. Set the **Provisioning Mode** to **Automatic**.
+ ![Token](media/tableau-online-provisioning-tutorial/tableau-test-connections.png)
- ![Tableau Online Provisioning Mode](./media/tableau-online-provisioning-tutorial/ProvisioningCredentials.png)
-5. Under the **Admin Credentials** section, input the domain, admin username, admin password, and content URL of your Tableau Online account:
+ > [!NOTE]
   > You will have two options for your Authentication Method: **Bearer Authentication** and **Basic Authentication**. Make sure that you select Bearer Authentication. Basic authentication will not work for the SCIM 2.0 endpoint.
- * In the **Domain** box, fill in the subdomain based on Step 6.
- * In the **Admin Username** box, fill in the username of the admin account on your Tableau Online Tenant. An example is admin@contoso.com.
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
- * In the **Admin Password** box, fill in the password of the admin account that corresponds to the admin username.
+ ![Notification Email](common/provisioning-notification-email.png)
- * In the **Content URL** box, fill in the subdomain based on Step 6.
-6. After you sign in to your administrative account for Tableau Online, you can get the values for **Domain** and **Content URL** from the URL of the admin page.
+1. Select **Save**.
- * The **Domain** for your Tableau Online account can be copied from this part of the URL:
+1. In the **Mappings** section, select **Synchronize Azure Active Directory Users to Tableau Cloud**.
- ![Tableau Online Domain](./media/tableau-online-provisioning-tutorial/DomainUrlPart.png)
+1. Review the user attributes that are synchronized from Azure AD to Tableau Cloud in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Tableau Cloud for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Tableau Cloud API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
- * The **Content URL** for your Tableau Online account can be copied from this section. It's a value that's defined during account setup. In this example, the value is "contoso":
   |Attribute|Type|Supported for filtering|Required by Tableau Cloud|
   |---|---|---|---|
   |userName|String|&check;|&check;|
   |active|Boolean|||
   |roles|String|||
- ![Tableau Online Content URL](./media/tableau-online-provisioning-tutorial/ContentUrlPart.png)
+ > [!NOTE]
   > The displayName attribute in Tableau Cloud will be mapped to the userPrincipalName attribute in Azure AD. When a provisioned user signs into Azure AD for the first time, they will be asked to create an account where they will need to enter a first name and last name. Tableau Cloud will automatically update the value of the displayName field based on the first name and last name values provided by the provisioned user. Therefore, the displayName you see in Azure AD may differ from the displayName that appears in Tableau Cloud, based on the user's input.
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Tableau Cloud**.
- > [!NOTE]
- > Your **Domain** might be different from the one shown here.
-7. After you fill in the boxes shown in Step 5, select **Test Connection** to make sure that Azure AD can connect to Tableau Online. If the connection fails, make sure your Tableau Online account has admin permissions and try again.
+1. Review the group attributes that are synchronized from Azure AD to Tableau Cloud in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Tableau Cloud for update operations. Select the **Save** button to commit any changes.
- ![Tableau Online Test Connection](./media/tableau-online-provisioning-tutorial/TestConnection.png)
-8. In the **Notification Email** box, enter the email address of the person or group to receive the provisioning error notifications. Select the **Send an email notification when a failure occurs** check box.
   |Attribute|Type|Supported for filtering|Required by Tableau Cloud|
   |---|---|---|---|
   |displayName|String|&check;||
   |members|Reference|||
- ![Tableau Online Notification Email](./media/tableau-online-provisioning-tutorial/EmailNotification.png)
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-9. Select **Save**.
-10. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Tableau**.
+1. To enable the Azure AD provisioning service for Tableau Cloud, change the **Provisioning Status** to **On** in the **Settings** section.
- ![Tableau Online user synchronization](./media/tableau-online-provisioning-tutorial/UserMappings.png)
-11. Review the user attributes that are synchronized from Azure AD to Tableau Online in the **Attribute Mappings** section. The attributes selected as **Matching** properties are used to match the user accounts in Tableau Online for update operations. To save any changes, select **Save**.
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
- ![Tableau Online matching user attributes](./media/tableau-online-provisioning-tutorial/attribute.png)
-12. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Tableau**.
+1. Define the users and groups that you would like to provision to Tableau Cloud by choosing the desired values in **Scope** in the **Settings** section.
- ![Tableau Online group synchronization](./media/tableau-online-provisioning-tutorial/GroupMappings.png)
+ ![Provisioning Scope](common/provisioning-scope.png)
-13. Review the group attributes that are synchronized from Azure AD to Tableau Online in the **Attribute Mappings** section. The attributes selected as **Matching** properties are used to match the user accounts in Tableau Online for update operations. To save any changes, select **Save**.
- ![Tableau Online matching group attributes](./media/tableau-online-provisioning-tutorial/GroupAttributeMapping.png)
+1. When you're ready to provision, click **Save**.
-14. To configure scoping filters, follow the instructions in the [scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
-15. To enable the Azure AD provisioning service for Tableau Online, in the **Settings** section, change **Provisioning Status** to **On**.
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to complete than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
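Taken together, the user attribute mappings above (userName, active, roles) correspond to a SCIM user payload along the following lines. This is a hypothetical sketch for illustration only; the exact payload the provisioning service sends may differ.

```python
# Hypothetical sketch of a SCIM 2.0 user payload implied by the attribute
# mappings above; real payloads from the Azure AD provisioning service may differ.
def build_scim_user(user_principal_name: str, active: bool, site_role: str) -> dict:
    return {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": user_principal_name,  # the matching attribute
        "active": active,
        "roles": [{"value": site_role}],  # e.g. "Creator" or "Explorer"
    }

# B.Simon is the example user used elsewhere in these tutorials.
user = build_scim_user("b.simon@contoso.com", True, "Creator")
```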
- ![Tableau Online Provisioning Status](./media/tableau-online-provisioning-tutorial/ProvisioningStatus.png)
-16. Define the users or groups that you want to provision to Tableau Online. In the **Settings** section, select the values you want in **Scope**.
+### Recommendations
+Tableau Cloud will only store the highest privileged role that is assigned to a user. In other words, if a user is assigned to two groups, the user's role will reflect the highest privileged role.
- ![Tableau Online Scope](./media/tableau-online-provisioning-tutorial/ScopeSync.png)
-17. When you're ready to provision, select **Save**.
+To keep track of role assignments, you can create a purpose-specific group for each role. For example, you can create groups such as Tableau – Creator and Tableau – Explorer. Assignments would then look like:
+* Tableau – Creator: Creator
+* Tableau – Explorer: Explorer
- ![Tableau Online Save](./media/tableau-online-provisioning-tutorial/SaveProvisioning.png)
+Once provisioning is set up, make role changes directly in Azure Active Directory. Otherwise, you may end up with role inconsistencies between Tableau Cloud and Azure Active Directory.
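One way to reason about the highest-privileged-role behavior: if purpose-specific groups like those above are mapped to site roles, a user's effective role is the most privileged one across their groups. The sketch below is a hypothetical illustration; the group names and the privilege ordering are assumptions, not part of the provisioning service.

```python
# Hypothetical mapping of purpose-specific Azure AD groups to Tableau site roles,
# following the recommendation above. Names and ordering are illustrative.
GROUP_TO_ROLE = {
    "Tableau - Creator": "Creator",
    "Tableau - Explorer": "Explorer",
}

def highest_privileged_role(user_groups):
    """Tableau Cloud keeps only the most privileged role a user is assigned."""
    precedence = ["Creator", "Explorer", "Viewer", "Unlicensed"]  # assumed, most to least
    roles = [GROUP_TO_ROLE[g] for g in user_groups if g in GROUP_TO_ROLE]
    for role in precedence:
        if role in roles:
            return role
    return "Unlicensed"
```

For example, a user in both groups above would end up with the Creator role.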
-This operation starts the initial synchronization of all users or groups defined in **Scope** in the **Settings** section. The initial sync takes longer to perform than later syncs. They occur approximately every 40 minutes as long as the Azure AD provisioning service runs.
+### Valid Tableau site role values
+On the **Select a Role** page in your Azure Active Directory portal, the valid Tableau Site Role values are: **Creator, SiteAdministratorCreator, Explorer, SiteAdministratorExplorer, ExplorerCanPublish, Viewer, or Unlicensed**.
-You can use the **Synchronization Details** section to monitor progress and follow links to the provisioning activity report. The report describes all the actions performed by the Azure AD provisioning service on Tableau Online.
-For information on how to read the Azure AD provisioning logs, see [Reporting on automatic user account provisioning](../app-provisioning/check-status-user-account-provisioning.md).
+If you select a role that is not in the above list, such as a legacy (pre-v2018.1) role, you will experience an error.
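If you generate role assignments programmatically, a small guard against invalid values can catch this error early. This is a hypothetical helper built from the list above; the legacy role name in the comment is only an example.

```python
# The valid Tableau site role values listed above, as a quick validation helper.
# (Hypothetical helper, not part of the provisioning service.)
VALID_SITE_ROLES = {
    "Creator", "SiteAdministratorCreator", "Explorer",
    "SiteAdministratorExplorer", "ExplorerCanPublish", "Viewer", "Unlicensed",
}

def check_site_role(role: str) -> str:
    if role not in VALID_SITE_ROLES:
        # Legacy (pre-v2018.1) roles such as "Interactor" would fail here.
        raise ValueError(f"{role!r} is not a valid Tableau site role")
    return role
```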
++
+### Update a Tableau Cloud application to use the Tableau Cloud SCIM 2.0 endpoint
-## Update a Tableau Cloud application to use the Tableau Cloud SCIM 2.0 endpoint
In June 2022, Tableau released a SCIM 2.0 connector. Completing the steps below will update applications configured to use the Tableau API endpoint to use the SCIM 2.0 endpoint. These steps will remove any customizations previously made to the Tableau Cloud application, including:
* Authentication details
* Scoping filters
* Custom attribute mappings
+>[!Note]
+>Be sure to note any changes that have been made to the settings listed above before completing the steps below. Failure to do so will result in the loss of customized settings.
++
+1. Sign into the [Azure portal](https://portal.azure.com).
++
+1. Navigate to your current Tableau Cloud app under **Azure Active Directory > Enterprise Applications**.
-> [!NOTE]
-> Be sure to note any changes that have been made to the settings listed above before completing the steps below. Failure to do so will result in the loss of customized settings.
-1. Sign into the Azure portal at https://portal.azure.com
-2. Navigate to your current Tableau Cloud app under Azure Active Directory > Enterprise Applications
-3. In the Properties section of your new custom app, copy the Object ID.
+1. In the Properties section of your new custom app, copy the **Object ID**.
- ![Screenshot of Tableau Cloud app in the Azure portal.](./media/tableau-online-provisioning-tutorial/tableau-cloud-properties.png)
-4. In a new web browser window, go to https://developer.microsoft.com/graph/graph-explorer and sign in as the administrator for the Azure AD tenant where your app is added.
+ ![Screenshot of Tableau Cloud app in the Azure portal.](media/tableau-online-provisioning-tutorial/tableau-cloud-properties.png)
- ![Screenshot of Microsoft Graph explorer sign in page.](./media/workplace-by-facebook-provisioning-tutorial/permissions.png)
-5. Check to make sure the account being used has the correct permissions. The permission "Directory.ReadWrite.All" is required to make this change.
+1. In a new web browser window, navigate to `https://developer.microsoft.com/graph/graph-explorer` and sign in as the administrator for the Azure AD tenant where your app is added.
- ![Screenshot of Microsoft Graph settings option.](./media/workplace-by-facebook-provisioning-tutorial/permissions-2.png)
+ ![Screenshot of Microsoft Graph explorer sign in page.](media/tableau-online-provisioning-tutorial/tableau-graph-explorer-signin.png)
+
- ![Screenshot of Microsoft Graph permissions.](./media/workplace-by-facebook-provisioning-tutorial/permissions-3.png)
+1. Check to make sure the account being used has the correct permissions. The permission **Directory.ReadWrite.All** is required to make this change.
-6. Using the ObjectID selected from the app previously, run the following command:
+ ![Screenshot of Microsoft Graph settings option.](media/tableau-online-provisioning-tutorial/tableau-graph-settings.png)
-```
-GET https://graph.microsoft.com/beta/servicePrincipals/[object-id]/synchronization/jobs/
-```
+ ![Screenshot of Microsoft Graph permissions.](media/tableau-online-provisioning-tutorial/tableau-graph-permissions.png)
-7. Taking the "id" value from the response body of the GET request from above, run the command below, replacing "[job-id]" with the id value from the GET request. The value should have the format of "Tableau.xxxxxxxxxxxxxxx.xxxxxxxxxxxxxxx":
-```
-DELETE https://graph.microsoft.com/beta/servicePrincipals/[object-id]/synchronization/jobs/[job-id]
-```
-8. In the Graph Explorer, run the command below. Replace "[object-id]" with the service principal ID (object ID) copied from the third step.
-```
-POST https://graph.microsoft.com/beta/servicePrincipals/[object-id]/synchronization/jobs { "templateId": "TableauOnlineSCIM" }
-```
+1. Using the ObjectID selected from the app previously, run the following command:
-![Screenshot of Microsoft Graph request.](./media/tableau-online-provisioning-tutorial/tableau-cloud-graph.png)
+ `GET https://graph.microsoft.com/beta/servicePrincipals/[object-id]/synchronization/jobs/`
-9. Return to the first web browser window and select the Provisioning tab for your application. Your configuration will have been reset. You can confirm the upgrade has taken place by confirming the Job ID starts with "TableauOnlineSCIM".
+1. Taking the "id" value from the response body of the GET request from above, run the command below, replacing "[job-id]" with the id value from the GET request. The value should have the format of "Tableau.xxxxxxxxxxxxxxx.xxxxxxxxxxxxxxx":
-10. Under the Admin Credentials section, select "Bearer Authentication" as the authentication method and enter the Tenant URL and Secret Token of the Tableau instance you wish to provision to.
-![Screenshot of Admin Credentials in Tableau Cloud in the Azure portal.](./media/tableau-online-provisioning-tutorial/tableau-cloud-creds.png)
+ `DELETE https://graph.microsoft.com/beta/servicePrincipals/[object-id]/synchronization/jobs/[job-id]`
-11. Restore any previous changes you made to the application (Authentication details, Scoping filters, Custom attribute mappings) and re-enable provisioning.
+1. In the Graph Explorer, run the command below. Replace "[object-id]" with the service principal ID (object ID) copied from the third step.
-> [!NOTE]
-> Failure to restore the previous settings may results in attributes (name.formatted for example) updating in Workplace unexpectedly. Be sure to check the configuration before enabling provisioning
+ `POST https://graph.microsoft.com/beta/servicePrincipals/[object-id]/synchronization/jobs { "templateId": "TableauOnlineSCIM" }`
+
+ ![Screenshot of Microsoft Graph request.](media/tableau-online-provisioning-tutorial/tableau-cloud-graph.png)
+
+1. Return to the first web browser window and select the Provisioning tab for your application. Your configuration will have been reset. You can confirm the upgrade has taken place by confirming the Job ID starts with **TableauOnlineSCIM**.
+
+1. Under the Admin Credentials section, select "Bearer Authentication" as the authentication method and enter the Tenant URL and Secret Token of the Tableau instance you wish to provision to.
+ ![Screenshot of Admin Credentials in Tableau Cloud in the Azure portal.](media/tableau-online-provisioning-tutorial/tableau-cloud-creds.png)
+
+1. Restore any previous changes you made to the application (Authentication details, Scoping filters, Custom attribute mappings) and re-enable provisioning.
+
+>[!Note]
>Failure to restore the previous settings may result in attributes (name.formatted, for example) updating in Tableau Cloud unexpectedly. Be sure to check the configuration before enabling provisioning.
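The three Graph Explorer commands in the steps above can also be run as a script. The sketch below is an unofficial illustration: it assumes you already have an access token with the Directory.ReadWrite.All permission (token acquisition is not shown), and it uses the same beta URLs and `TableauOnlineSCIM` template ID shown in the steps.

```python
import json
import urllib.request

GRAPH_BETA = "https://graph.microsoft.com/beta"

def jobs_url(object_id: str, job_id: str = "") -> str:
    """Build the servicePrincipal synchronization-jobs URL used in the steps above."""
    url = f"{GRAPH_BETA}/servicePrincipals/{object_id}/synchronization/jobs"
    return f"{url}/{job_id}" if job_id else url

def graph_request(url: str, token: str, method: str = "GET", body: dict = None):
    """Send one Microsoft Graph call with a bearer token (live network call)."""
    data = json.dumps(body).encode() if body is not None else None
    req = urllib.request.Request(url, data=data, method=method, headers={
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    })
    with urllib.request.urlopen(req) as resp:
        payload = resp.read()
        return json.loads(payload) if payload else None  # DELETE returns no body

def migrate_to_scim(object_id: str, token: str) -> None:
    """GET the existing jobs, DELETE them, then POST the SCIM 2.0 job."""
    jobs = graph_request(jobs_url(object_id), token)["value"]
    for job in jobs:  # job ids look like "Tableau.xxxxxxxxxxxxxxx.xxxxxxxxxxxxxxx"
        graph_request(jobs_url(object_id, job["id"]), token, method="DELETE")
    graph_request(jobs_url(object_id), token, method="POST",
                  body={"templateId": "TableauOnlineSCIM"})
```

After running this, continue with the Provisioning tab steps above to confirm the new Job ID and re-enter the Bearer credentials.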
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
## Change log * 09/30/2020 - Added support for attribute "authSetting" for Users.
+* 06/24/2022 - Updated the app to be SCIM 2.0 compliant.
-## Additional resources
+## More resources
-* [Manage user account provisioning for enterprise apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) ## Next steps * [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)-
-<!--Image references-->
-[1]: ./media/tableau-online-provisioning-tutorial/tutorial_general_01.png
-[2]: ./media/tableau-online-provisioning-tutorial/tutorial_general_02.png
-[3]: ./media/tableau-online-provisioning-tutorial/tutorial_general_03.png
active-directory Verifiable Credentials Configure Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-tenant.md
Previously updated : 06/16/2022 Last updated : 06/27/2022 # Customer intent: As an enterprise, we want to enable customers to manage information about themselves by using verifiable credentials.
The following diagram illustrates the Azure AD Verifiable Credentials architectu
[Azure Key Vault](../../key-vault/general/basic-concepts.md) is a cloud service that enables the secure storage and access of secrets and keys. The Verifiable Credentials service stores public and private keys in Azure Key Vault. These keys are used to sign and verify credentials.
-If you don't have an Azure Key Vault instance available, follow [these steps](/key-vault/general/quick-create-portal.md) to create a key vault using the Azure portal.
+If you don't have an Azure Key Vault instance available, follow [these steps](../../key-vault/general/quick-create-portal.md) to create a key vault using the Azure portal.
>[!NOTE]
>By default, the account that creates a vault is the only one with access. The Verifiable Credentials service needs access to the key vault. You must configure the key vault with an access policy that allows the account used during configuration to create and delete keys. The account used during configuration also requires permission to sign so that it can create the domain binding for Verifiable Credentials. If you use the same account while testing, modify the default policy to grant the account sign permission, in addition to the default permissions granted to vault creators.
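The access policy described above can be granted from the Azure CLI. This is a minimal sketch, assuming a vault named `<vault-name>` and that the signed-in user is the account used during configuration:

```azurecli-interactive
# Grant the configuration account the key permissions the setup needs,
# including sign for the domain binding (sketch; replace <vault-name>)
az keyvault set-policy --name <vault-name> \
  --upn $(az ad signed-in-user show --query userPrincipalName -o tsv) \
  --key-permissions get create delete sign
```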
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/whats-new.md
Previously updated : 06/24/2022 Last updated : 06/27/2022
This article lists the latest features, improvements, and changes in the Microso
## June 2022
-In June, we introduced a set of new preview features:
-- Web as a new, default, trust system that users' can choose when [onboarding](verifiable-credentials-configure-tenant.md#set-up-verifiable-credentials) a tenant. Web means your tenant uses [did:web](https://w3c-ccg.github.io/did-method-web/) as its trust system. ION is still available.-- [Quickstarts](how-to-use-quickstart.md) as a new way to create Managed Credentials. Managed Credentials no longer use of Azure Storage to store the Display & Rules JSON definitions. You need to migrate your Azure Storage based credentials to become Managed Credentials and we'll provide instructions shortly.-- Managed Credential [Quickstart for Verifiable Credentials of type VerifiedEmployee](how-to-use-quickstart-verifiedemployee.md) with directory based claims from your tenant.-- Updated documentation that describes the different ways to use the [Quickstarts](how-to-use-quickstart.md) and a [Rules and Display definition model](rules-and-display-definitions-model.md).
+- We are adding support for the [did:web](https://w3c-ccg.github.io/did-method-web/) method. Any new tenant that starts using the Verifiable Credentials Service after June 14, 2022 will have Web as the new default trust system when [onboarding](verifiable-credentials-configure-tenant.md#set-up-verifiable-credentials). VC Administrators can still choose to use ION when setting up a tenant. If you want to use did:web instead of ION, or vice versa, you'll need to [reconfigure your tenant](verifiable-credentials-faq.md?#how-do-i-reset-the-azure-ad-verifiable-credentials-service).
+- We are rolling out several features to improve the overall experience of creating verifiable credentials in the Entra Verified ID platform:
+ - Introducing Managed Credentials: verifiable credentials that no longer use Azure Storage to store the [display & rules JSON definitions](rules-and-display-definitions-model.md). Their display and rule definitions are different from earlier versions.
+ - Create Managed Credentials using the [new quickstart experience](how-to-use-quickstart.md).
+ - Administrators can create a Verified Employee Managed Credential using the [new quickstart](how-to-use-quickstart-verifiedemployee.md). The Verified Employee is a verifiable credential of type verifiedEmployee that's based on a predefined set of claims from your tenant's Azure Active Directory.
+
+>[!IMPORTANT]
+> You need to migrate your Azure Storage based credentials to become Managed Credentials. We'll soon provide migration instructions.
+
+- We made the following updates to our docs:
+ - (new) [Current supported open standards for Microsoft Entra Verified ID](verifiable-credentials-standards.md).
+ - (new) [How to create verifiable credentials for ID token hint](how-to-use-quickstart.md).
+ - (new) [How to create verifiable credentials for ID token](how-to-use-quickstart-idtoken.md).
+ - (new) [How to create verifiable credentials for self-asserted claims](how-to-use-quickstart-selfissued.md).
+ - (new) [Rules and Display definition model specification](rules-and-display-definitions-model.md).
+ - (new) [Creating an Azure AD tenant for development](how-to-create-a-free-developer-account.md).
## May 2022
advisor Advisor Reference Cost Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-cost-recommendations.md
We analyzed your VMware Cloud Simple usage over last 30 days and calculated rese
Learn more about [Subscription - VMwareCloudSimpleReservedCapacity (Consider VMware Cloud Simple reserved instance )](../cost-management-billing/reservations/reserved-instance-purchase-recommendations.md).
+## Subscription
+### Use Virtual Machines with Ephemeral OS Disk enabled to save cost and get better performance
+
+With Ephemeral OS Disk, customers get these benefits: savings on storage cost for the OS disk, lower read/write latency to the OS disk, and faster VM reimage operations that reset the OS (and temporary disk) to its original state. Ephemeral OS Disk is preferable for short-lived IaaS VMs or VMs with stateless workloads.
aks Api Server Vnet Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/api-server-vnet-integration.md
+
+ Title: API Server VNet Integration in Azure Kubernetes Service (AKS)
+description: Learn how to create an Azure Kubernetes Service (AKS) cluster with API Server VNet Integration
++ Last updated : 06/27/2022++++
+# Create an Azure Kubernetes Service cluster with API Server VNet Integration (PREVIEW)
+
+An Azure Kubernetes Service (AKS) cluster with API Server VNet Integration configured projects the API server endpoint directly into a delegated subnet in the VNet where AKS is deployed. This enables network communication between the API server and the cluster nodes without any required private link or tunnel. The API server will be available behind an Internal Load Balancer VIP in the delegated subnet, which the nodes will be configured to utilize. By using API Server VNet Integration, you can ensure network traffic between your API server and your node pools remains on the private network only.
++++
+## API server connectivity
+
+The control plane or API server is in an Azure Kubernetes Service (AKS)-managed Azure subscription. A customer's cluster or node pool is in the customer's subscription. The server and the virtual machines that make up the cluster nodes can communicate with each other through the API server VIP and pod IPs that are projected into the delegated subnet.
+
+At this time, API Server VNet Integration is only supported for private clusters. Unlike in standard public clusters, the agent nodes communicate with the API server directly through the private IP address of the internal load balancer VIP, without using DNS. External clients that need to communicate with the cluster should follow the same private DNS setup methodology as standard [private clusters](private-clusters.md).
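One way to observe this behavior (a sketch, assuming kubeconfig access from inside the private network) is to inspect the endpoints of the built-in `kubernetes` service, which should list addresses from the delegated subnet rather than a public IP:

```bash
# Show the API server addresses the cluster nodes are configured to use
kubectl get endpoints kubernetes -n default
```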
+
+## Region availability
+
+API Server VNet Integration is available in the following regions at this time:
+
+- canary regions
+- eastus2
+- northcentralus
+- westcentralus
+- westus2
+
+## Prerequisites
+
+* Azure CLI with aks-preview extension 0.5.67 or later.
+* If using ARM or the REST API, the AKS API version must be 2022-04-02-preview or later.
+
+### Install the aks-preview CLI extension
+
+```azurecli-interactive
+# Install the aks-preview extension
+az extension add --name aks-preview
+
+# Update the extension to make sure you have the latest version installed
+az extension update --name aks-preview
+```
+
+### Register the `EnableAPIServerVnetIntegrationPreview` preview feature
+
+To create an AKS cluster with API Server VNet Integration, you must enable the `EnableAPIServerVnetIntegrationPreview` feature flag on your subscription.
+
+Register the `EnableAPIServerVnetIntegrationPreview` feature flag by using the `az feature register` command, as shown in the following example:
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "EnableAPIServerVnetIntegrationPreview"
+```
+
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the `az feature list` command:
+
+```azurecli-interactive
+az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/EnableAPIServerVnetIntegrationPreview')].{Name:name,State:properties.state}"
+```
+
+When the feature has been registered, refresh the registration of the *Microsoft.ContainerService* resource provider by using the `az provider register` command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
+
+## Create an AKS cluster with API Server VNet Integration using Managed VNet
+
+AKS clusters with API Server VNet Integration can be configured in either managed VNet or bring-your-own VNet mode.
+
+### Create a resource group
+
+Create a resource group or use an existing resource group for your AKS cluster.
+
+```azurecli-interactive
+az group create -l westus2 -n <resource-group>
+```
+
+### Deploy the cluster
+
+```azurecli-interactive
+az aks create -n <cluster-name> \
+ -g <resource-group> \
+ -l <location> \
+ --network-plugin azure \
+ --enable-private-cluster \
+ --enable-apiserver-vnet-integration
+```
+
+Where `--enable-private-cluster` is a mandatory flag for a private cluster, and `--enable-apiserver-vnet-integration` configures API Server VNet integration for Managed VNet mode.
+
+## Create an AKS cluster with API Server VNet Integration using bring-your-own VNet
+
+When using bring-your-own VNet, an API server subnet must be created and delegated to `Microsoft.ContainerService/managedClusters`, which grants the AKS service permissions to inject the API server pods and internal load balancer into that subnet. The subnet can't be used for any other workloads, but it can be used for multiple AKS clusters located in the same virtual network. An AKS cluster requires 2 to 7 IP addresses depending on cluster scale. The minimum supported API server subnet size is a /28.
+
+Note that the cluster identity needs permissions on both the API server subnet and the node subnet. A lack of permissions on the API server subnet will cause a provisioning failure.
+
+> [!WARNING]
+> Running out of IP addresses may prevent API server scaling and cause an API server outage.
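The /28 minimum and the 2-7 address requirement can be sanity-checked with a quick calculation. This sketch assumes the standard Azure behavior of reserving 5 IP addresses in every subnet:

```bash
# Count usable addresses in the minimum /28 API server subnet
PREFIX=28
TOTAL=$(( 1 << (32 - PREFIX) ))   # 16 addresses in a /28
USABLE=$(( TOTAL - 5 ))           # Azure reserves 5 addresses per subnet
echo "$USABLE"                    # 11 usable, covering the 2-7 the API server needs
```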
+
+### Create a resource group
+
+Create a resource group or use an existing resource group for your AKS cluster.
+
+```azurecli-interactive
+az group create -l <location> -n <resource-group>
+```
+
+### Create a virtual network
+
+```azurecli-interactive
+# Create the virtual network
+az network vnet create -n <vnet-name> \
+ -l <location> \
+ --address-prefixes 172.19.0.0/16
+
+# Create an API server subnet
+az network vnet subnet create --vnet-name <vnet-name> \
+ --name <apiserver-subnet-name> \
+ --delegations Microsoft.ContainerService/managedClusters \
+ --address-prefixes 172.19.0.0/28
+
+# Create a cluster subnet
+az network vnet subnet create --vnet-name <vnet-name> \
+ --name <cluster-subnet-name> \
+ --address-prefixes 172.19.1.0/24
+```
+
+### Create a managed identity and give it permissions on the virtual network
+
+```azurecli-interactive
+# Create the identity
+az identity create -n <managed-identity-name> -l <location>
+
+# Assign Network Contributor to the API server subnet
+az role assignment create --scope <apiserver-subnet-resource-id> \
+ --role "Network Contributor" \
+ --assignee <managed-identity-client-id>
+
+# Assign Network Contributor to the cluster subnet
+az role assignment create --scope <cluster-subnet-resource-id> \
+ --role "Network Contributor" \
+ --assignee <managed-identity-client-id>
+```
+
+### Create the AKS cluster
+
+```azurecli-interactive
+az aks create -n <cluster-name> \
+ -g <resource-group> \
+ -l <location> \
+ --network-plugin azure \
+ --enable-private-cluster \
+ --enable-apiserver-vnet-integration \
+ --vnet-subnet-id <cluster-subnet-resource-id> \
+ --apiserver-subnet-id <apiserver-subnet-resource-id> \
+ --assign-identity <managed-identity-resource-id>
+```
+
+## Limitations
+* Existing AKS clusters cannot be converted to API Server VNet Integration clusters at this time.
+* Only [private clusters](private-clusters.md) are supported at this time.
+* [Private Link Service][private-link-service] will not work if deployed against the API Server injected addresses at this time, so the API server cannot be exposed to other virtual networks via private link. To access the API server from outside the cluster network, utilize either [VNet peering][virtual-network-peering] or [AKS run command][command-invoke].
+
+<!-- LINKS - internal -->
+[az-provider-register]: /cli/azure/provider#az_provider_register
+[az-feature-register]: /cli/azure/feature#az_feature_register
+[az-feature-list]: /cli/azure/feature#az_feature_list
+[az-extension-add]: /cli/azure/extension#az_extension_add
+[az-extension-update]: /cli/azure/extension#az_extension_update
+[private-link-service]: ../private-link/private-link-service-overview.md#limitations
+[private-endpoint-service]: ../private-link/private-endpoint-overview.md
+[virtual-network-peering]: ../virtual-network/virtual-network-peering-overview.md
+[azure-bastion]: ../bastion/tutorial-create-host-portal.md
+[express-route-or-vpn]: ../expressroute/expressroute-about-virtual-network-gateways.md
+[devops-agents]: /azure/devops/pipelines/agents/agents
+[availability-zones]: availability-zones.md
+[command-invoke]: command-invoke.md
+[container-registry-private-link]: ../container-registry/container-registry-private-link.md
+[virtual-networks-name-resolution]: ../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server
aks Limit Egress Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/limit-egress-traffic.md
description: Learn what ports and addresses are required to control egress traff
Previously updated : 06/15/2022 Last updated : 06/27/2022 #Customer intent: As an cluster operator, I want to restrict egress traffic for nodes to only access defined ports and addresses and improve cluster security.
Now an AKS cluster can be deployed into the existing virtual network. We'll also
![aks-deploy](media/limit-egress-traffic/aks-udr-fw.png)
-### Create a service principal with access to provision inside the existing virtual network
-
-A cluster identity (managed identity or service principal) is used by AKS to create cluster resources. A service principal that is passed at create time is used to create underlying AKS resources such as Storage resources, IPs, and Load Balancers used by AKS (you may also use a [managed identity](use-managed-identity.md) instead). If not granted the appropriate permissions below, you won't be able to provision the AKS Cluster.
-
-```azurecli
-# Create SP and Assign Permission to Virtual Network
-
-az ad sp create-for-rbac -n "${PREFIX}sp"
-```
-
-Now replace the `APPID` and `PASSWORD` below with the service principal appid and service principal password autogenerated by the previous command output. We'll reference the VNET resource ID to grant the permissions to the service principal so AKS can deploy resources into it.
-
-```azurecli
-APPID="<SERVICE_PRINCIPAL_APPID_GOES_HERE>"
-PASSWORD="<SERVICEPRINCIPAL_PASSWORD_GOES_HERE>"
-VNETID=$(az network vnet show -g $RG --name $VNET_NAME --query id -o tsv)
-
-# Assign SP Permission to VNET
-
-az role assignment create --assignee $APPID --scope $VNETID --role "Network Contributor"
-```
-
-You can check the detailed permissions that are required [here](kubernetes-service-principal.md#delegate-access-to-other-azure-resources).
-
-> [!NOTE]
-> If you're using the kubenet network plugin, you'll need to give the AKS service principal or managed identity permissions to the pre-created route table, since kubenet requires a route table to add neccesary routing rules.
-> ```azurecli-interactive
-> RTID=$(az network route-table show -g $RG -n $FWROUTE_TABLE_NAME --query id -o tsv)
-> az role assignment create --assignee $APPID --scope $RTID --role "Network Contributor"
-> ```
-
-### Deploy AKS
-
-Finally, the AKS cluster can be deployed into the existing subnet we've dedicated for the cluster. The target subnet to be deployed into is defined with the environment variable, `$SUBNETID`. We didn't define the `$SUBNETID` variable in the previous steps. To set the value for the subnet ID, you can use the following command:
+The target subnet to be deployed into is defined with the environment variable, `$SUBNETID`. We didn't define the `$SUBNETID` variable in the previous steps. To set the value for the subnet ID, you can use the following command:
```azurecli SUBNETID=$(az network vnet subnet show -g $RG --vnet-name $VNET_NAME --name $AKSSUBNET_NAME --query id -o tsv)
You'll define the outbound type to use the UDR that already exists on the subnet
```azurecli az aks create -g $RG -n $AKSNAME -l $LOC \
- --node-count 3 --generate-ssh-keys \
+ --node-count 3 \
--network-plugin $PLUGIN \ --outbound-type userDefinedRouting \
- --service-cidr 10.41.0.0/16 \
- --dns-service-ip 10.41.0.10 \
- --docker-bridge-address 172.17.0.1/16 \
--vnet-subnet-id $SUBNETID \
- --service-principal $APPID \
- --client-secret $PASSWORD \
--api-server-authorized-ip-ranges $FWPUBLIC_IP ```
+> [!NOTE]
+> For creating and using your own VNet and route table where the resources are outside of the worker node resource group, the CLI adds the role assignment automatically. If you're using an ARM template or another client, you need to use the principal ID of the cluster managed identity to perform a [role assignment][add role to identity].
+>
+> If you're not using the CLI, but using your own VNet or route table that is outside of the worker node resource group, we recommend using a [user-assigned control plane identity][Bring your own control plane managed identity]. With a system-assigned control plane identity, the identity ID isn't available before the cluster is created, which delays the role assignment from taking effect.
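For the manual case called out in the note, the assignment can be sketched as follows, assuming the `$RG`, `$AKSNAME`, and `$FWROUTE_TABLE_NAME` variables defined earlier in this article:

```azurecli
# Look up the control plane identity's principal ID (system-assigned shown)
PRINCIPAL_ID=$(az aks show -g $RG -n $AKSNAME --query identity.principalId -o tsv)

# Scope the assignment to the pre-created route table
RTID=$(az network route-table show -g $RG -n $FWROUTE_TABLE_NAME --query id -o tsv)

# Grant Network Contributor so the cluster can manage routes
az role assignment create --assignee $PRINCIPAL_ID --scope $RTID --role "Network Contributor"
```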
++
### Enable developer access to the API server

If you used authorized IP ranges for the cluster in the previous step, you must add your developer tooling IP addresses to the AKS cluster list of approved IP ranges in order to access the API server from there. Another option is to configure a jumpbox with the needed tooling inside a separate subnet in the Firewall's virtual network.
If you want to restrict how pods communicate between themselves and East-West tr
[aks-support-policies]: support-policies.md [aks-faq]: faq.md [aks-private-clusters]: private-clusters.md
+[add role to identity]: use-managed-identity.md#add-role-assignment-for-control-plane-identity
+[Bring your own control plane managed identity]: use-managed-identity.md#bring-your-own-control-plane-managed-identity
aks Start Stop Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/start-stop-cluster.md
You can use the [Stop-AzAksCluster][stop-azakscluster] cmdlet to stop a running
Stop-AzAksCluster -Name myAKSCluster -ResourceGroupName myResourceGroup ```
-You can verify your cluster is stopped using the [Get-AzAksCluster][get-azakscluster] cmdlet and confirming the `ProvisioningState` shows as `Stopped` as shown in the following output:
+You can verify your cluster is stopped using the [Get-AzAksCluster][get-azakscluster] cmdlet and confirming the `ProvisioningState` shows as `Succeeded` as shown in the following output:
```Output
-ProvisioningState : Stopped
+ProvisioningState : Succeeded
MaxAgentPools : 100 KubernetesVersion : 1.20.7 ...
aks Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/troubleshooting.md
To help diagnose the issue run `az aks show -g myResourceGroup -n myAKSCluster -
* If cluster is actively upgrading, wait until the operation finishes. If it succeeded, retry the previously failed operation again. * If cluster has failed upgrade, follow steps outlined in previous section.
+## I'm receiving an error due to "PodDrainFailure"
+
+This error occurs when the requested operation is blocked by a PodDisruptionBudget (PDB) set on the deployments within the cluster. To learn more about how PodDisruptionBudgets work, see [the official Kubernetes example](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#pdb-example).
+
+You can use one of the following commands to find the PDBs applied to your cluster:
+
+```bash
+kubectl get poddisruptionbudgets --all-namespaces
+```
or
```bash
kubectl get poddisruptionbudgets -n {namespace of failed pod}
```
Check the label selector to see the exact pods that are causing this failure.
+
+There are a few ways this error can occur:
1. Your PDB may be too restrictive, such as having a high `minAvailable` pod count or a low `maxUnavailable` pod count. You can update the PDB to be less restrictive.
2. During an upgrade, the replacement pods may not become ready quickly enough. Investigate your pod readiness times to address this situation.
3. The deployed pods may not work with the new upgraded node version, causing pods to fail and availability to fall below the PDB.
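As an illustration of the first point, a hypothetical PDB that tolerates one unavailable replica is usually less disruptive to node drains than a high `minAvailable` value (names and labels below are placeholders):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: myapp-pdb          # hypothetical name
spec:
  maxUnavailable: 1        # allow one pod at a time to be drained
  selector:
    matchLabels:
      app: myapp           # hypothetical label
```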
+
+>[!NOTE]
+> If the failing pod is in the 'kube-system' namespace, please contact support. This namespace is managed by AKS.
+
+For more information about PodDisruptionBudgets, see the [official Kubernetes guide on configuring a PDB](https://kubernetes.io/docs/tasks/run-application/configure-pdb/).
+ ## Can I move my cluster to a different subscription or my subscription with my cluster to a new tenant? If you've moved your AKS cluster to a different subscription or the cluster's subscription to a new tenant, the cluster won't function because of missing cluster identity permissions. **AKS doesn't support moving clusters across subscriptions or tenants** because of this constraint.
aks Use Network Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-network-policies.md
description: Learn how to secure traffic that flows in and out of pods by using Kubernetes network policies in Azure Kubernetes Service (AKS) Previously updated : 03/29/2022 Last updated : 06/24/2022
First, let's create an AKS cluster that supports network policy.
> > The network policy feature can only be enabled when the cluster is created. You can't enable network policy on an existing AKS cluster.
-To use Azure Network Policy, you must use the [Azure CNI plug-in][azure-cni] and define your own virtual network and subnets. For more detailed information on how to plan out the required subnet ranges, see [configure advanced networking][use-advanced-networking]. Calico Network Policy could be used with either this same Azure CNI plug-in or with the Kubenet CNI plug-in.
+To use Azure Network Policy, you must use the [Azure CNI plug-in][azure-cni]. Calico Network Policy could be used with either this same Azure CNI plug-in or with the Kubenet CNI plug-in.
The following example script:
-* Creates a virtual network and subnet.
-* Creates an Azure Active Directory (Azure AD) service principal for use with the AKS cluster.
-* Assigns *Contributor* permissions for the AKS cluster service principal on the virtual network.
-* Creates an AKS cluster in the defined virtual network and enables network policy.
+* Creates an AKS cluster with system-assigned identity and enables network policy.
* The _Azure Network_ policy option is used. To use Calico as the network policy option instead, use the `--network-policy calico` parameter. Note: Calico could be used with either `--network-plugin azure` or `--network-plugin kubenet`.
-Note that instead of using a service principal, you can use a managed identity for permissions. For more information, see [Use managed identities](use-managed-identity.md).
+Instead of using a system-assigned identity, you can also use a user-assigned identity. For more information, see [Use managed identities](use-managed-identity.md).
-Provide your own secure *SP_PASSWORD*. You can replace the *RESOURCE_GROUP_NAME* and *CLUSTER_NAME* variables:
+### Create an AKS cluster for Azure network policies
+
+You can replace the *RESOURCE_GROUP_NAME* and *CLUSTER_NAME* variables:
```azurecli-interactive RESOURCE_GROUP_NAME=myResourceGroup-NP CLUSTER_NAME=myAKSCluster LOCATION=canadaeast
-# Create a resource group
-az group create --name $RESOURCE_GROUP_NAME --location $LOCATION
-
-# Create a virtual network and subnet
-az network vnet create \
- --resource-group $RESOURCE_GROUP_NAME \
- --name myVnet \
- --address-prefixes 10.0.0.0/8 \
- --subnet-name myAKSSubnet \
- --subnet-prefix 10.240.0.0/16
-
-# Create a service principal and read in the application ID
-SP=$(az ad sp create-for-rbac --output json)
-SP_ID=$(echo $SP | jq -r .appId)
-SP_PASSWORD=$(echo $SP | jq -r .password)
-
-# Wait 15 seconds to make sure that service principal has propagated
-echo "Waiting for service principal to propagate..."
-sleep 15
-
-# Get the virtual network resource ID
-VNET_ID=$(az network vnet show --resource-group $RESOURCE_GROUP_NAME --name myVnet --query id -o tsv)
-
-# Assign the service principal Contributor permissions to the virtual network resource
-az role assignment create --assignee $SP_ID --scope $VNET_ID --role Contributor
-
-# Get the virtual network subnet resource ID
-SUBNET_ID=$(az network vnet subnet show --resource-group $RESOURCE_GROUP_NAME --vnet-name myVnet --name myAKSSubnet --query id -o tsv)
-```
-
-### Create an AKS cluster for Azure network policies
-
-Create the AKS cluster and specify the virtual network, service principal information, and *azure* for the network plugin and network policy.
+Create the AKS cluster and specify *azure* for the network plugin and network policy.
```azurecli az aks create \ --resource-group $RESOURCE_GROUP_NAME \ --name $CLUSTER_NAME \ --node-count 1 \
- --generate-ssh-keys \
- --service-cidr 10.0.0.0/16 \
- --dns-service-ip 10.0.0.10 \
- --docker-bridge-address 172.17.0.1/16 \
- --vnet-subnet-id $SUBNET_ID \
- --service-principal $SP_ID \
- --client-secret $SP_PASSWORD \
--network-plugin azure \ --network-policy azure ```
az aks get-credentials --resource-group $RESOURCE_GROUP_NAME --name $CLUSTER_NAM
### Create an AKS cluster for Calico network policies
-Create the AKS cluster and specify the virtual network, service principal information, *azure* for the network plugin, and *calico* for the network policy. Using *calico* as the network policy enables Calico networking on both Linux and Windows node pools.
+Create the AKS cluster and specify *azure* for the network plugin, and *calico* for the network policy. Using *calico* as the network policy enables Calico networking on both Linux and Windows node pools.
If you plan on adding Windows node pools to your cluster, include the `windows-admin-username` and `windows-admin-password` parameters with values that meet the [Windows Server password requirements][windows-server-password]. To use Calico with Windows node pools, you also need to register the `Microsoft.ContainerService/EnableAKSWindowsCalico` feature flag.
az aks create \
--resource-group $RESOURCE_GROUP_NAME \ --name $CLUSTER_NAME \ --node-count 1 \
- --generate-ssh-keys \
- --service-cidr 10.0.0.0/16 \
- --dns-service-ip 10.0.0.10 \
- --docker-bridge-address 172.17.0.1/16 \
- --vnet-subnet-id $SUBNET_ID \
- --service-principal $SP_ID \
- --client-secret $SP_PASSWORD \
--windows-admin-username $WINDOWS_USERNAME \
- --vm-set-type VirtualMachineScaleSets \
- --kubernetes-version 1.20.2 \
--network-plugin azure \ --network-policy calico ```
aks Virtual Nodes Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/virtual-nodes-cli.md
description: Learn how to use the Azure CLI to create an Azure Kubernetes Services (AKS) cluster that uses virtual nodes to run pods. Previously updated : 03/16/2021 Last updated : 06/25/2022
az network vnet subnet create \
--address-prefixes 10.241.0.0/16 ```
-## Create a service principal or use a managed identity
+## Create an AKS cluster with managed identity
-To allow an AKS cluster to interact with other Azure resources, a cluster identity is used. This cluster identity can be automatically created by the Azure CLI or portal, or you can pre-create one and assign additional permissions. By default, this cluster identity is a managed identity. For more information, see [Use managed identities](use-managed-identity.md). You can also use a service principal as your cluster identity. The following steps show you how to manually create and assign the service principal to your cluster.
-
-Create a service principal using the [az ad sp create-for-rbac][az-ad-sp-create-for-rbac] command.
-
-```azurecli-interactive
-az ad sp create-for-rbac
-```
-
-The output is similar to the following example:
-
-```output
-{
- "appId": "bef76eb3-d743-4a97-9534-03e9388811fc",
- "displayName": "azure-cli-2018-11-21-18-42-00",
- "name": "http://azure-cli-2018-11-21-18-42-00",
- "password": "1d257915-8714-4ce7-a7fb-0e5a5411df7f",
- "tenant": "72f988bf-86f1-41af-91ab-2d7cd011db48"
-}
-```
-
-Make a note of the *appId* and *password*. These values are used in the following steps.
-
-## Assign permissions to the virtual network
-
-To allow your cluster to use and manage the virtual network, you must grant the AKS service principal the correct rights to use the network resources.
-
-First, get the virtual network resource ID using [az network vnet show][az-network-vnet-show]:
-
-```azurecli-interactive
-az network vnet show --resource-group myResourceGroup --name myVnet --query id -o tsv
-```
-
-To grant the correct access for the AKS cluster to use the virtual network, create a role assignment using the [az role assignment create][az-role-assignment-create] command. Replace `<appId`> and `<vnetId>` with the values gathered in the previous two steps.
-
-```azurecli-interactive
-az role assignment create --assignee <appId> --scope <vnetId> --role Contributor
-```
-
-## Create an AKS cluster
+Instead of using a system-assigned identity, you can also use a user-assigned identity. For more information, see [Use managed identities](use-managed-identity.md).
You deploy an AKS cluster into the AKS subnet created in a previous step. Get the ID of this subnet using [az network vnet subnet show][az-network-vnet-subnet-show]:
You deploy an AKS cluster into the AKS subnet created in a previous step. Get th
az network vnet subnet show --resource-group myResourceGroup --vnet-name myVnet --name myAKSSubnet --query id -o tsv ```
-Use the [az aks create][az-aks-create] command to create an AKS cluster. The following example creates a cluster named *myAKSCluster* with one node. Replace `<subnetId>` with the ID obtained in the previous step, and then `<appId>` and `<password>` with the values gathered in the previous section.
+Use the [az aks create][az-aks-create] command to create an AKS cluster. The following example creates a cluster named *myAKSCluster* with one node. Replace `<subnetId>` with the ID obtained in the previous step.
```azurecli-interactive
az aks create \
    --name myAKSCluster \
    --node-count 1 \
    --network-plugin azure \
- --service-cidr 10.0.0.0/16 \
- --dns-service-ip 10.0.0.10 \
- --docker-bridge-address 172.17.0.1/16 \
--vnet-subnet-id <subnetId> \
- --service-principal <appId> \
- --client-secret <password>
```

After several minutes, the command completes and returns JSON-formatted information about the cluster.
az network profile delete --id $NETWORK_PROFILE_ID -y
SAL_ID=$(az network vnet subnet show --resource-group $RES_GROUP --vnet-name $AKS_VNET --name $AKS_SUBNET --query id --output tsv)/providers/Microsoft.ContainerInstance/serviceAssociationLinks/default

# Delete the service association link for the subnet
-az resource delete --ids $SAL_ID --api-version {api-version}
+az resource delete --ids $SAL_ID --api-version 2021-10-01
# Delete the subnet delegation to Azure Container Instances
az network vnet subnet update --resource-group $RES_GROUP --vnet-name $AKS_VNET --name $AKS_SUBNET --remove delegations
Virtual nodes are often one component of a scaling solution in AKS. For more inf
[az-provider-list]: /cli/azure/provider#az_provider_list
[az-provider-register]: /cli/azure/provider#az_provider_register
[virtual-nodes-aks]: virtual-nodes.md
-[virtual-nodes-networking-aci]: ../container-instances/container-instances-virtual-network-concepts.md
+[virtual-nodes-networking-aci]: ../container-instances/container-instances-virtual-network-concepts.md
app-service Deploy Zip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-zip.md
The table below shows the available query parameters, their allowed values, and
| Key | Allowed values | Description | Required | Type |
|-|-|-|-|-|
-| `type` | `war`\|`jar`\|`ear`\|`lib`\|`startup`\|`static`\|`zip` | The type of the artifact being deployed, this sets the default target path and informs the web app how the deployment should be handled. <br/> - `type=zip`: Deploy a ZIP package by unzipping the content to `/home/site/wwwroot`. `path` parameter is optional. <br/> - `type=war`: Deploy a WAR package. By default, the WAR package is deployed to `/home/site/wwwroot/app.war`. The target path can be specified with `path`. <br/> - `type=jar`: Deploy a JAR package to `/home/site/wwwroot/app.jar`. The `path` parameter is ignored <br/> - `type=ear`: Deploy an EAR package to `/home/site/wwwroot/app.ear`. The `path` parameter is ignored <br/> - `type=lib`: Deploy a JAR library file. By default, the file is deployed to `/home/site/libs`. The target path can be specified with `path`. <br/> - `type=static`: Deploy a static file (e.g. a script). By default, the file is deployed to `/home/site/scripts`. The target path can be specified with `path`. <br/> - `type=startup`: Deploy a script that App Service automatically uses as the startup script for your app. By default, the script is deployed to `D:\home\site\scripts\<name-of-source>` for Windows and `home/site/wwwroot/startup.sh` for Linux. The target path can be specified with `path`. | Yes | String |
+| `type` | `war`\|`jar`\|`ear`\|`lib`\|`startup`\|`static`\|`zip` | The type of the artifact being deployed, this sets the default target path and informs the web app how the deployment should be handled. <br/> - `type=zip`: Deploy a ZIP package by unzipping the content to `/home/site/wwwroot`. `path` parameter is optional. <br/> - `type=war`: Deploy a WAR package. By default, the WAR package is deployed to `/home/site/wwwroot/app.war`. The target path can be specified with `path`. <br/> - `type=jar`: Deploy a JAR package to `/home/site/wwwroot/app.jar`. The `path` parameter is ignored <br/> - `type=ear`: Deploy an EAR package to `/home/site/wwwroot/app.ear`. The `path` parameter is ignored <br/> - `type=lib`: Deploy a JAR library file. By default, the file is deployed to `/home/site/libs`. The target path can be specified with `path`. <br/> - `type=static`: Deploy a static file (e.g. a script). By default, the file is deployed to `/home/site/wwwroot`. <br/> - `type=startup`: Deploy a script that App Service automatically uses as the startup script for your app. By default, the script is deployed to `D:\home\site\scripts\<name-of-source>` for Windows and `home/site/wwwroot/startup.sh` for Linux. The target path can be specified with `path`. | Yes | String |
| `restart` | `true`\|`false` | By default, the API restarts the app following the deployment operation (`restart=true`). To deploy multiple artifacts, prevent restarts on all but the final deployment by setting `restart=false`. | No | Boolean |
| `clean` | `true`\|`false` | Specifies whether to clean (delete) the target deployment before deploying the artifact there. | No | Boolean |
| `ignorestack` | `true`\|`false` | The publish API uses the `WEBSITE_STACK` environment variable to choose safe defaults depending on your site's language stack. Setting this parameter to `false` disables any language-specific defaults. | No | Boolean |
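As a sketch of how these parameters combine, the publish API takes `type` and the optional keys as query parameters on the site's Kudu endpoint. The app name `myApp` below is hypothetical; a real deployment would POST the artifact to this URL with deployment credentials (for example via curl or `az webapp deploy`):

```shell
# Build the publish API URL for a ZIP deployment that suppresses the
# automatic restart (useful when more artifacts follow). The app name
# is hypothetical; no request is actually sent here.
APP_NAME="myApp"
DEPLOY_URL="https://${APP_NAME}.scm.azurewebsites.net/api/publish?type=zip&restart=false"
echo "$DEPLOY_URL"
```

A final call with the default `restart=true` would then restart the app once, after the last artifact is deployed.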
app-service App Gateway With Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/networking/app-gateway-with-service-endpoints.md
description: Describes how Application Gateway integrates with Azure App Service
documentationcenter: ''
- editor: ''
ms.assetid: 073eb49c-efa1-4760-9f0c-1fecd5c251cc
ms.devlang: azurecli
# Application Gateway integration
-There are three variations of App Service that require slightly different configuration of the integration with Azure Application Gateway. The variations include regular App Service - also known as multi-tenant, Internal Load Balancer (ILB) App Service Environment (ASE) and External ASE. This article will walk through how to configure it with App Service (multi-tenant) using service endpoint to secure traffic. The article will also discuss considerations around using private endpoint and integrating with ILB, and External ASE. Finally the article has considerations on scm/kudu site.
+There are three variations of App Service that require slightly different configuration of the integration with Azure Application Gateway. The variations include regular App Service - also known as multi-tenant, Internal Load Balancer (ILB) App Service Environment and External App Service Environment. This article will walk through how to configure it with App Service (multi-tenant) using service endpoint to secure traffic. The article will also discuss considerations around using private endpoint and integrating with ILB, and External App Service Environment. Finally the article has considerations on scm/kudu site.
## Integration with App Service (multi-tenant)

App Service (multi-tenant) has a public internet facing endpoint. Using [service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md) you can allow traffic only from a specific subnet within an Azure Virtual Network and block everything else. In the following scenario, we'll use this functionality to ensure that an App Service instance can only receive traffic from a specific Application Gateway instance.
As an alternative to service endpoint, you can use private endpoint to secure tr
:::image type="content" source="./media/app-gateway-with-service-endpoints/private-endpoint-appgw.png" alt-text="Diagram shows the traffic flowing to an Application Gateway in an Azure Virtual Network and flowing from there through a private endpoint to instances of apps in App Service.":::
+Application Gateway will cache the DNS lookup results, so if you use FQDNs and rely on DNS lookup to get the private IP address, then you may need to restart the Application Gateway if the DNS update or link to Azure private DNS zone was done after configuring the backend pool. To restart the Application Gateway, you must start and stop the instance. You can do this with Azure CLI:
+
+```azurecli-interactive
+az network application-gateway stop --resource-group myRG --name myAppGw
+az network application-gateway start --resource-group myRG --name myAppGw
+```
+
## Considerations for ILB ASE
-ILB ASE isn't exposed to the internet and traffic between the instance and an Application Gateway is therefore already isolated to the Virtual Network. The following [how-to guide](../environment/integrate-with-application-gateway.md) configures an ILB ASE and integrates it with an Application Gateway using Azure portal.
+ILB App Service Environment isn't exposed to the internet and traffic between the instance and an Application Gateway is therefore already isolated to the Virtual Network. The following [how-to guide](../environment/integrate-with-application-gateway.md) configures an ILB App Service Environment and integrates it with an Application Gateway using Azure portal.
-If you want to ensure that only traffic from the Application Gateway subnet is reaching the ASE, you can configure a Network security group (NSG) which affect all web apps in the ASE. For the NSG, you are able to specify the subnet IP range and optionally the ports (80/443). Make sure you don't override the [required NSG rules](../environment/network-info.md#network-security-groups) for ASE to function correctly.
+If you want to ensure that only traffic from the Application Gateway subnet is reaching the App Service Environment, you can configure a Network security group (NSG) which affect all web apps in the App Service Environment. For the NSG, you are able to specify the subnet IP range and optionally the ports (80/443). Make sure you don't override the [required NSG rules](../environment/network-info.md#network-security-groups) for App Service Environment to function correctly.
To isolate traffic to an individual web app you'll need to use ip-based access restrictions as service endpoints will not work for ASE. The IP address should be the private IP of the Application Gateway instance.

## Considerations for External ASE
-External ASE has a public facing load balancer like multi-tenant App Service. Service endpoints don't work for ASE, and that's why you'll have to use ip-based access restrictions using the public IP of the Application Gateway instance. To create an External ASE using the Azure portal, you can follow this [Quickstart](../environment/create-external-ase.md)
+External App Service Environment has a public facing load balancer like multi-tenant App Service. Service endpoints don't work for App Service Environment, and that's why you'll have to use ip-based access restrictions using the public IP of the Application Gateway instance. To create an External App Service Environment using the Azure portal, you can follow this [Quickstart](../environment/create-external-ase.md)
[template-app-gateway-app-service-complete]: https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/web-app-with-app-gateway-v2/ "Azure Resource Manager template for complete scenario"
app-service Overview Vnet Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-vnet-integration.md
If the virtual network is in a different subscription than the app, you must ens
You can control what traffic goes through the virtual network integration. There are three types of routing to consider when you configure regional virtual network integration. [Application routing](#application-routing) defines what traffic is routed from your app and into the virtual network. [Configuration routing](#configuration-routing) affects operations that happen before or during startup of your app. Examples are container image pull and app settings with Key Vault reference. [Network routing](#network-routing) is the ability to handle how both app and configuration traffic are routed from your virtual network and out.
-By default, only private traffic (also known as [RFC1918](https://datatracker.ietf.org/doc/html/rfc1918#section-3) traffic) sent from your app is routed through the virtual network integration. Unless you configure application routing or configuration routing options, all other traffic will not be sent through the virtual network integration. Traffic is only subject to [network routing](#network-routing) if it is sent through the virtual network integration.
+Through application routing or configuration routing options, you can configure what traffic will be sent through the virtual network integration. Traffic is only subject to [network routing](#network-routing) if it is sent through the virtual network integration.
#### Application routing
-Application routing applies to traffic that is sent from your app after it has been started. See [configuration routing](#configuration-routing) for traffic during start up. When you configure application routing, you can either route all traffic or only private traffic into your virtual network. You configure this behavior through the **Route All** setting. If **Route All** is disabled, your app only routes private traffic into your virtual network. If you want to route all your outbound app traffic into your virtual network, make sure that **Route All** is enabled.
+Application routing applies to traffic that is sent from your app after it has been started. See [configuration routing](#configuration-routing) for traffic during start up. When you configure application routing, you can either route all traffic or only private traffic (also known as [RFC1918](https://datatracker.ietf.org/doc/html/rfc1918#section-3) traffic) into your virtual network. You configure this behavior through the **Route All** setting. If **Route All** is disabled, your app only routes private traffic into your virtual network. If you want to route all your outbound app traffic into your virtual network, make sure that **Route All** is enabled.
> [!NOTE]
> * Only traffic configured in application or configuration routing is subject to the NSGs and UDRs that are applied to your integration subnet.
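The RFC1918 private ranges that the default routing behavior distinguishes can be illustrated with a small sketch; the helper below is illustrative only and not part of App Service or the Azure CLI:

```shell
# Classify a destination address the way the default (Route All disabled)
# behavior does: only RFC1918 private ranges (10.0.0.0/8, 172.16.0.0/12,
# 192.168.0.0/16) are routed through the virtual network integration.
is_private() {
  case "$1" in
    10.*|192.168.*|172.1[6-9].*|172.2[0-9].*|172.3[0-1].*) echo private ;;
    *) echo public ;;
  esac
}
is_private 10.0.0.5   # private: routed into the virtual network
is_private 8.8.8.8    # public: bypasses the integration unless Route All is enabled
```

With **Route All** enabled, both addresses above would be sent through the integration and become subject to network routing.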
app-service Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-python.md
code .
|:-|--:|
| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-visual-studio-code-1.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-1-240-px.png" alt-text="A Screenshot of the Azure Tools icon in the left toolbar of VS Code." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-1.png"::: |
| [!INCLUDE [Create app service step 2](<./includes/quickstart-python/create-app-service-visual-studio-code-2.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-2-240-px.png" alt-text="A screenshot of the App Service section of Azure Tools extension and the context menu used to create a new web app." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-2.png"::: |
-| [!INCLUDE [Create app service step 3](<./includes/quickstart-python/create-app-service-visual-studio-code-3.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-3-240-px.png" alt-text="A screenshot of the dialog box in VS Code used to select the folder to deploy for the new web app." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-3.png"::: |
| [!INCLUDE [Create app service step 4](<./includes/quickstart-python/create-app-service-visual-studio-code-4.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-4-240-px.png" alt-text="A screenshot of the dialog box in VS Code used to select Create a new Web App." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-4.png"::: |
| [!INCLUDE [Create app service step 5](<./includes/quickstart-python/create-app-service-visual-studio-code-5.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-5-240-px.png" alt-text="A screenshot of the dialog box in VS Code used to enter the globally unique name for the new web app." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-5.png"::: |
| [!INCLUDE [Create app service step 6](<./includes/quickstart-python/create-app-service-visual-studio-code-6.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-6-240-px.png" alt-text="A screenshot of the dialog box in VS Code used to select the runtime stack for the new web app." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-6.png"::: |
application-gateway Application Gateway Key Vault Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-key-vault-common-errors.md
This article helps you understand the details of the error codes and the steps t
## List of error codes and their details
-The following sections describe the various errors you might encounter. You can verify if your gateway has any such problem by visting [**Azure Advisor**](./key-vault-certs.md#investigating-and-resolving-key-vault-errors) for your account, and use this troubleshooting article to fix the problem. We recommend configuring Azure Advisor alerts to stay informed when a key vault problem is detected for your gateway.
+The following sections describe the various errors you might encounter. You can verify if your gateway has any such problem by visiting [**Azure Advisor**](./key-vault-certs.md#investigating-and-resolving-key-vault-errors) for your account, and use this troubleshooting article to fix the problem. We recommend configuring Azure Advisor alerts to stay informed when a key vault problem is detected for your gateway.
> [!NOTE]
> Azure Application Gateway generates logs for key vault diagnostics every four hours. If the diagnostic continues to show the error after you have fixed the configuration, you might have to wait for the logs to be refreshed.
The following sections describe the various errors you might encounter. You can
1. Under **Secret Management Operations**, select the **Get** permission.
1. Select **Save**.

For more information, see [Assign a Key Vault access policy by using the Azure portal](../key-vault/general/assign-access-policy-portal.md).
For more information, see [Azure role-based access control in Key Vault](../key-
On the other hand, if a certificate object is permanently deleted, you will need to create a new certificate and update Application Gateway with the new certificate details. When you're configuring through the Azure CLI or Azure PowerShell, use a secret identifier URI without a version. This choice allows instances to retrieve a renewed version of the certificate, if it exists.

[comment]: # (Error Code 4)

### Error code: UserAssignedManagedIdentityNotFound
applied-ai-services Get Started Sdk Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/get-started-sdk-rest-api.md
# Get started with Form Recognizer client library SDKs or REST API
-Get started with Azure Form Recognizer using the programming language of your choice. Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine learning to extract key-value pairs, text, and tables from your documents. You can easily call Form Recognizer models by integrating our client library SDks into your workflows and applications. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
+Get started with Azure Form Recognizer using the programming language of your choice. Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine learning to extract key-value pairs, text, and tables from your documents. You can easily call Form Recognizer models by integrating our client library SDKs into your workflows and applications. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
To learn more about Form Recognizer features and development options, visit our [Overview](../overview.md#form-recognizer-features-and-development-options) page.
applied-ai-services Try V3 Csharp Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-csharp-sdk.md
recommendations: false
[Reference documentation](/dotnet/api/azure.ai.formrecognizer?view=azure-dotnet-preview&preserve-view=true) | [Library Source Code](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.FormRecognizer_4.0.0-beta.4/sdk/formrecognizer/Azure.AI.FormRecognizer) | [Package (NuGet)](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0-beta.4) | [Samples](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0-beta.4/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/README.md)
-Get started with Azure Form Recognizer using the C# programming language. Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine learning to extract key-value pairs, text, and tables from your documents. You can easily call Form Recognizer models by integrating our client library SDks into your workflows and applications. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
+Get started with Azure Form Recognizer using the C# programming language. Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine learning to extract key-value pairs, text, and tables from your documents. You can easily call Form Recognizer models by integrating our client library SDKs into your workflows and applications. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
To learn more about Form Recognizer features and development options, visit our [Overview](../overview.md#form-recognizer-features-and-development-options) page.
-In this quickstart, you'll use following features to analyze and extract data and values from forms and documents:
+In this quickstart, you'll use the following features to analyze and extract data and values from forms and documents:
* [🆕 **General document model**](#general-document-model)—Analyze and extract text, tables, structure, key-value pairs, and named entities.
-* [**Layout model**](#layout-model)—Analyze and extract tables, lines, words, and selection marks like radio buttons and check boxes in forms documents, without the need to train a model.
+* [**Layout model**](#layout-model)—Analyze and extract tables, lines, words, and selection marks like radio buttons and check boxes in documents, without the need to train a model.
* [**Prebuilt model**](#prebuilt-model)—Analyze and extract common fields from specific document types using a prebuilt model.
In this quickstart, you'll use following features to analyze and extract data an
* You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.

> [!TIP]
-> Create a Cognitive Services resource if you plan to access multiple cognitive services under a single endpoint/key. For Form Recognizer access only, create a Form Recognizer resource. Please note that you'lll need a single-service resource if you intend to use [Azure Active Directory authentication](../../../active-directory/authentication/overview-authentication.md).
+> Create a Cognitive Services resource if you plan to access multiple cognitive services under a single endpoint/key. For Form Recognizer access only, create a Form Recognizer resource. Please note that you'll need a single-service resource if you intend to use [Azure Active Directory authentication](../../../active-directory/authentication/overview-authentication.md).
* After your resource deploys, select **Go to resource**. You need the key and endpoint from the resource you create to connect your application to the Form Recognizer API. You'll paste your key and endpoint into the code below later in the quickstart:
applied-ai-services Try V3 Java Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-java-sdk.md
recommendations: false
[Reference documentation](/jav)
-Get started with Azure Form Recognizer using the Java programming language. Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine learning to extract key-value pairs, text, and tables from your documents. You can easily call Form Recognizer models by integrating our client library SDks into your workflows and applications. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
+Get started with Azure Form Recognizer using the Java programming language. Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine learning to extract key-value pairs, text, and tables from your documents. You can easily call Form Recognizer models by integrating our client library SDKs into your workflows and applications. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
To learn more about Form Recognizer features and development options, visit our [Overview](../overview.md#form-recognizer-features-and-development-options) page.
-In this quickstart you'll use following features to analyze and extract data and values from forms and documents:
+In this quickstart you'll use the following features to analyze and extract data and values from forms and documents:
* [🆕 **General document**](#general-document-model)—Analyze and extract text, tables, structure, key-value pairs, and named entities.
-* [**Layout**](#layout-model)—Analyze and extract tables, lines, words, and selection marks like radio buttons and check boxes in forms documents, without the need to train a model.
+* [**Layout**](#layout-model)—Analyze and extract tables, lines, words, and selection marks like radio buttons and check boxes in documents, without the need to train a model.
* [**Prebuilt Invoice**](#prebuilt-model)—Analyze and extract common fields from specific document types using a pre-trained model.
In this quickstart you'll use following features to analyze and extract data and
* A Cognitive Services or Form Recognizer resource. Once you have your Azure subscription, create a [single-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [multi-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) Form Recognizer resource in the Azure portal to get your key and endpoint. You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.

> [!TIP]
- > Create a Cognitive Services resource if you plan to access multiple cognitive services under a single endpoint/key. For Form Recognizer access only, create a Form Recognizer resource. Please note that you'lll need a single-service resource if you intend to use [Azure Active Directory authentication](../../../active-directory/authentication/overview-authentication.md).
+ > Create a Cognitive Services resource if you plan to access multiple cognitive services under a single endpoint/key. For Form Recognizer access only, create a Form Recognizer resource. Please note that you'll need a single-service resource if you intend to use [Azure Active Directory authentication](../../../active-directory/authentication/overview-authentication.md).
* After your resource deploys, select **Go to resource**. You need the key and endpoint from the resource you create to connect your application to the Form Recognizer API. Later, you'll paste your key and endpoint into the code below:
applied-ai-services Try V3 Javascript Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-javascript-sdk.md
Get started with Azure Form Recognizer using the JavaScript programming language
To learn more about Form Recognizer features and development options, visit our [Overview](../overview.md#form-recognizer-features-and-development-options) page.
-In this quickstart you'll use following features to analyze and extract data and values from forms and documents:
+In this quickstart you'll use the following features to analyze and extract data and values from forms and documents:
* [🆕 **General document**](#general-document-model)—Analyze and extract key-value pairs, selection marks, and entities from documents.
-* [**Layout**](#layout-model)—Analyze and extract tables, lines, words, and selection marks like radio buttons and check boxes in forms documents, without the need to train a model.
+* [**Layout**](#layout-model)—Analyze and extract tables, lines, words, and selection marks like radio buttons and check boxes in documents, without the need to train a model.
* [**Prebuilt Invoice**](#prebuilt-model)—Analyze and extract common fields from specific document types using a pre-trained invoice model.
In this quickstart you'll use following features to analyze and extract data and
* A Cognitive Services or Form Recognizer resource. Once you have your Azure subscription, create a [single-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [multi-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) Form Recognizer resource in the Azure portal to get your key and endpoint. You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.

> [!TIP]
- > Create a Cognitive Services resource if you plan to access multiple cognitive services under a single endpoint/key. For Form Recognizer access only, create a Form Recognizer resource. Please note that you'lll need a single-service resource if you intend to use [Azure Active Directory authentication](../../../active-directory/authentication/overview-authentication.md).
+ > Create a Cognitive Services resource if you plan to access multiple cognitive services under a single endpoint/key. For Form Recognizer access only, create a Form Recognizer resource. Please note that you'll need a single-service resource if you intend to use [Azure Active Directory authentication](../../../active-directory/authentication/overview-authentication.md).
* After your resource deploys, select **Go to resource**. You need the key and endpoint from the resource you create to connect your application to the Form Recognizer API. You'll paste your key and endpoint into the code below later in the quickstart:
applied-ai-services Try V3 Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-python-sdk.md
Previously updated : 06/23/2022
Last updated : 06/24/2022
recommendations: false
recommendations: false
[Reference documentation](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer?view=azure-python-preview&preserve-view=true) | [Library source code](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-formrecognizer_3.2.0b5/sdk/formrecognizer/azure-ai-formrecognizer) | [Package (PyPi)](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b5/) | [Samples](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-formrecognizer_3.2.0b5/sdk/formrecognizer/azure-ai-formrecognizer/samples)
-Get started with Azure Form Recognizer using the Python programming language. Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine learning to extract key-value pairs, text, and tables from your documents. You can easily call Form Recognizer models by integrating our client library SDks into your workflows and applications. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
+Get started with Azure Form Recognizer using the Python programming language. Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine learning to extract key-value pairs, text, and tables from your documents. You can easily call Form Recognizer models by integrating our client library SDKs into your workflows and applications. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
To learn more about Form Recognizer features and development options, visit our [Overview](../overview.md#form-recognizer-features-and-development-options) page.
-In this quickstart you'll use following features to analyze and extract data and values from forms and documents:
+In this quickstart you'll use the following features to analyze and extract data and values from forms and documents:
* [🆕 **General document**](#general-document-model)—Analyze and extract text, tables, structure, key-value pairs, and named entities.
-* [**Layout**](#layout-model)—Analyze and extract tables, lines, words, and selection marks like radio buttons and check boxes in forms documents, without the need to train a model.
+* [**Layout**](#layout-model)—Analyze and extract tables, lines, words, and selection marks like radio buttons and check boxes in documents, without the need to train a model.
* [**Prebuilt Invoice**](#prebuilt-model)—Analyze and extract common fields from specific document types using a pre-trained model.
In this quickstart you'll use following features to analyze and extract data and
* A Cognitive Services or Form Recognizer resource. Once you have your Azure subscription, create a [single-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [multi-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) Form Recognizer resource in the Azure portal to get your key and endpoint. You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production. > [!TIP]
-> Create a Cognitive Services resource if you plan to access multiple cognitive services under a single endpoint/key. For Form Recognizer access only, create a Form Recognizer resource. Please note that you'lll need a single-service resource if you intend to use [Azure Active Directory authentication](../../../active-directory/authentication/overview-authentication.md).
+> Create a Cognitive Services resource if you plan to access multiple cognitive services under a single endpoint/key. For Form Recognizer access only, create a Form Recognizer resource. Please note that you'll need a single-service resource if you intend to use [Azure Active Directory authentication](../../../active-directory/authentication/overview-authentication.md).
* After your resource deploys, select **Go to resource**. You need the key and endpoint from the resource you create to connect your application to the Form Recognizer API. You'll paste your key and endpoint into the code below later in the quickstart:
Analyze and extract common fields from specific document types using a prebuilt
> > * Analyze an invoice using the prebuilt-invoice model. You can use our [sample invoice document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf) for this quickstart. > * We've added the file URL value to the `invoiceUrl` variable at the top of the file.
-> * To analyze a given file at a URI, you'll use the `beginAnalyzeDocuments` method and pass `PrebuiltModels.Invoice` as the model Id. The returned value is a `result` object containing data about the submitted document.
+> * To analyze a given file at a URI, you'll use the `begin_analyze_document_from_url` method and pass `prebuilt-invoice` as the model Id. The returned value is a `result` object containing data about the submitted document.
> * For simplicity, all the key-value pairs that the service returns are not shown here. To see the list of all supported fields and corresponding types, see our [Invoice](../concept-invoice.md#field-extraction) concept page. **Add the following code sample to your form_recognizer_quickstart.py application. Make sure you update the key and endpoint variables with values from your Azure portal Form Recognizer instance:**
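A minimal sketch of that call pattern, using the `azure-ai-formrecognizer` preview package (the `summarize_fields` helper and the dict-shaped fields it consumes are hypothetical simplifications for illustration; the real SDK returns `DocumentField` objects):

```python
def analyze_invoice(endpoint, key, invoice_url):
    # Requires the azure-ai-formrecognizer preview package (3.2.0b5 at the time of writing).
    from azure.ai.formrecognizer import DocumentAnalysisClient
    from azure.core.credentials import AzureKeyCredential

    client = DocumentAnalysisClient(endpoint, AzureKeyCredential(key))
    # Pass the model ID first, then the document URL; the call returns a long-running poller.
    poller = client.begin_analyze_document_from_url("prebuilt-invoice", invoice_url)
    return poller.result()


def summarize_fields(fields):
    # Hypothetical helper: flatten a {name: {"value": ...}} mapping, dropping empty fields.
    return {name: f["value"] for name, f in fields.items() if f.get("value") is not None}
```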
applied-ai-services Try V3 Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-rest-api.md
The current API version is **2022-06-30-preview**.
| [Form Recognizer REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument) | [Azure SDKs](https://azure.github.io/azure-sdk/releases/latest/index.html) |
-Get started with Azure Form Recognizer using the REST API. Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine learning to extract key-value pairs, text, and tables from your documents. You can easily call Form Recognizer models using the REST API or by integrating our client library SDks into your workflows and applications. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
+Get started with Azure Form Recognizer using the REST API. Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine learning to extract key-value pairs, text, and tables from your documents. You can easily call Form Recognizer models using the REST API or by integrating our client library SDKs into your workflows and applications. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
To learn more about Form Recognizer features and development options, visit our [Overview](../overview.md#form-recognizer-features-and-development-options) page.
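As a sketch of the request shape, the v3 preview analyze operation is a POST against a model-specific path (the exact path below is an assumption based on the 2022-06-30-preview API; verify it against the REST reference before relying on it):

```python
def build_analyze_url(endpoint: str, model_id: str,
                      api_version: str = "2022-06-30-preview") -> str:
    # v3 analyze request: POST {endpoint}/formrecognizer/documentModels/{modelId}:analyze
    return (f"{endpoint.rstrip('/')}/formrecognizer/documentModels/"
            f"{model_id}:analyze?api-version={api_version}")
```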
To learn more about Form Recognizer features and development options, visit our
* A Cognitive Services or Form Recognizer resource. Once you have your Azure subscription, create a [single-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [multi-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) Form Recognizer resource in the Azure portal to get your key and endpoint. You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production. > [!TIP]
-> Create a Cognitive Services resource if you plan to access multiple cognitive services under a single endpoint/key. For Form Recognizer access only, create a Form Recognizer resource. Please note that you'lll need a single-service resource if you intend to use [Azure Active Directory authentication](../../../active-directory/authentication/overview-authentication.md).
+> Create a Cognitive Services resource if you plan to access multiple cognitive services under a single endpoint/key. For Form Recognizer access only, create a Form Recognizer resource. Please note that you'll need a single-service resource if you intend to use [Azure Active Directory authentication](../../../active-directory/authentication/overview-authentication.md).
* After your resource deploys, select **Go to resource**. You need the key and endpoint from the resource you create to connect your application to the Form Recognizer API. You'll paste your key and endpoint into the code below later in the quickstart:
automation Automation Hrw Run Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-hrw-run-runbooks.md
There are two ways to use the Managed Identities in Hybrid Runbook Worker script
1. Use the system-assigned Managed Identity for the Automation account:
- 1. [Configure](/enable-managed-identity-for-automation.md#enable-a-system-assigned-managed-identity-for-an-azure-automation-account) a System-assigned Managed Identity for the Automation account.
- 1. Grant this identity the [required permissions](/enable-managed-identity-for-automation.md#assign-role-to-a-system-assigned-managed-identity) within the Subscription to perform its task.
+ 1. [Configure](enable-managed-identity-for-automation.md#enable-a-system-assigned-managed-identity-for-an-azure-automation-account) a System-assigned Managed Identity for the Automation account.
+ 1. Grant this identity the [required permissions](enable-managed-identity-for-automation.md#assign-role-to-a-system-assigned-managed-identity) within the Subscription to perform its task.
1. Update the runbook to use the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet with the `Identity` parameter to authenticate to Azure resources. This configuration reduces the need to use a Run As account and perform the associated account management. ```powershell
There are two ways to use the Managed Identities in Hybrid Runbook Worker script
# [VM's system-assigned managed identity](#tab/sa-mi)
- 1. [Configure](/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm#enable-system-assigned-managed-identity-on-an-existing-vm) a System Managed Identity for the VM.
- 1. Grant this identity the [required permissions](/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-arm#grant-your-vm-access-to-a-resource-group-in-resource-manager) within the subscription to perform its tasks.
+ 1. [Configure](/azure/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vmss#enable-system-assigned-managed-identity-on-an-existing-vm) a System Managed Identity for the VM.
+ 1. Grant this identity the [required permissions](../active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-arm.md#grant-your-vm-access-to-a-resource-group-in-resource-manager) within the subscription to perform its tasks.
1. Update the runbook to use the [Connect-Az-Account](/powershell/module/az.accounts/connect-azaccount) cmdlet with the `Identity` parameter to authenticate to Azure resources. This configuration reduces the need to use a Run As Account and perform the associated account management. ```powershell
There are two ways to use the Managed Identities in Hybrid Runbook Worker script
# [VM's user-assigned managed identity](#tab/ua-mi)
- 1. [Configure](/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm#user-assigned-managed-identity) a User Managed Identity for the VM.
- 1. Grant this identity the [required permissions](/active-directory/managed-identities-azure-resources/howto-assign-access-portal) within the Subscription to perform its tasks.
+ 1. [Configure](/azure/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vmss#user-assigned-managed-identity) a User Managed Identity for the VM.
+ 1. Grant this identity the [required permissions](/azure/active-directory/managed-identities-azure-resources/howto-assign-access-portal) within the Subscription to perform its tasks.
1. Update the runbook to use the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet with the `Identity ` and `AccountID` parameters to authenticate to Azure resources. This configuration reduces the need to use a Run As account and perform the associated account management. ```powershell
availability-zones Az Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/az-region.md
In the Product Catalog, always-available services are listed as "non-regional" s
| [Azure HPC Cache](../hpc-cache/hpc-cache-overview.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) | | [Azure IoT Hub Device Provisioning Service](../iot-dps/about-iot-dps.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | Azure Red Hat OpenShift | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal](media/icon-zonal.svg) |
-| [Azure SQL Managed Instance for Apache Cassandra](../managed-instance-apache-cassandr) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Managed Instance for Apache Cassandra](../managed-instance-apache-cassandr) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| Azure Storage: Ultra Disk | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) | ### ![An icon that signifies this service is non-regional.](media/icon-always-available.svg) Non-regional services (always-available services)
availability-zones Migrate App Service Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/migrate-app-service-environment.md
Title: Migrate Azure App Service Environment to availability zone support description: Learn how to migrate an Azure App Service Environment to availability zone support. -+ Last updated 06/08/2022
availability-zones Migrate App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/migrate-app-service.md
Title: Migrate Azure App Service to availability zone support description: Learn how to migrate Azure App Service to availability zone support. -+ Last updated 06/07/2022
availability-zones Migrate Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/migrate-storage.md
Title: Migrate Azure Storage accounts to availability zone support description: Learn how to migrate your Azure storage accounts to availability zone support. -+ Last updated 05/09/2022
availability-zones Migrate Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/migrate-vm.md
Title: Migrate Azure Virtual Machines and Azure Virtual Machine Scale Sets to availability zone support description: Learn how to migrate your Azure Virtual Machines and Virtual Machine Scale Sets to availability zone support. -+ Last updated 04/21/2022
azure-arc Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/network-requirements.md
This topic describes the networking requirements for using the Connected Machine
The Azure Connected Machine agent for Linux and Windows communicates outbound securely to Azure Arc over TCP port 443. By default, the agent uses the default route to the internet to reach Azure services. You can optionally [configure the agent to use a proxy server](manage-agent.md#update-or-remove-proxy-settings) if your network requires it. Proxy servers don't make the Connected Machine agent more secure because the traffic is already encrypted.
-To further secure your network connectivity to Azure Arc, instead of using public networks and proxy servers, you can implement an [Azure Arc Private Link Scope](private-link-security.md) (preview).
+To further secure your network connectivity to Azure Arc, instead of using public networks and proxy servers, you can implement an [Azure Arc Private Link Scope](private-link-security.md).
> [!NOTE] > Azure Arc-enabled servers does not support using a [Log Analytics gateway](../../azure-monitor/agents/gateway.md) as a proxy for the Connected Machine agent.
To ensure the security of data in transit to Azure, we strongly encourage you to
* Review additional [prerequisites for deploying the Connected Machine agent](prerequisites.md). * Before you deploy the Azure Arc-enabled servers agent and integrate with other Azure management and monitoring services, review the [Planning and deployment guide](plan-at-scale-deployment.md).
-* To resolve problems, review the [agent connection issues troubleshooting guide](troubleshoot-agent-onboard.md).
+* To resolve problems, review the [agent connection issues troubleshooting guide](troubleshoot-agent-onboard.md).
azure-functions Configure Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/configure-monitoring.md
Title: Configure monitoring for Azure Functions description: Learn how to connect your function app to Application Insights for monitoring and how to configure data collection. Previously updated : 8/31/2020 Last updated : 06/23/2022 -
-# Customer intent: As a developer, I want to understand how to correctly configure monitoring for my functions so I can collect the data that I need.
+
+# Customer intent: As a developer, I want to understand how to configure monitoring for my functions correctly, so I can collect the data that I need.
# How to configure monitoring for Azure Functions
-Azure Functions integrates with Application Insights to better enable you to monitor your function apps. Application Insights, a feature of Azure Monitor, is an extensible Application Performance Management (APM) service that collects data generated by your function app, including information your app writes to logs. Application Insights integration is typically enabled when your function app is created. If your app doesn't have the instrumentation key set, you must first [enable Application Insights integration](#enable-application-insights-integration).
+Azure Functions integrates with Application Insights to better enable you to monitor your function apps. Application Insights, a feature of Azure Monitor, is an extensible Application Performance Management (APM) service that collects data generated by your function app, including information your app writes to logs. Application Insights integration is typically enabled when your function app is created. If your app doesn't have the instrumentation key set, you must first [enable Application Insights integration](#enable-application-insights-integration).
-You can use Application Insights without any custom configuration. The default configuration can result in high volumes of data. If you're using a Visual Studio Azure subscription, you might hit your data cap for Application Insights. To learn more about Application Insights costs, see [Application Insights billing](../azure-monitor/logs/cost-logs.md#application-insights-billing). For more information, see [Solutions with high-volume of telemetry](#solutions-with-high-volume-of-telemetry).
+You can use Application Insights without any custom configuration. The default configuration can result in high volumes of data. If you're using a Visual Studio Azure subscription, you might hit your data cap for Application Insights. For information about Application Insights costs, see [Application Insights billing](../azure-monitor/logs/cost-logs.md#application-insights-billing). For more information, see [Solutions with high-volume of telemetry](#solutions-with-high-volume-of-telemetry).
-Later in this article, you learn how to configure and customize the data that your functions send to Application Insights. For a function app, logging is configured in the [host.json] file.
+Later in this article, you learn how to configure and customize the data that your functions send to Application Insights. For a function app, logging is configured in the *[host.json]* file.
> [!NOTE]
-> You can use specially configured application settings to represent specific settings in a host.json file for a specific environment. This lets you effectively change host.json settings without having to republish the host.json file in your project. To learn more, see [Override host.json values](functions-host-json.md#override-hostjson-values).
+> You can use specially configured application settings to represent specific settings in a *host.json* file for a specific environment. This lets you effectively change *host.json* settings without having to republish the *host.json* file in your project. For more information, see [Override host.json values](functions-host-json.md#override-hostjson-values).
## Configure categories
-The Azure Functions logger includes a *category* for every log. The category indicates which part of the runtime code or your function code wrote the log. Categories differ between version 1.x and later versions. The following chart describes the main categories of logs that the runtime creates.
+The Azure Functions logger includes a *category* for every log. The category indicates which part of the runtime code or your function code wrote the log. Categories differ between version 1.x and later versions. The following chart describes the main categories of logs that the runtime creates:
# [v2.x+](#tab/v2) | Category | Table | Description | | -- | -- | -- |
-| **`Function.<YOUR_FUNCTION_NAME>`** | **dependencies**| Dependency data is automatically collected for some services. For successful runs, these logs are at the `Information` level. Not supported for C# apps running in an [isolated process](dotnet-isolated-process-guide.md). To learn more, see [Dependencies](functions-monitoring.md#dependencies). Exceptions are logged at the `Error` level. The runtime also creates `Warning` level logs, such as when queue messages are sent to the [poison queue](functions-bindings-storage-queue-trigger.md#poison-messages). |
-| **`Function.<YOUR_FUNCTION_NAME>`** | **customMetrics**<br/>**customEvents** | C# and JavaScript SDKs let you collect custom metrics and log custom events. To learn more, see [Custom telemetry data](functions-monitoring.md#custom-telemetry-data).|
-| **`Function.<YOUR_FUNCTION_NAME>`** | **traces**| Includes function started and completed logs for specific function runs. For successful runs, these logs are at the `Information` level. Exceptions are logged at the `Error` level. The runtime also creates `Warning` level logs, such as when queue messages are sent to the [poison queue](functions-bindings-storage-queue-trigger.md#poison-messages). |
-| **`Function.<YOUR_FUNCTION_NAME>.User`** | **traces**| User-generated logs, which can be any log level. To learn more about writing to logs from your functions, see [Writing to logs](functions-monitoring.md#writing-to-logs). |
+| **`Function.<YOUR_FUNCTION_NAME>`** | **dependencies**| Dependency data is automatically collected for some services. For successful runs, these logs are at the `Information` level. For more information, see [Dependencies](functions-monitoring.md#dependencies). Exceptions are logged at the `Error` level. The runtime also creates `Warning` level logs, such as when queue messages are sent to the [poison queue](functions-bindings-storage-queue-trigger.md#poison-messages). |
+| **`Function.<YOUR_FUNCTION_NAME>`** | **customMetrics**<br/>**customEvents** | C# and JavaScript SDKs let you collect custom metrics and log custom events. For more information, see [Custom telemetry data](functions-monitoring.md#custom-telemetry-data).|
+| **`Function.<YOUR_FUNCTION_NAME>`** | **traces**| Includes function started and completed logs for specific function runs. For successful runs, these logs are at the `Information` level. Exceptions are logged at the `Error` level. The runtime also creates `Warning` level logs, such as when queue messages are sent to the [poison queue](functions-bindings-storage-queue-trigger.md#poison-messages). |
+| **`Function.<YOUR_FUNCTION_NAME>.User`** | **traces**| User-generated logs, which can be any log level. For more information about writing to logs from your functions, see [Writing to logs](functions-monitoring.md#writing-to-logs). |
| **`Host.Aggregator`** | **customMetrics** | These runtime-generated logs provide counts and averages of function invocations over a [configurable](#configure-the-aggregator) period of time. The default period is 30 seconds or 1,000 results, whichever comes first. Examples are the number of runs, success rate, and duration. All of these logs are written at `Information` level. If you filter at `Warning` or above, you won't see any of this data. | | **`Host.Results`** | **requests** | These runtime-generated logs indicate success or failure of a function. All of these logs are written at `Information` level. If you filter at `Warning` or above, you won't see any of this data. |
-| **`Microsoft`** | **traces** | Fully-qualified log category that reflects a .NET runtime component invoked by the host. |
-| **`Worker`** | **traces** | Logs generated by the language worker process for non-.NET languages. Language worker logs may also be logged in a `Microsoft.*` category, such as `Microsoft.Azure.WebJobs.Script.Workers.Rpc.RpcFunctionInvocationDispatcher`. These logs are written at `Information` level.|
+| **`Microsoft`** | **traces** | Fully qualified log category that reflects a .NET runtime component invoked by the host. |
+| **`Worker`** | **traces** | Logs generated by the language worker process for non-.NET languages. Language worker logs might also be logged in a `Microsoft.*` category, such as `Microsoft.Azure.WebJobs.Script.Workers.Rpc.RpcFunctionInvocationDispatcher`. These logs are written at `Information` level.|
-For .NET class library functions, these categories assume you are using `ILogger` and not `ILogger<T>`. To learn more, see the [Functions ILogger documentation](functions-dotnet-class-library.md#ilogger).
+> [!NOTE]
+> For .NET class library functions, these categories assume you're using `ILogger` and not `ILogger<T>`. For more information, see the [Functions ILogger documentation](functions-dotnet-class-library.md#ilogger).
# [v1.x](#tab/v1) | Category | Table | Description | | -- | -- | -- |
-| **`Function`** | **traces**| User-generated logs, which can be any log level. To learn more about writing to logs from your functions, see [Writing to logs](functions-monitoring.md#writing-to-logs). |
+| **`Function`** | **traces**| User-generated logs, which can be any log level. For more information about writing to logs from your functions, see [Writing to logs](functions-monitoring.md#writing-to-logs). |
| **`Host.Aggregator`** | **customMetrics** | These runtime-generated logs provide counts and averages of function invocations over a [configurable](#configure-the-aggregator) period of time. The default period is 30 seconds or 1,000 results, whichever comes first. Examples are the number of runs, success rate, and duration. All of these logs are written at `Information` level. If you filter at `Warning` or above, you won't see any of this data. | | **`Host.Executor`** | **traces** | Includes **Function started** and **Function completed** logs for specific function runs. For successful runs, these logs are `Information` level. Exceptions are logged at `Error` level. The runtime also creates `Warning` level logs, such as when queue messages are sent to the [poison queue](functions-bindings-storage-queue-trigger.md#poison-messages). | | **`Host.Results`** | **requests** | These runtime-generated logs indicate success or failure of a function. All of these logs are written at `Information` level. If you filter at `Warning` or above, you won't see any of this data. |
-The **Table** column indicates to which table in Application Insights the log is written.
+The **Table** column indicates to which table in Application Insights the log is written.
## Configure log levels [!INCLUDE [functions-log-levels](../../includes/functions-log-levels.md)]
-For each category, you indicate the minimum log level to send. The host.json settings vary depending on the [Functions runtime version](functions-versions.md).
+For each category, you indicate the minimum log level to send. The *host.json* settings vary depending on the [Functions runtime version](functions-versions.md).
The example below defines logging based on the following rules:
-+ For logs of `Host.Results` or `Function`, only log events at `Error` or a higher level.
++ For logs of `Host.Results` or `Function`, only log events at `Error` or a higher level.
+ For logs of `Host.Aggregator`, log all generated metrics (`Trace`).
+ For all other logs, including user logs, log only `Information` level and higher events.
The example below defines logging based on the following rules:
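Such a configuration might look like the following *host.json* fragment (a sketch matching the rules just listed and the v2.x+ schema, not necessarily the article's exact sample):

```json
{
  "logging": {
    "logLevel": {
      "default": "Information",
      "Host.Results": "Error",
      "Function": "Error",
      "Host.Aggregator": "Trace"
    }
  }
}
```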
-If [host.json] includes multiple logs that start with the same string, the more defined logs ones are matched first. Consider the following example that logs everything in the runtime, except `Host.Aggregator`, at the `Error` level:
+If *[host.json]* includes multiple logs that start with the same string, the more specifically defined logs are matched first. Consider the following example that logs everything in the runtime, except `Host.Aggregator`, at the `Error` level:
# [v2.x+](#tab/v2)
If [host.json] includes multiple logs that start with the same string, the more
-You can use a log level setting of `None` prevent any logs from being written for a category.
+You can use a log level setting of `None` to prevent any logs from being written for a category.
> [!CAUTION]
-> Azure Functions integrates with Application Insights by storing telemetry events in Application Insights tables, setting a category log level to any value different from `Information` will prevent the telemetry to flow to those tables, as outcome, you will not be able to see the related data in Application Insights or Function Monitor tab.
+> Azure Functions integrates with Application Insights by storing telemetry events in Application Insights tables. Setting a category log level to any value different from `Information` will prevent the telemetry from flowing to those tables. As a result, you won't be able to see the related data in the **Application Insights** or **Function Monitor** tab.
> > From the samples above:
-> * If `Host.Results` category is set to `Error` log level, it will only gather host execution telemetry events in the `requests` table for failed function executions, preventing to display host execution details of success executions in both Application Insights and Function Monitor tab.
-> * If `Function` category is set to `Error` log level, it will stop gathering function telemetry data related to `dependencies`, `customMetrics`, and `customEvents` for all the functions, preventing to see any of this data in Application Insights. It will only gather `traces` logged with `Error` level.
+> + If the `Host.Results` category is set to `Error` log level, it will only gather host execution telemetry events in the `requests` table for failed function executions, preventing the display of host execution details for successful executions in both the **Application Insights** and **Function Monitor** tab.
+> + If the `Function` category is set to `Error` log level, it will stop gathering function telemetry data related to `dependencies`, `customMetrics`, and `customEvents` for all the functions, preventing you from seeing any of this data in Application Insights. It will only gather `traces` logged with `Error` level.
>
-> In both cases you will continue to collect errors and exceptions data in Application Insights and Function Monitor tab. For more information, see [Solutions with high-volume of telemetry](#solutions-with-high-volume-of-telemetry).
-
+> In both cases you will continue to collect errors and exceptions data in the **Application Insights** and **Function Monitor** tab. For more information, see [Solutions with high-volume of telemetry](#solutions-with-high-volume-of-telemetry).
## Configure the aggregator
-As noted in the previous section, the runtime aggregates data about function executions over a period of time. The default period is 30 seconds or 1,000 runs, whichever comes first. You can configure this setting in the [host.json] file. Here's an example:
+As noted in the previous section, the runtime aggregates data about function executions over a period of time. The default period is 30 seconds or 1,000 runs, whichever comes first. You can configure this setting in the *[host.json]* file. Here's an example:
```json {
As noted in the previous section, the runtime aggregates data about function exe
## Configure sampling
-Application Insights has a [sampling](../azure-monitor/app/sampling.md) feature that can protect you from producing too much telemetry data on completed executions at times of peak load. When the rate of incoming executions exceeds a specified threshold, Application Insights starts to randomly ignore some of the incoming executions. The default setting for maximum number of executions per second is 20 (five in version 1.x). You can configure sampling in [host.json](./functions-host-json.md#applicationinsights). Here's an example:
+Application Insights has a [sampling](../azure-monitor/app/sampling.md) feature that can protect you from producing too much telemetry data on completed executions at times of peak load. When the rate of incoming executions exceeds a specified threshold, Application Insights starts to randomly ignore some of the incoming executions. The default setting for maximum number of executions per second is 20 (five in version 1.x). You can configure sampling in [*host.json*](./functions-host-json.md#applicationinsights). Here's an example:
# [v2.x+](#tab/v2)
Application Insights has a [sampling](../azure-monitor/app/sampling.md) feature
} ```
+You can exclude certain types of telemetry from sampling. In this example, data of type `Request` and `Exception` is excluded from sampling. This ensures that *all* function executions (requests) and exceptions are logged while other types of telemetry remain subject to sampling.
-You can exclude certain types of telemetry from sampling. In this example, data of type `Request` and `Exception` is excluded from sampling. This makes sure that *all* function executions (requests) and exceptions are logged while other types of telemetry remain subject to sampling.
-
-# [v1.x](#tab/v1)
+# [v1.x](#tab/v1)
```json {
You can exclude certain types of telemetry from sampling. In this example, data
```
-To learn more, see [Sampling in Application Insights](../azure-monitor/app/sampling.md).
+For more information, see [Sampling in Application Insights](../azure-monitor/app/sampling.md).
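The example bodies are truncated in this excerpt. As a sketch of the behavior described above, a v2.x *host.json* that keeps the default cap of 20 items per second and excludes `Request` and `Exception` telemetry from sampling might look like the following (values are illustrative):

```json
{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "maxTelemetryItemsPerSecond": 20,
        "excludedTypes": "Request;Exception"
      }
    }
  }
}
```

`excludedTypes` takes a semicolon-delimited list, so excluded types are never dropped by sampling regardless of load.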
## Configure scale controller logs
-_This feature is in preview._
+_This feature is in preview._
You can have the [Azure Functions scale controller](./event-driven-scaling.md#runtime-scaling) emit logs to either Application Insights or to Blob storage to better understand the decisions the scale controller is making for your function app.
-To enable this feature, you add an application setting named `SCALE_CONTROLLER_LOGGING_ENABLED` to your function app settings. The value of this setting must be of the format `<DESTINATION>:<VERBOSITY>`, based on the following:
+To enable this feature, add an application setting named `SCALE_CONTROLLER_LOGGING_ENABLED` to your function app settings. The value of this setting must be in the format `<DESTINATION>:<VERBOSITY>`, as described here:
[!INCLUDE [functions-scale-controller-logging](../../includes/functions-scale-controller-logging.md)]
az functionapp config appsettings set --name <FUNCTION_APP_NAME> \
--settings SCALE_CONTROLLER_LOGGING_ENABLED=AppInsights:Verbose ```
-In this example, replace `<FUNCTION_APP_NAME>` and `<RESOURCE_GROUP_NAME>` with the name of your function app and the resource group name, respectively.
+In this example, replace `<FUNCTION_APP_NAME>` and `<RESOURCE_GROUP_NAME>` with the name of your function app and the resource group name, respectively.
The following Azure CLI command disables logging by setting the verbosity to `None`:
az functionapp config appsettings delete --name <FUNCTION_APP_NAME> \
--setting-names SCALE_CONTROLLER_LOGGING_ENABLED ```
-With scale controller logging enabled, you are now able to [query your scale controller logs](analyze-telemetry-data.md#query-scale-controller-logs).
+With scale controller logging enabled, you're now able to [query your scale controller logs](analyze-telemetry-data.md#query-scale-controller-logs).
## Enable Application Insights integration For a function app to send data to Application Insights, it needs to know the instrumentation key of an Application Insights resource. The key must be in an app setting named **APPINSIGHTS_INSTRUMENTATIONKEY**.
-When you create your function app [in the Azure portal](./functions-get-started.md), from the command line by using [Azure Functions Core Tools](./create-first-function-cli-csharp.md), or by using [Visual Studio Code](./create-first-function-vs-code-csharp.md), Application Insights integration is enabled by default. The Application Insights resource has the same name as your function app, and it's created either in the same region or in the nearest region.
+When you create your function app in the [Azure portal](./functions-get-started.md), from the command line by using [Azure Functions Core Tools](./create-first-function-cli-csharp.md), or by using [Visual Studio Code](./create-first-function-vs-code-csharp.md), Application Insights integration is enabled by default. The Application Insights resource has the same name as your function app, and it's created either in the same region or in the nearest region.
### New function app in the portal
-To review the Application Insights resource being created, select it to expand the **Application Insights** window. You can change the **New resource name** or choose a different **Location** in an [Azure geography](https://azure.microsoft.com/global-infrastructure/geographies/) where you want to store your data.
+To review the Application Insights resource being created, select it to expand the **Application Insights** window. You can change the **New resource name** or select a different **Location** in an [Azure geography](https://azure.microsoft.com/global-infrastructure/geographies/) where you want to store your data.
-![Enable Application Insights while creating a function app](media/functions-monitoring/enable-ai-new-function-app.png)
-When you choose **Create**, an Application Insights resource is created with your function app, which has the `APPINSIGHTS_INSTRUMENTATIONKEY` set in application settings. Everything is ready to go.
+When you select **Create**, an Application Insights resource is created with your function app, which has the `APPINSIGHTS_INSTRUMENTATIONKEY` set in application settings. Everything is ready to go.
<a id="manually-connect-an-app-insights-resource"></a> ### Add to an existing function app If an Application Insights resource wasn't created with your function app, use the following steps to create the resource. You can then add the instrumentation key from that resource as an [application setting](functions-how-to-use-azure-function-app-settings.md#settings) in your function app.
-1. In the [Azure portal](https://portal.azure.com), search for and select **function app**, and then choose your function app.
+1. In the [Azure portal](https://portal.azure.com), search for and select **function app**, and then select your function app.
1. Select the **Application Insights is not configured** banner at the top of the window. If you don't see this banner, then your app might already have Application Insights enabled.
- :::image type="content" source="media/configure-monitoring/enable-application-insights.png" alt-text="Enable Application Insights from the portal":::
+ :::image type="content" source="media/configure-monitoring/enable-application-insights.png" alt-text="Screenshot of enabling Application Insights from the portal.":::
-1. Expand **Change your resource** and create an Application Insights resource by using the settings specified in the following table.
+1. Expand **Change your resource** and create an Application Insights resource by using the settings specified in the following table:
| Setting | Suggested value | Description | | | - | -- |
- | **New resource name** | Unique app name | It's easiest to use the same name as your function app, which must be unique in your subscription. |
- | **Location** | West Europe | If possible, use the same [region](https://azure.microsoft.com/regions/) as your function app, or one that's close to that region. |
+ | **New resource name** | Unique app name | It's easiest to use the same name as your function app, which must be unique in your subscription. |
+ | **Location** | West Europe | If possible, use the same [region](https://azure.microsoft.com/regions/) as your function app, or one that's close to that region. |
- :::image type="content" source="media/configure-monitoring/ai-general.png" alt-text="Create an Application Insights resource":::
+ :::image type="content" source="media/configure-monitoring/ai-general.png" alt-text="Screenshot of creating an Application Insights resource.":::
-1. Select **Apply**.
+1. Select **Apply**.
- The Application Insights resource is created in the same resource group and subscription as your function app. After the resource is created, close the Application Insights window.
+ The Application Insights resource is created in the same resource group and subscription as your function app. After the resource is created, close the **Application Insights** window.
1. In your function app, select **Configuration** under **Settings**, and then select **Application settings**. If you see a setting named `APPINSIGHTS_INSTRUMENTATIONKEY`, Application Insights integration is enabled for your function app running in Azure. If for some reason this setting doesn't exist, add it using your Application Insights instrumentation key as the value. > [!NOTE]
-> Early versions of Functions used built-in monitoring, which is no longer recommended. When enabling Application Insights integration for such a function app, you must also [disable built-in logging](#disable-built-in-logging).
+> Early versions of Functions used built-in monitoring, which is no longer recommended. When you're enabling Application Insights integration for such a function app, you must also [disable built-in logging](#disable-built-in-logging).
## Disable built-in logging When you enable Application Insights, disable the built-in logging that uses Azure Storage. The built-in logging is useful for testing with light workloads, but isn't intended for high-load production use. For production monitoring, we recommend Application Insights. If built-in logging is used in production, the logging record might be incomplete because of throttling on Azure Storage.
-To disable built-in logging, delete the `AzureWebJobsDashboard` app setting. For information about how to delete app settings in the Azure portal, see the **Application settings** section of [How to manage a function app](functions-how-to-use-azure-function-app-settings.md#settings). Before you delete the app setting, make sure no existing functions in the same function app use the setting for Azure Storage triggers or bindings.
+To disable built-in logging, delete the `AzureWebJobsDashboard` app setting. For more information about how to delete app settings in the Azure portal, see the **Application settings** section of [How to manage a function app](functions-how-to-use-azure-function-app-settings.md#settings). Before you delete the app setting, ensure that no existing functions in the same function app use the setting for Azure Storage triggers or bindings.
-## Solutions with high volume of telemetry
+## Solutions with high volume of telemetry
-Your function apps can be an essential part of solutions that by nature cause high volumes of telemetry (IoT solutions, event driven based solutions, high load financial systems, integration systems...). In this case, you should consider extra configuration to reduce costs while maintaining observability.
+Function apps can be an essential part of solutions that by nature cause high volumes of telemetry, such as IoT solutions, event-driven solutions, high-load financial systems, and integration systems. In such cases, you should consider extra configuration to reduce costs while maintaining observability.
-Depending on how the telemetry generated is going to be consumed, real-time dashboards, alerting, detailed diagnostics, and so on, you will need to define a strategy to reduce the volume of data generated. That strategy will allow you to properly monitor, operate, and diagnose your function apps in production. You can consider the following options:
+The generated telemetry can be consumed in real-time dashboards, alerting, detailed diagnostics, and so on. Depending on how it will be consumed, you'll need to define a strategy to reduce the volume of data generated. This strategy allows you to properly monitor, operate, and diagnose your function apps in production. You can consider the following options:
-* **Use sampling**: as mentioned [earlier](#configure-sampling), it will help to dramatically reduce the volume of telemetry events ingested while maintaining a statistically correct analysis. It could happen that even using sampling you still get high volume of telemetry. Inspect the options that [Adaptive sampling](../azure-monitor/app/sampling.md#configuring-adaptive-sampling-for-aspnet-applications) provides to you, for example set the `maxTelemetryItemsPerSecond` to a value that balances the volume generated with your monitoring needs. Keep in mind that the telemetry sampling is applied per host executing your function app.
++ **Use sampling**: As mentioned [earlier](#configure-sampling), sampling helps to dramatically reduce the volume of telemetry events ingested while maintaining a statistically correct analysis. Even with sampling, you might still get a high volume of telemetry. Inspect the options that [adaptive sampling](../azure-monitor/app/sampling.md#configuring-adaptive-sampling-for-aspnet-applications) provides. For example, set `maxTelemetryItemsPerSecond` to a value that balances the volume generated with your monitoring needs. Keep in mind that telemetry sampling is applied per host executing your function app.
-* **Default log level**: use `Warning` or `Error` as the default value for all telemetry categories. Now you can decide which [categories](#configure-categories) you want to set at `Information` so you can monitor and diagnose your functions properly.
++ **Default log level**: Use `Warning` or `Error` as the default value for all telemetry categories. Now, you can decide which [categories](#configure-categories) you want to set at `Information` level so that you can monitor and diagnose your functions properly.
-* **Tune your functions telemetry**: with the default log level set to `Error` or `Warning`, no detailed information from each function will be gathered (dependencies, custom metrics, custom events, and traces). For those functions that are key for production monitoring, define an explicit entry for `Function.<YOUR_FUNCTION_NAME>` category and set it to `Information`, so you can gather detailed information. At this point, to avoid gathering [user-generated logs](functions-monitoring.md#writing-to-logs) at `Information` level, set the `Function.<YOUR_FUNCTION_NAME>.User` category to `Error` or `Warning` log level.
++ **Tune your functions telemetry**: With the default log level set to `Error` or `Warning`, no detailed information from each function will be gathered (dependencies, custom metrics, custom events, and traces). For those functions that are key for production monitoring, define an explicit entry for `Function.<YOUR_FUNCTION_NAME>` category and set it to `Information`, so that you can gather detailed information. At this point, to avoid gathering [user-generated logs](functions-monitoring.md#writing-to-logs) at `Information` level, set the `Function.<YOUR_FUNCTION_NAME>.User` category to `Error` or `Warning` log level.
-* **Host.Aggregator category**: as described in [Configure categories](#configure-categories), this category provides aggregated information of function invocations. The information from this category is gathered in Application Insights `customMetrics` table, and it's shown in the function Overview tab in the Azure portal. Depending on how you configure the aggregator, consider that there will be a delay, determined by the `flushTimeout`, in the telemetry gathered. If you set this category to other value different than `Information`, you will stop gathering the data in the `customMetrics` table and will not display metrics in the function Overview tab.
++ **Host.Aggregator category**: As described in [configure categories](#configure-categories), this category provides aggregated information about function invocations. The information from this category is gathered in the Application Insights `customMetrics` table, and it's shown in the function **Overview** tab in the Azure portal. Depending on how you configure the aggregator, consider that there will be a delay, determined by the `flushTimeout`, in the telemetry gathered. If you set this category to a value other than `Information`, you'll stop gathering the data in the `customMetrics` table and won't display metrics in the function **Overview** tab.
+ The following screenshot shows `Host.Aggregator` telemetry data displayed in the function **Overview** tab:
- The following screenshot shows Host.Aggregator telemetry data displayed in the function Overview tab.
:::image type="content" source="media/configure-monitoring/host-aggregator-function-overview.png" alt-text="Screenshot of Host.Aggregator telemetry displayed in function Overview tab." lightbox="media/configure-monitoring/host-aggregator-function-overview-big.png":::
- The following screenshot shows Host.Aggregator telemetry data in Application Insights customMetrics table.
+ The following screenshot shows `Host.Aggregator` telemetry data in Application Insights `customMetrics` table:
+ :::image type="content" source="media/configure-monitoring/host-aggregator-custom-metrics.png" alt-text="Screenshot of Host.Aggregator telemetry in customMetrics Application Insights table." lightbox="media/configure-monitoring/host-aggregator-custom-metrics-big.png":::
-* **Host.Results category**: as described in [Configure categories](#configure-categories), this category provides the runtime-generated logs indicating the success or failure of a function invocation. The information from this category is gathered in the Application Insights `requests` table, and it is shown in the function Monitor tab and in different Application Insights dashboards (Performance, Failures...). If you set this category to other value different than `Information`, you will only gather telemetry generated at the log level defined (or higher), for example, setting it to `error` results in tracking requests data only for failed executions.
++ **Host.Results category**: As described in [configure categories](#configure-categories), this category provides the runtime-generated logs indicating the success or failure of a function invocation. The information from this category is gathered in the Application Insights `requests` table, and it's shown in the function **Monitor** tab and in different Application Insights dashboards (Performance, Failures, and so on). If you set this category to a value other than `Information`, you'll only gather telemetry generated at the defined log level or higher. For example, setting it to `Error` results in tracking requests data only for failed executions.
+ The following screenshot shows the `Host.Results` telemetry data displayed in the function **Monitor** tab:
- The following screenshot shows the Host.Results telemetry data displayed in the function Monitor tab.
:::image type="content" source="media/configure-monitoring/host-results-function-monitor.png" alt-text="Screenshot of Host.Results telemetry in function Monitor tab." lightbox="media/configure-monitoring/host-results-function-monitor-big.png":::
- The following screenshot shows Host.Results telemetry data displayed in Application Insights Performance dashboard.
+ The following screenshot shows `Host.Results` telemetry data displayed in Application Insights Performance dashboard:
+ :::image type="content" source="media/configure-monitoring/host-results-application-insights.png" alt-text="Screenshot of Host.Results telemetry in Application Insights Performance dashboard." lightbox="media/configure-monitoring/host-results-application-insights-big.png":::
-* **Host.Aggregator vs Host.Results**: both categories provide good insights about function executions, if needed, you can remove the detailed information from one of these categories, so you can use the other for monitoring and alerting.
++ **Host.Aggregator vs Host.Results**: Both categories provide good insights about function executions. If needed, you can remove the detailed information from one of these categories, so that you can use the other for monitoring and alerting. Here's a sample: # [v2.x+](#tab/v2) ``` json
Here's a sample:
```
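The sample body is truncated in this excerpt. As a sketch, a v2.x *host.json* that implements the configuration described in the points that follow (default `Warning`, per-function tuning for `Function1`, and one sampled item per second per type with exceptions excluded) would look like this:

```json
{
  "version": "2.0",
  "logging": {
    "logLevel": {
      "default": "Warning",
      "Function": "Error",
      "Host.Aggregator": "Error",
      "Host.Results": "Information",
      "Function.Function1": "Information",
      "Function.Function1.User": "Error"
    },
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "maxTelemetryItemsPerSecond": 1,
        "excludedTypes": "Exception"
      }
    }
  }
}
```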
-With this configuration, you will have:
+With this configuration, you'll have:
-* The default value for all functions and telemetry categories is set to `Warning` (including Microsoft and Worker categories) so, by default, all errors and warnings generated by both, the runtime and custom logging, are gathered.
++ The default value for all functions and telemetry categories is set to `Warning` (including Microsoft and Worker categories). So, by default, all errors and warnings generated by both the runtime and custom logging are gathered.
-* The `Function` category log level is set to `Error`, so for all functions, by default, only exceptions and error logs will be gathered (dependencies, user-generated metrics, and user-generated events will be skipped).
++ The `Function` category log level is set to `Error`, so for all functions, by default, only exceptions and error logs will be gathered (dependencies, user-generated metrics, and user-generated events will be skipped).
-* For the `Host.Aggregator` category, as it is set to `Error` log level, no aggregated information from function invocations will be gathered in the `customMetrics` Application Insights table, and no information about executions counts (total, successful, failed...) will be shown in the function overview dashboard.
++ For the `Host.Aggregator` category, because it's set to the `Error` log level, aggregated information from function invocations won't be gathered in the `customMetrics` Application Insights table, and information about execution counts (total, successful, and failed) won't be shown in the function overview dashboard.
-* For the `Host.Results` category, all the host execution information is gathered in the `requests` Application Insights table. All the invocations results will be shown in the function Monitor dashboard and in Application Insights dashboards.
++ For the `Host.Results` category, all the host execution information is gathered in the `requests` Application Insights table. All the invocations results will be shown in the function Monitor dashboard and in Application Insights dashboards.
-* For the function called `Function1`, we have set the log level to `Information` so, for this concrete function, all the telemetry is gathered (dependency, custom metrics, custom events). For the same function, the `Function1.User` category (user-generated traces) is set to `Error`, so only custom error logging will be gathered. Note that per function configuration is not supported in v1.x.
++ For the function called `Function1`, we set the log level to `Information`. So, for this specific function, all the telemetry is gathered (dependencies, custom metrics, and custom events). For the same function, the `Function1.User` category (user-generated traces) is set to `Error`, so only custom error logging will be gathered.
-* Sampling is configured to send one telemetry item per second per type, excluding the exceptions. This sampling will happen for each server host running our function app, so if we have four instances, this configuration will emit four telemetry items per second per type and all the exceptions that might occur. Note that, for metrics counts such as request rate and exception rate are adjusted to compensate for the sampling rate, so that they show approximately correct values in Metric Explorer.
+ > [!NOTE]
+ > Configuration per function isn't supported in v1.x.
-> [!TIP]
-> Experiment with different configurations to ensure you cover your requirements for logging, monitoring and alerting. Ensure you have detailed diagnostics in case of unexpected errors or malfunctioning.
++ Sampling is configured to send one telemetry item per second per type, excluding exceptions. This sampling happens for each server host running our function app. So, with four instances, this configuration emits four telemetry items per second per type, plus all the exceptions that might occur.
-### Overriding monitoring configuration at runtime
-Finally, there could be situations where you need to quickly change the logging behavior of a certain category in production, and you don't want to make a whole deployment just for a change in the `host.json` file. For such as cases, you can override the [host json values](functions-host-json.md#override-hostjson-values).
+ > [!NOTE]
+ > Metric counts such as request rate and exception rate are adjusted to compensate for the sampling rate, so that they show approximately correct values in Metric Explorer.
+
+> [!TIP]
+> Experiment with different configurations to ensure that you cover your requirements for logging, monitoring, and alerting. Also, ensure that you have detailed diagnostics in case of unexpected errors or malfunctioning.
+## Overriding monitoring configuration at runtime
-To configure these values at App settings level (and avoid redeployment on just host.json changes), you should override specific `host.json` values by creating an equivalent value as an application setting. When the runtime finds an application setting in the format `AzureFunctionsJobHost__path__to__setting`, it overrides the equivalent `host.json` setting located at `path.to.setting` in the JSON. When expressed as an application setting, the dot (`.`) used to indicate JSON hierarchy is replaced by a double underscore (`__`). For example, you can use the below app settings to configure individual function log levels as in `host.json` above.
+Finally, there could be situations where you need to quickly change the logging behavior of a certain category in production, and you don't want to make a whole deployment just for a change in the *host.json* file. For such cases, you can override the [host.json values](functions-host-json.md#override-hostjson-values).
+To configure these values at the app settings level (and avoid redeployment on just *host.json* changes), you should override specific *host.json* values by creating an equivalent value as an application setting. When the runtime finds an application setting in the format `AzureFunctionsJobHost__path__to__setting`, it overrides the equivalent `host.json` setting located at `path.to.setting` in the JSON. When expressed as an application setting, the dot (`.`) used to indicate JSON hierarchy is replaced by a double underscore (`__`). For example, you can use the following app settings to configure individual function log levels as in the `host.json` above.
| Host.json path | App setting | |-|-|
To configure these values at App settings level (and avoid redeployment on just
| logging.logLevel.Function.Function1 | AzureFunctionsJobHost__logging__logLevel__Function.Function1 | | logging.logLevel.Function.Function1.User | AzureFunctionsJobHost__logging__logLevel__Function.Function1.User | - You can override the settings directly at the Azure portal Function App Configuration blade or by using an Azure CLI or PowerShell script. # [az cli](#tab/v2)
Update-AzFunctionAppSetting -Name MyAppName -ResourceGroupName MyResourceGroupNa
> [!NOTE] > Overriding the `host.json` through changing app settings will restart your function app.
-
+ ## Next steps
-To learn more about monitoring, see:
+For more information about monitoring, see:
+ [Monitor Azure Functions](functions-monitoring.md) + [Analyze Azure Functions telemetry data in Application Insights](analyze-telemetry-data.md) + [Application Insights](/azure/application-insights/) - [host.json]: functions-host-json.md
azure-functions Functions Create Function App Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-function-app-portal.md
Title: Create your first function in the Azure portal description: Learn how to create your first Azure Function for serverless execution using the Azure portal. Previously updated : 03/26/2020- Last updated : 06/10/2022+ # Create your first function in the Azure portal Azure Functions lets you run your code in a serverless environment without having to first create a virtual machine (VM) or publish a web application. In this article, you learn how to use Azure Functions to create a "hello world" HTTP trigger function in the Azure portal.
-We instead recommend that you [develop your functions locally](functions-develop-local.md) and publish to a function app in Azure.
+We recommend that you [develop your functions locally](functions-develop-local.md) and publish to a function app in Azure.
Use one of the following links to get started with your chosen local development environment and language: | Visual Studio Code | Terminal/command prompt | Visual Studio |
Use one of the following links to get started with your chosen local development
[!INCLUDE [functions-portal-language-support](../../includes/functions-portal-language-support.md)]
-## Prerequisites
+## Prerequisites
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
Next, create a function in the new function app.
## <a name="create-function"></a>Create an HTTP trigger function
-1. From the left menu of the **Function App** window, select **Functions**, then select **Create** from the top menu.
-
-1. From the **Create Function** window, leave the Development environment property has **Develop in portal** and select the **HTTP trigger** template.
+1. From the left menu of the **Function App** window, select **Functions**, and then select **Create** from the top menu.
- ![Choose HTTP trigger function](./media/functions-create-first-azure-function/function-app-select-http-trigger.png)
+1. From the **Create Function** window, leave the **Development environment** property as **Develop in portal**, and then select the **HTTP trigger** template.
-1. Under **Template details** use `HttpExample` for **New Function**, choose **Anonymous** from the **[Authorization level](functions-bindings-http-webhook-trigger.md#authorization-keys)** drop-down list, and then select **Create**.
+ :::image type="content" source="./media/functions-create-first-azure-function/function-app-http-trigger.png" alt-text="Screenshot of HTTP trigger function.":::
+
+1. Under **Template details** use `HttpExample` for **New Function**, select **Anonymous** from the **[Authorization level](functions-bindings-http-webhook-trigger.md#authorization-keys)** drop-down list, and then select **Create**.
Azure creates the HTTP trigger function. Now, you can run the new function by sending an HTTP request. ## Test the function
-1. In your new HTTP trigger function, select **Code + Test** from the left menu, then select **Get function URL** from the top menu.
+1. In your new HTTP trigger function, select **Code + Test** from the left menu, and then select **Get function URL** from the top menu.
- ![Select Get function URL](./media/functions-create-first-azure-function/function-app-select-get-function-url.png)
+ :::image type="content" source="./media/functions-create-first-azure-function/function-app-http-example-get-function-url.png" alt-text="Screenshot of Get function URL window.":::
-1. In the **Get function URL** dialog box, select **default** from the drop-down list, and then select the **Copy to clipboard** icon.
+1. In the **Get function URL** dialog, select **default** from the drop-down list, and then select the **Copy to clipboard** icon.
- ![Copy the function URL from the Azure portal](./media/functions-create-first-azure-function/function-app-develop-tab-testing.png)
+ :::image type="content" source="./media/functions-create-first-azure-function/function-app-develop-tab-testing.png" alt-text="Screenshot of Copy the function URL window from the Azure portal.":::
-1. Paste the function URL into your browser's address bar. Add the query string value `?name=<your_name>` to the end of this URL and press Enter to run the request. The browser should display a response message that echoes back your query string value.
+1. Paste the function URL into your browser's address bar. Add the query string value `?name=<your_name>` to the end of this URL and press Enter to run the request. The browser displays a response message that echoes back your query string value.
- If the request URL included an [access key](functions-bindings-http-webhook-trigger.md#authorization-keys) (`?code=...`), it means you choose **Function** instead of **Anonymous** access level when creating the function. In this case, you should instead append `&name=<your_name>`.
+ If the request URL included an [access key](functions-bindings-http-webhook-trigger.md#authorization-keys) (`?code=...`), it means you selected **Function** instead of **Anonymous** access level when creating the function. In this case, you must instead append `&name=<your_name>`.
-1. When your function runs, trace information is written to the logs. To see the trace output, return to the **Code + Test** page in the portal and expand the **Logs** arrow at the bottom of the page. Call your function again to see trace output written to the logs.
+1. When your function runs, trace information is written to the logs. To see the trace output, return to the **Code + Test** page in the portal and expand the **Logs** arrow at the bottom of the page. Call your function again to see the trace output written to the logs.
- :::image type="content" source="media/functions-create-first-azure-function/function-view-logs.png" alt-text="Functions log viewer in the Azure portal":::
+ :::image type="content" source="media/functions-create-first-azure-function/function-app-log-view.png" alt-text="Screenshot of Functions log viewer in the Azure portal.":::
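The same check can also be made from a terminal; a sketch using curl, where the URL and key are placeholders rather than real values:

```console
# Anonymous authorization level: no key needed.
curl "https://<APP_NAME>.azurewebsites.net/api/HttpExample?name=Azure"

# Function authorization level: include the access key as the code parameter.
curl "https://<APP_NAME>.azurewebsites.net/api/HttpExample?code=<FUNCTION_KEY>&name=Azure"
```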
## Clean up resources
azure-functions Functions Create Function Linux Custom Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-function-linux-custom-image.md
Title: Create Azure Functions on Linux using a custom image
description: Learn how to create Azure Functions running on a custom Linux image.
Previously updated : 01/20/2021
Last updated : 06/10/2022
zone_pivot_groups: programming-languages-set-functions-full
In this tutorial, you create and deploy your code to Azure Functions as a custom Docker container using a Linux base image. You typically use a custom image when your functions require a specific language version or have a specific dependency or configuration that isn't provided by the built-in image.

::: zone pivot="programming-language-other"
-Azure Functions supports any language or runtime using [custom handlers](functions-custom-handlers.md). For some languages, such as the R programming language used in this tutorial, you need to install the runtime or additional libraries as dependencies that require the use of a custom container.
+Azure Functions supports any language or runtime using [custom handlers](functions-custom-handlers.md). For some languages, such as the R programming language used in this tutorial, you need to install the runtime or more libraries as dependencies that require the use of a custom container.
::: zone-end

Deploying your function code in a custom Linux container requires a [Premium plan](functions-premium-plan.md) or a [Dedicated (App Service) plan](dedicated-plan.md) for hosting. Completing this tutorial incurs costs of a few US dollars in your Azure account, which you can minimize by [cleaning up resources](#clean-up-resources) when you're done.
-You can also use a default Azure App Service container as described on [Create your first function hosted on Linux](./create-first-function-cli-csharp.md?pivots=programming-language-python). Supported base images for Azure Functions are found in the [Azure Functions base images repo](https://hub.docker.com/_/microsoft-azure-functions-base).
+You can also use a default Azure App Service container as described in [Create your first function hosted on Linux](./create-first-function-cli-csharp.md?pivots=programming-language-python). Supported base images for Azure Functions are found in the [Azure Functions base images repo](https://hub.docker.com/_/microsoft-azure-functions-base).
In this tutorial, you learn how to:
In this tutorial, you learn how to:
> * Create a function app and Dockerfile using the Azure Functions Core Tools.
> * Build a custom image using Docker.
> * Publish a custom image to a container registry.
-> * Create supporting resources in Azure for the function app
+> * Create supporting resources in Azure for the function app.
> * Deploy a function app from Docker Hub.
> * Add application settings to the function app.
> * Enable continuous deployment.
> * Enable SSH connections to the container.
-> * Add a Queue storage output binding.
+> * Add a Queue storage output binding.
::: zone-end

::: zone pivot="programming-language-other"

> [!div class="checklist"]
> * Create a function app and Dockerfile using the Azure Functions Core Tools.
> * Build a custom image using Docker.
> * Publish a custom image to a container registry.
-> * Create supporting resources in Azure for the function app
+> * Create supporting resources in Azure for the function app.
> * Deploy a function app from Docker Hub.
> * Add application settings to the function app.
> * Enable continuous deployment.
> * Enable SSH connections to the container.

::: zone-end
-You can follow this tutorial on any computer running Windows, macOS, or Linux.
+You can follow this tutorial on any computer running Windows, macOS, or Linux.
[!INCLUDE [functions-requirements-cli](../../includes/functions-requirements-cli.md)]

<!-- Requirements specific to Docker -->
+You also need Docker and a Docker ID:
+
+ + [Docker](https://docs.docker.com/install/)
+ + A [Docker ID](https://hub.docker.com/signup)
You can follow this tutorial on any computer running Windows, macOS, or Linux.
## Create and test the local functions project

::: zone pivot="programming-language-csharp,programming-language-javascript,programming-language-typescript,programming-language-powershell,programming-language-python"
-In a terminal or command prompt, run the following command for your chosen language to create a function app project in the current folder.
+In a terminal or command prompt, run the following command for your chosen language to create a function app project in the current folder:
::: zone-end ::: zone pivot="programming-language-csharp"
func init --worker-runtime node --language typescript --docker
``` ::: zone-end ::: zone pivot="programming-language-java"
-In an empty folder, run the following command to generate the Functions project from a [Maven archetype](https://maven.apache.org/guides/introduction/introduction-to-archetypes.html).
+In an empty folder, run the following command to generate the Functions project from a [Maven archetype](https://maven.apache.org/guides/introduction/introduction-to-archetypes.html):
# [Bash](#tab/bash) ```bash
The `-DjavaVersion` parameter tells the Functions runtime which version of Java
> [!IMPORTANT] > The `JAVA_HOME` environment variable must be set to the install location of the correct version of the JDK to complete this article.
-Maven asks you for values needed to finish generating the project on deployment.
-Provide the following values when prompted:
+Maven asks you for values needed to finish generating the project on deployment.
+Follow the prompts and provide the following information:
| Prompt | Value | Description |
| -- | -- | -- |
| **groupId** | `com.fabrikam` | A value that uniquely identifies your project across all projects, following the [package naming rules](https://docs.oracle.com/javase/specs/jls/se6/html/packages.html#7.7) for Java. |
| **artifactId** | `fabrikam-functions` | A value that is the name of the jar, without a version number. |
-| **version** | `1.0-SNAPSHOT` | Choose the default value. |
+| **version** | `1.0-SNAPSHOT` | Select the default value. |
| **package** | `com.fabrikam.functions` | A value that is the Java package for the generated function code. Use the default. |

Type `Y` or press Enter to confirm.
-Maven creates the project files in a new folder with a name of _artifactId_, which in this example is `fabrikam-functions`.
+Maven creates the project files in a new folder named _artifactId_, which in this example is `fabrikam-functions`.
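The elided Bash tab corresponds to an archetype command along these lines (a sketch; the `-DjavaVersion` value shown here is illustrative):

```bash
mvn archetype:generate \
  "-DarchetypeGroupId=com.microsoft.azure" \
  "-DarchetypeArtifactId=azure-functions-archetype" \
  "-DjavaVersion=8"
```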
::: zone-end ::: zone pivot="programming-language-other"
func init --worker-runtime custom --docker
``` ::: zone-end
-The `--docker` option generates a `Dockerfile` for the project, which defines a suitable custom container for use with Azure Functions and the selected runtime.
+The `--docker` option generates a *Dockerfile* for the project, which defines a suitable custom container for use with Azure Functions and the selected runtime.
::: zone pivot="programming-language-java" Navigate into the project folder:
COPY --from=mcr.microsoft.com/dotnet/core/sdk:3.1 /usr/share/dotnet /usr/share/d
``` ::: zone-end
-Add a function to your project by using the following command, where the `--name` argument is the unique name of your function and the `--template` argument specifies the function's trigger. `func new` creates a C# code file in your project.
+Use the following command to add a function to your project, where the `--name` argument is the unique name of your function and the `--template` argument specifies the function's trigger. `func new` creates a C# code file in your project.
```console func new --name HttpExample --template "HTTP trigger" --authlevel anonymous ``` ::: zone-end
-Add a function to your project by using the following command, where the `--name` argument is the unique name of your function and the `--template` argument specifies the function's trigger. `func new` creates a subfolder matching the function name that contains a configuration file named *function.json*.
+Use the following command to add a function to your project, where the `--name` argument is the unique name of your function and the `--template` argument specifies the function's trigger. `func new` creates a subfolder matching the function name that contains a configuration file named *function.json*.
```console func new --name HttpExample --template "HTTP trigger" --authlevel anonymous ``` ::: zone-end
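For the script-based languages, the generated *function.json* for this template looks roughly like the following sketch (fields abridged; exact contents depend on the language and template version):

```json
{
  "bindings": [
    {
      "authLevel": "anonymous",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": [ "get", "post" ]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}
```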
-In a text editor, create a file in the project folder named *handler.R*. Add the following as its content.
+In a text editor, create a file in the project folder named *handler.R*. Add the following code as its content:
```r library(httpuv)
In *host.json*, modify the `customHandler` section to configure the custom handl
``` ::: zone-end
-To test the function locally, start the local Azure Functions runtime host in the root of the project folder:
+To test the function locally, start the local Azure Functions runtime host in the root of the project folder.
::: zone pivot="programming-language-csharp"

```console
func start
```

::: zone-end

```console
func start
```
mvn azure-functions:run
R -e "install.packages('httpuv', repos='http://cran.rstudio.com/')" func start ```
-Once you see the `HttpExample` endpoint appear in the output, navigate to `http://localhost:7071/api/HttpExample?name=Functions`. The browser should display a "hello" message that echoes back `Functions`, the value supplied to the `name` query parameter.
+After you see the `HttpExample` endpoint appear in the output, navigate to `http://localhost:7071/api/HttpExample?name=Functions`. The browser displays a "hello" message that echoes back `Functions`, the value supplied to the `name` query parameter.
-Use **Ctrl**-**C** to stop the host.
+Press **Ctrl**+**C** to stop the host.
## Build the container image and test locally

::: zone pivot="programming-language-csharp,programming-language-javascript,programming-language-powershell,programming-language-python,programming-language-java,programming-language-typescript"
-(Optional) Examine the *Dockerfile* in the root of the project folder. The Dockerfile describes the required environment to run the function app on Linux. The complete list of supported base images for Azure Functions can be found in the [Azure Functions base image page](https://hub.docker.com/_/microsoft-azure-functions-base).
+(Optional) Examine the *Dockerfile* in the root of the project folder. The *Dockerfile* describes the required environment to run the function app on Linux. The complete list of supported base images for Azure Functions can be found in the [Azure Functions base image page](https://hub.docker.com/_/microsoft-azure-functions-base).
::: zone-end ::: zone pivot="programming-language-other"
-Examine the *Dockerfile* in the root of the project folder. The Dockerfile describes the required environment to run the function app on Linux. Custom handler applications use the `mcr.microsoft.com/azure-functions/dotnet:3.0-appservice` image as its base.
+Examine the *Dockerfile* in the root of the project folder. The *Dockerfile* describes the required environment to run the function app on Linux. Custom handler applications use the `mcr.microsoft.com/azure-functions/dotnet:3.0-appservice` image as its base.
-Modify the *Dockerfile* to install R. Replace the contents of *Dockerfile* with the following.
+Modify the *Dockerfile* to install R. Replace the contents of the *Dockerfile* with the following code:
```dockerfile FROM mcr.microsoft.com/azure-functions/dotnet:3.0-appservice
COPY . /home/site/wwwroot
``` ::: zone-end
-In the root project folder, run the [docker build](https://docs.docker.com/engine/reference/commandline/build/) command, and provide a name, `azurefunctionsimage`, and tag, `v1.0.0`. Replace `<DOCKER_ID>` with your Docker Hub account ID. This command builds the Docker image for the container.
+In the root project folder, run the [docker build](https://docs.docker.com/engine/reference/commandline/build/) command, providing the name `azurefunctionsimage` and the tag `v1.0.0`. Replace `<DOCKER_ID>` with your Docker Hub account ID. This command builds the Docker image for the container.
```console docker build --tag <DOCKER_ID>/azurefunctionsimage:v1.0.0 . ``` When the command completes, you can run the new container locally.
-
-To test the build, run the image in a local container using the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command, replacing again `<DOCKER_ID` with your Docker ID and adding the ports argument, `-p 8080:80`:
+
+To test the build, run the image in a local container using the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command, replacing `<docker_id>` with your Docker Hub account ID and adding the ports argument `-p 8080:80`:
```console docker run -p 8080:80 -it <docker_id>/azurefunctionsimage:v1.0.0
docker run -p 8080:80 -it <docker_id>/azurefunctionsimage:v1.0.0
::: zone pivot="programming-language-csharp" # [In-process](#tab/in-process)
-After the image starts in the local container, browse to `http://localhost:8080/api/HttpExample?name=Functions`, which should display the same "hello" message as before. Because the HTTP triggered function you created uses anonymous authorization, you can call the function running in the container without having to obtain an access key. To learn more, see [authorization keys].
+After the image starts in the local container, browse to `http://localhost:8080/api/HttpExample?name=Functions`, which displays the same "hello" message as before. Because the HTTP triggered function you created uses anonymous authorization, you can call the function running in the container without having to obtain an access key. For more information, see [authorization keys].
# [Isolated process](#tab/isolated-process)
-After the image starts in the local container, browse to `http://localhost:8080/api/HttpExample`, which should display the same greeting message as before. Because the HTTP triggered function you created uses anonymous authorization, you can call the function running in the container without having to obtain an access key. To learn more, see [authorization keys].
+After the image starts in the local container, browse to `http://localhost:8080/api/HttpExample`, which displays the same greeting message as before. Because the HTTP triggered function you created uses anonymous authorization, you can call the function running in the container without having to obtain an access key. For more information, see [authorization keys].
::: zone-end
-After the image starts in the local container, browse to `http://localhost:8080/api/HttpExample?name=Functions`, which should display the same "hello" message as before. Because the HTTP triggered function you created uses anonymous authorization, you can call the function running in the container without having to obtain an access key. To learn more, see [authorization keys].
+After the image starts in the local container, browse to `http://localhost:8080/api/HttpExample?name=Functions`, which displays the same "hello" message as before. Because the HTTP triggered function you created uses anonymous authorization, you can call the function running in the container without having to obtain an access key. For more information, see [authorization keys].
::: zone-end
-After you've verified the function app in the container, stop docker with **Ctrl**+**C**.
+After verifying the function app in the container, press **Ctrl**+**C** to stop Docker.
## Push the image to Docker Hub

Docker Hub is a container registry that hosts images and provides image and container services. To share your image, which includes deploying to Azure, you must push it to a registry.
-1. If you haven't already signed in to Docker, do so with the [docker login](https://docs.docker.com/engine/reference/commandline/login/) command, replacing `<docker_id>` with your Docker ID. This command prompts you for your username and password. A "Login Succeeded" message confirms that you're signed in.
+1. If you haven't already signed in to Docker, do so with the [docker login](https://docs.docker.com/engine/reference/commandline/login/) command, replacing `<docker_id>` with your Docker Hub account ID. This command prompts you for your username and password. A "Login Succeeded" message confirms that you're signed in.
```console docker login ```
-
-1. After you've signed in, push the image to Docker Hub by using the [docker push](https://docs.docker.com/engine/reference/commandline/push/) command, again replacing `<docker_id>` with your Docker ID.
+
+1. After you've signed in, push the image to Docker Hub by using the [docker push](https://docs.docker.com/engine/reference/commandline/push/) command, again replacing `<docker_id>` with your Docker Hub account ID.
```console docker push <docker_id>/azurefunctionsimage:v1.0.0 ```
-1. Depending on your network speed, pushing the image the first time might take a few minutes (pushing subsequent changes is much faster). While you're waiting, you can proceed to the next section and create Azure resources in another terminal.
+1. Depending on your network speed, pushing the image for the first time might take a few minutes (pushing subsequent changes is much faster). While you're waiting, you can proceed to the next section and create Azure resources in another terminal.
## Create supporting Azure resources for your function

Before you can deploy your function code to Azure, you need to create three resources:

-- A [resource group](../azure-resource-manager/management/overview.md), which is a logical container for related resources.
-- A [Storage account](../storage/common/storage-account-create.md), which is used to maintain state and other information about your functions.
-- A function app, which provides the environment for executing your function code. A function app maps to your local function project and lets you group functions as a logical unit for easier management, deployment, and sharing of resources.
+* A [resource group](../azure-resource-manager/management/overview.md), which is a logical container for related resources.
+* A [Storage account](../storage/common/storage-account-create.md), which is used to maintain state and other information about your functions.
+* A function app, which provides the environment for executing your function code. A function app maps to your local function project and lets you group functions as a logical unit for easier management, deployment, and sharing of resources.
Use the following commands to create these items. Both Azure CLI and PowerShell are supported.
-1. If you haven't done so already, sign in to Azure:
+1. If you haven't done so already, sign in to Azure.
# [Azure CLI](#tab/azure-cli) ```azurecli
Use the following commands to create these items. Both Azure CLI and PowerShell
-1. Create a resource group named `AzureFunctionsContainers-rg` in your chosen region:
+1. Create a resource group named `AzureFunctionsContainers-rg` in your chosen region.
# [Azure CLI](#tab/azure-cli)
Use the following commands to create these items. Both Azure CLI and PowerShell
-1. Create a general-purpose storage account in your resource group and region:
+1. Create a general-purpose storage account in your resource group and region.
# [Azure CLI](#tab/azure-cli)
Use the following commands to create these items. Both Azure CLI and PowerShell
- In the previous example, replace `<STORAGE_NAME>` with a name that is appropriate to you and unique in Azure Storage. Names must contain three to 24 characters numbers and lowercase letters only. `Standard_LRS` specifies a general-purpose account, which is [supported by Functions](storage-considerations.md#storage-account-requirements).
+   In the previous example, replace `<STORAGE_NAME>` with a name that is appropriate to you and unique in Azure Storage. Storage account names must be 3 to 24 characters long and can contain numbers and lowercase letters only. `Standard_LRS` specifies a general-purpose account [supported by Functions](storage-considerations.md#storage-account-requirements).
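The elided Azure CLI tab corresponds to a command along these lines (a sketch; placeholders are the same ones used in the surrounding text):

```azurecli
az storage account create --name <STORAGE_NAME> --location <REGION> --resource-group AzureFunctionsContainers-rg --sku Standard_LRS
```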
1. Use the following command to create a Premium plan for Azure Functions named `myPremiumPlan` in the **Elastic Premium 1** pricing tier (`--sku EP1`), in your `<REGION>`, and in a Linux container (`--is-linux`).
Use the following commands to create these items. Both Azure CLI and PowerShell
New-AzFunctionAppPlan -ResourceGroupName AzureFunctionsContainers-rg -Name MyPremiumPlan -Location <REGION> -Sku EP1 -WorkerType Linux ```
- We use the Premium plan here, which can scale as needed. To learn more about hosting, see [Azure Functions hosting plans comparison](functions-scale.md). To calculate costs, see the [Functions pricing page](https://azure.microsoft.com/pricing/details/functions/).
+ We use the Premium plan here, which can scale as needed. For more information about hosting, see [Azure Functions hosting plans comparison](functions-scale.md). For more information on how to calculate costs, see the [Functions pricing page](https://azure.microsoft.com/pricing/details/functions/).
The command also provisions an associated Azure Application Insights instance in the same resource group, with which you can monitor your function app and view logs. For more information, see [Monitor Azure Functions](functions-monitoring.md). The instance incurs no costs until you activate it.
Use the following commands to create these items. Both Azure CLI and PowerShell
A function app on Azure manages the execution of your functions in your hosting plan. In this section, you use the Azure resources from the previous section to create a function app from an image on Docker Hub and configure it with a connection string to Azure Storage.
-1. Create a functions app using the following command:
+1. Create a function app using the following command:
# [Azure CLI](#tab/azure-cli) ```azurecli az functionapp create --name <APP_NAME> --storage-account <STORAGE_NAME> --resource-group AzureFunctionsContainers-rg --plan myPremiumPlan --deployment-container-image-name <DOCKER_ID>/azurefunctionsimage:v1.0.0 ```
- In the [az functionapp create](/cli/azure/functionapp#az-functionapp-create) command, the *deployment-container-image-name* parameter specifies the image to use for the function app. You can use the [az functionapp config container show](/cli/azure/functionapp/config/container#az-functionapp-config-container-show) command to view information about the image used for deployment. You can also use the [az functionapp config container set](/cli/azure/functionapp/config/container#az-functionapp-config-container-set) command to deploy from a different image. NOTE: If you are using a custom container registry then the *deployment-container-image-name* parameter will refer to the registry URL.
+ In the [az functionapp create](/cli/azure/functionapp#az-functionapp-create) command, the *deployment-container-image-name* parameter specifies the image to use for the function app. You can use the [az functionapp config container show](/cli/azure/functionapp/config/container#az-functionapp-config-container-show) command to view information about the image used for deployment. You can also use the [az functionapp config container set](/cli/azure/functionapp/config/container#az-functionapp-config-container-set) command to deploy from a different image.
+
+ > [!NOTE]
+ > If you're using a custom container registry, then the *deployment-container-image-name* parameter will refer to the registry URL.
# [Azure PowerShell](#tab/azure-powershell) ```azurepowershell
A function app on Azure manages the execution of your functions in your hosting
```
- In this example, replace `<STORAGE_NAME>` with the name you used in the previous section for the storage account. Also replace `<APP_NAME>` with a globally unique name appropriate to you, and `<DOCKER_ID>` with your DockerHub ID. When deploying from a custom container registry, use the `deployment-container-image-name` parameter to indicate the URL of the registry.
+ In this example, replace `<STORAGE_NAME>` with the name you used in the previous section for the storage account. Also, replace `<APP_NAME>` with a globally unique name appropriate to you, and `<DOCKER_ID>` with your Docker Hub account ID. When you're deploying from a custom container registry, use the `deployment-container-image-name` parameter to indicate the URL of the registry.
> [!TIP]
- > You can use the [`DisableColor` setting](functions-host-json.md#console) in the host.json file to prevent ANSI control characters from being written to the container logs.
+ > You can use the [`DisableColor` setting](functions-host-json.md#console) in the *host.json* file to prevent ANSI control characters from being written to the container logs.
1. Use the following command to get the connection string for the storage account you created:
A function app on Azure manages the execution of your functions in your hosting
az storage account show-connection-string --resource-group AzureFunctionsContainers-rg --name <STORAGE_NAME> --query connectionString --output tsv ```
- The connection string for the storage account is returned by using the [az storage account show-connection-string](/cli/azure/storage/account) command.
+ The connection string for the storage account is returned by using the [az storage account show-connection-string](/cli/azure/storage/account) command.
# [Azure PowerShell](#tab/azure-powershell) ```azurepowershell
A function app on Azure manages the execution of your functions in your hosting
- Replace `<STORAGE_NAME>` with the name of the storage account you created previously.
+ Replace `<STORAGE_NAME>` with the name of the storage account you created earlier.
-1. Add this setting to the function app by using the following command:
+1. Use the following command to add the setting to the function app:
# [Azure CLI](#tab/azure-cli) ```azurecli az functionapp config appsettings set --name <APP_NAME> --resource-group AzureFunctionsContainers-rg --settings AzureWebJobsStorage=<CONNECTION_STRING> ```
- The [az functionapp config appsettings set](/cli/azure/functionapp/config/appsettings#az-functionapp-config-ppsettings-set) command creates the setting.
+   The [az functionapp config appsettings set](/cli/azure/functionapp/config/appsettings#az-functionapp-config-appsettings-set) command creates the setting.
# [Azure PowerShell](#tab/azure-powershell) ```azurepowershell
A function app on Azure manages the execution of your functions in your hosting
In this command, replace `<APP_NAME>` with the name of your function app and `<CONNECTION_STRING>` with the connection string from the previous step. The connection should be a long encoded string that begins with `DefaultEndpointProtocol=`. - 1. The function can now use this connection string to access the storage account.
-> [!NOTE]
-> If you publish your custom image to a private container registry, you should use environment variables in the Dockerfile for the connection string instead. For more information, see the [ENV instruction](https://docs.docker.com/engine/reference/builder/#env). You should also set the variables `DOCKER_REGISTRY_SERVER_USERNAME` and `DOCKER_REGISTRY_SERVER_PASSWORD`. To use the values, then, you must rebuild the image, push the image to the registry, and then restart the function app on Azure.
+> [!NOTE]
+> If you publish your custom image to a private container registry, you must use environment variables in the *Dockerfile* for the connection string instead. For more information, see the [ENV instruction](https://docs.docker.com/engine/reference/builder/#env). You must also set the `DOCKER_REGISTRY_SERVER_USERNAME` and `DOCKER_REGISTRY_SERVER_PASSWORD` variables. To use the values, you must rebuild the image, push the image to the registry, and then restart the function app on Azure.
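As a sketch of the `ENV` approach the note describes (placeholder values only, not a working connection string):

```dockerfile
# Hypothetical placeholder values; bake in the real connection string at build time.
ENV AzureWebJobsStorage="DefaultEndpointsProtocol=https;AccountName=<STORAGE_NAME>;AccountKey=<KEY>"
```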
## Verify your functions on Azure

With the image deployed to your function app in Azure, you can now invoke the function as before through HTTP requests.
-In your browser, navigate to a URL like the following:
+In your browser, navigate to the following URL:
::: zone pivot="programming-language-java,programming-language-javascript,programming-language-typescript,programming-language-powershell,programming-language-python" `https://<APP_NAME>.azurewebsites.net/api/HttpExample?name=Functions`
In your browser, navigate to a URL like the following:
:::zone-end
-Replace `<APP_NAME>` with the name of your function app. When you navigate to this URL, the browser should display similar output as when you ran the function locally.
+Replace `<APP_NAME>` with the name of your function app. When you navigate to this URL, the browser displays output similar to when you ran the function locally.
## Enable continuous deployment to Azure

You can enable Azure Functions to automatically update your deployment of an image whenever you update the image in the registry.
-1. Enable continuous deployment and get the webhook URL by using the following commands:
+1. Use the following command to enable continuous deployment and to get the webhook URL:
# [Azure CLI](#tab/azure-cli) ```azurecli
You can enable Azure Functions to automatically update your deployment of an ima
Get-AzWebAppContainerContinuousDeploymentUrl -Name <APP_NAME> -ResourceGroupName AzureFunctionsContainers-rg ```
- The `DOCKER_ENABLE_CI` application setting controls whether continuous deployment is enabled from the container repository. The [Get-AzWebAppContainerContinuousDeploymentUrl](/powershell/module/az.websites/get-azwebappcontainercontinuousdeploymenturl) cmdlet returns the URL of the deployment webhook.
+ The `DOCKER_ENABLE_CI` application setting controls whether continuous deployment is enabled from the container repository. The [Get-AzWebAppContainerContinuousDeploymentUrl](/powershell/module/az.websites/get-azwebappcontainercontinuousdeploymenturl) cmdlet returns the URL of the deployment webhook.
- As before, replace `<APP_NAME>` with your function app name.
+ As before, replace `<APP_NAME>` with your function app name.
1. Copy the deployment webhook URL to the clipboard.
-1. Open [Docker Hub](https://hub.docker.com/), sign in, and select **Repositories** on the nav bar. Locate and select image, select the **Webhooks** tab, specify a **Webhook name**, paste your URL in **Webhook URL**, and then select **Create**:
+1. Open [Docker Hub](https://hub.docker.com/), sign in, and select **Repositories** on the navigation bar. Locate and select the image, select the **Webhooks** tab, specify a **Webhook name**, paste your URL in **Webhook URL**, and then select **Create**.
- ![Add the webhook in your DockerHub repo](./media/functions-create-function-linux-custom-image/dockerhub-set-continuous-webhook.png)
+ :::image type="content" source="./media/functions-create-function-linux-custom-image/dockerhub-set-continuous-webhook.png" alt-text="Screenshot showing how to add the webhook in your Docker Hub window.":::
1. With the webhook set, Azure Functions redeploys your image whenever you update it in Docker Hub. ## Enable SSH connections
-SSH enables secure communication between a container and a client. With SSH enabled, you can connect to your container using App Service Advanced Tools (Kudu). To make it easy to connect to your container using SSH, Azure Functions provides a base image that has SSH already enabled. You need only edit your Dockerfile, then rebuild and redeploy the image. You can then connect to the container through the Advanced Tools (Kudu)
+SSH enables secure communication between a container and a client. With SSH enabled, you can connect to your container using App Service Advanced Tools (Kudu). For easy connection to your container using SSH, Azure Functions provides a base image that has SSH already enabled. You only need to edit your *Dockerfile*, then rebuild and redeploy the image. You can then connect to the container through the Advanced Tools (Kudu).
-1. In your Dockerfile, append the string `-appservice` to the base image in your `FROM` instruction:
+1. In your *Dockerfile*, append the string `-appservice` to the base image in your `FROM` instruction.
::: zone pivot="programming-language-csharp" ```Dockerfile
SSH enables secure communication between a container and a client. With SSH enab
``` ::: zone-end
-1. Rebuild the image by using the `docker build` command again, replacing `<docker_id>` with your Docker ID:
+1. Rebuild the image by using the `docker build` command again, replacing `<docker_id>` with your Docker Hub account ID:
```console docker build --tag <docker_id>/azurefunctionsimage:v1.0.0 . ```
-
-1. Push the updated image to Docker Hub, which should take considerably less time than the first push only the updated segments of the image need to be uploaded.
+
+1. Push the updated image to Docker Hub, which should take considerably less time than the first push. Only the updated segments of the image need to be uploaded now.
```console docker push <docker_id>/azurefunctionsimage:v1.0.0
SSH enables secure communication between a container and a client. With SSH enab
1. Azure Functions automatically redeploys the image to your functions app; the process takes place in less than a minute.
-1. In a browser, open `https://<app_name>.scm.azurewebsites.net/`, replacing `<app_name>` with your unique name. This URL is the Advanced Tools (Kudu) endpoint for your function app container.
+1. In a browser, open `https://<app_name>.scm.azurewebsites.net/` and replace `<app_name>` with your unique name. This URL is the Advanced Tools (Kudu) endpoint for your function app container.
-1. Sign in to your Azure account, and then select the **SSH** to establish a connection with the container. Connecting may take a few moments if Azure is still updating the container image.
+1. Sign in to your Azure account, and then select **SSH** to establish a connection with the container. Connecting might take a few moments if Azure is still updating the container image.
-1. After a connection is established with your container, run the `top` command to view the currently running processes.
+1. After a connection is established with your container, run the `top` command to view the currently running processes.
- ![Linux top command running in an SSH session](media/functions-create-function-linux-custom-image/linux-custom-kudu-ssh-top.png)
+ :::image type="content" source="media/functions-create-function-linux-custom-image/linux-custom-kudu-ssh-top.png" alt-text="Screenshot that shows Linux top command running in an SSH session.":::
::: zone pivot="programming-language-csharp,programming-language-javascript,programming-language-typescript,programming-language-powershell,programming-language-python,programming-language-java" ## Write to Azure Queue Storage
-Azure Functions lets you connect your functions to other Azure services and resources without having to write your own integration code. These *bindings*, which represent both input and output, are declared within the function definition. Data from bindings is provided to the function as parameters. A *trigger* is a special type of input binding. Although a function has only one trigger, it can have multiple input and output bindings. To learn more, see [Azure Functions triggers and bindings concepts](functions-triggers-bindings.md).
+Azure Functions lets you connect your functions to other Azure services and resources without having to write your own integration code. These *bindings*, which represent both input and output, are declared within the function definition. Data from bindings is provided to the function as parameters. A *trigger* is a special type of input binding. Although a function has only one trigger, it can have multiple input and output bindings. For more information, see [Azure Functions triggers and bindings concepts](functions-triggers-bindings.md).
This section shows you how to integrate your function with an Azure Queue Storage. The output binding that you add to this function writes data from an HTTP request to a message in the queue.
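As a conceptual sketch of the output binding the tutorial adds, the entry in *function.json* is a small JSON object. The following Python representation is illustrative only (the parameter name `msg` is an assumption; `outqueue` is the queue name used later in this tutorial):

```python
import json

# Hypothetical sketch of a queue output binding definition as it
# would appear in function.json. The "name" value is an assumption
# for illustration; "outqueue" matches the queue this tutorial uses.
queue_output_binding = {
    "type": "queue",                       # write to Azure Queue Storage
    "direction": "out",                    # output binding: function -> queue
    "name": "msg",                         # parameter name exposed to the function
    "queueName": "outqueue",               # target queue
    "connection": "AzureWebJobsStorage",   # app setting with the connection string
}

print(json.dumps(queue_output_binding, indent=2))
```

The runtime uses this declaration to hand your function a parameter it can write messages to, so no queue client code is needed in the function body.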
This section shows you how to integrate your function with an Azure Queue Storag
With the queue binding defined, you can now update your function to write messages to the queue using the binding parameter. ::: zone-end [!INCLUDE [functions-add-output-binding-python](../../includes/functions-add-output-binding-python.md)] ::: zone-end
With the queue binding defined, you can now update your function to write messag
::: zone-end ::: zone pivot="programming-language-csharp,programming-language-javascript,programming-language-typescript,programming-language-powershell,programming-language-python,programming-language-java"
-### Update the image in the registry
+## Update the image in the registry
-1. In the root folder, run `docker build` again, and this time update the version in the tag to `v1.0.1`. As before, replace `<docker_id>` with your Docker Hub account ID:
+1. In the root folder, run `docker build` again, and this time update the version in the tag to `v1.0.1`. As before, replace `<docker_id>` with your Docker Hub account ID.
```console docker build --tag <docker_id>/azurefunctionsimage:v1.0.1 . ```
-
-1. Push the updated image back to the repository with `docker push`:
+
+1. Push the updated image back to the repository with `docker push`.
```console docker push <docker_id>/azurefunctionsimage:v1.0.1
With the queue binding defined, you can now update your function to write messag
## View the message in the Azure Storage queue
-In a browser, use the same URL as before to invoke your function. The browser should display the same response as before, because you didn't modify that part of the function code. The added code, however, wrote a message using the `name` URL parameter to the `outqueue` storage queue.
+In a browser, use the same URL as before to invoke your function. The browser should display the same response as before, because you didn't modify that part of the function code. The added code, however, wrote a message using the `name` URL parameter to the `outqueue` storage queue.
[!INCLUDE [functions-add-output-binding-view-queue-cli](../../includes/functions-add-output-binding-view-queue-cli.md)]
In a browser, use the same URL as before to invoke your function. The browser sh
If you want to continue working with Azure Functions using the resources you created in this tutorial, you can leave all those resources in place. Because you created a Premium plan for Azure Functions, you'll incur one or two USD per day in ongoing costs.
-To avoid ongoing costs, delete the `AzureFunctionsContainer-rg` resource group to clean up all the resources in that group:
+To avoid ongoing costs, delete the `AzureFunctionsContainers-rg` resource group to clean up all the resources in that group:
```azurecli
-az group delete --name AzureFunctionsContainer-rg
+az group delete --name AzureFunctionsContainers-rg
``` ## Next steps
azure-functions Functions Create Scheduled Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-scheduled-function.md
Title: Create a function in Azure that runs on a schedule description: Learn how to use the Azure portal to create a function that runs based on a schedule that you define.- ms.assetid: ba50ee47-58e0-4972-b67b-828f2dc48701 Previously updated : 04/16/2020- Last updated : 06/10/2022+ # Create a function in the Azure portal that runs on a schedule
Learn how to use the Azure portal to create a function that runs [serverless](ht
To complete this tutorial:
-+ If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+Ensure that you have an Azure subscription. If you don't have one, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
## Create a function app
To complete this tutorial:
Your new function app is ready to use. Next, you'll create a function in the new function app. <a name="create-function"></a> ## Create a timer triggered function
-1. In your function app, select **Functions**, and then select **+ Add**
+1. In your function app, select **Functions**, and then select **+ Create**.
+
+ :::image type="content" source="./media/functions-create-scheduled-function/function-create-function.png" alt-text="Screenshot of adding a function in the Azure portal." border="true":::
- :::image type="content" source="./media/functions-create-scheduled-function/function-add-function.png" alt-text="Add a function in the Azure portal." border="true":::
+1. Select the **Timer trigger** template.
-1. Select the **Timer trigger** template.
+ :::image type="content" source="./media/functions-create-scheduled-function/function-select-timer-trigger-template.png" alt-text="Screenshot of selecting the Timer trigger template in the Azure portal." border="true":::
- :::image type="content" source="./media/functions-create-scheduled-function/function-select-timer-trigger.png" alt-text="Select the timer trigger in the Azure portal." border="true":::
+1. Configure the new trigger with the settings as specified in the table below the image, and then select **Create**.
-1. Configure the new trigger with the settings as specified in the table below the image, and then select **Create Function**.
+ :::image type="content" source="./media/functions-create-scheduled-function/function-configure-timer-trigger-new.png" alt-text="Screenshot that shows the New Function page with the Timer Trigger template selected." border="true":::
- :::image type="content" source="./media/functions-create-scheduled-function/function-configure-timer-trigger.png" alt-text="Screenshot shows the New Function page with the Timer Trigger template selected." border="true":::
-
| Setting | Suggested value | Description | |||| | **Name** | Default | Defines the name of your timer triggered function. |
Your new function app is ready to use. Next, you'll create a function in the new
## Test the function
-1. In your function, select **Code + Test** and expand the logs.
+1. In your function, select **Code + Test** and expand the **Logs**.
- :::image type="content" source="./media/functions-create-scheduled-function/function-test-timer-trigger.png" alt-text="Test the timer trigger in the Azure portal." border="true":::
+ :::image type="content" source="./media/functions-create-scheduled-function/function-code-test-timer-trigger.png" alt-text="Screenshot of the Test the timer trigger page in the Azure portal." border="true":::
1. Verify execution by viewing the information written to the logs.
- :::image type="content" source="./media/functions-create-scheduled-function/function-view-timer-logs.png" alt-text="View the timer trigger in the Azure portal." border="true":::
+ :::image type="content" source="./media/functions-create-scheduled-function/function-timer-logs-view.png" alt-text="Screenshot of viewing the timer trigger logs in the Azure portal." border="true":::
Now, you change the function's schedule so that it runs once every hour instead of every minute. ## Update the timer schedule
-1. In your function, select **Integration**. Here, you define input and output bindings for your function and also set the schedule.
+1. In your function, select **Integration**. Here, you define the input and output bindings for your function and also set the schedule.
1. Select **Timer (myTimer)**.
- :::image type="content" source="./media/functions-create-scheduled-function/function-update-timer-schedule.png" alt-text="Update the timer schedule in the Azure portal." border="true":::
+ :::image type="content" source="./media/functions-create-scheduled-function/function-update-timer-schedule-new.png" alt-text="Screenshot of updating the timer schedule in the Azure portal." border="true":::
1. Update the **Schedule** value to `0 0 */1 * * *`, and then select **Save**.
- :::image type="content" source="./media/functions-create-scheduled-function/function-edit-timer-schedule.png" alt-text="Update function timer schedule in the Azure portal." border="true":::
+ :::image type="content" source="./media/functions-create-scheduled-function/function-edit-timer-schedule.png" alt-text="Screenshot of the Update function timer schedule page in the Azure portal." border="true":::
You now have a function that runs once every hour, on the hour.
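The schedule value `0 0 */1 * * *` is an NCRONTAB expression, which uses six fields instead of classic cron's five; the extra first field is seconds. As a minimal sketch (not the Functions runtime's parser), splitting the expression into its named fields looks like this:

```python
# Minimal illustrative sketch: NCRONTAB expressions have six
# space-separated fields; the first field is seconds, unlike
# classic 5-field cron.
def parse_ncrontab(expr):
    fields = expr.split()
    if len(fields) != 6:
        raise ValueError("NCRONTAB expressions have 6 fields")
    names = ["second", "minute", "hour", "day", "month", "day-of-week"]
    return dict(zip(names, fields))

# The tutorial's hourly schedule: at second 0, minute 0, of every hour.
schedule = parse_ncrontab("0 0 */1 * * *")
print(schedule["hour"])
```

Reading the fields this way makes it clear why the trigger now fires on the hour: seconds and minutes are pinned to `0`, and `*/1` in the hour field means every hour.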
azure-functions Functions How To Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-how-to-github-actions.md
jobs:
popd - name: 'Run Azure Functions Action' uses: Azure/functions-action@v1
- id: fa
with: app-name: ${{ env.AZURE_FUNCTIONAPP_NAME }} package: '${{ env.AZURE_FUNCTIONAPP_PACKAGE_PATH }}/output'
jobs:
popd - name: 'Run Azure Functions Action' uses: Azure/functions-action@v1
- id: fa
with: app-name: ${{ env.AZURE_FUNCTIONAPP_NAME }} package: '${{ env.AZURE_FUNCTIONAPP_PACKAGE_PATH }}/output'
jobs:
popd - name: 'Run Azure Functions Action' uses: Azure/functions-action@v1
- id: fa
with: app-name: ${{ env.AZURE_FUNCTIONAPP_NAME }} package: './${{ env.POM_XML_DIRECTORY }}/target/azure-functions/${{ env.POM_FUNCTIONAPP_NAME }}'
jobs:
popd - name: 'Run Azure Functions Action' uses: Azure/functions-action@v1
- id: fa
with: app-name: ${{ env.AZURE_FUNCTIONAPP_NAME }} package: './${{ env.POM_XML_DIRECTORY }}/target/azure-functions/${{ env.POM_FUNCTIONAPP_NAME }}'
jobs:
popd - name: 'Run Azure Functions Action' uses: Azure/functions-action@v1
- id: fa
with: app-name: ${{ env.AZURE_FUNCTIONAPP_NAME }} package: ${{ env.AZURE_FUNCTIONAPP_PACKAGE_PATH }}
jobs:
popd - name: 'Run Azure Functions Action' uses: Azure/functions-action@v1
- id: fa
with: app-name: ${{ env.AZURE_FUNCTIONAPP_NAME }} package: ${{ env.AZURE_FUNCTIONAPP_PACKAGE_PATH }}
jobs:
popd - name: 'Run Azure Functions Action' uses: Azure/functions-action@v1
- id: fa
with: app-name: ${{ env.AZURE_FUNCTIONAPP_NAME }} package: ${{ env.AZURE_FUNCTIONAPP_PACKAGE_PATH }}
azure-functions Functions Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-versions.md
Title: Azure Functions runtime versions overview
description: Azure Functions supports multiple versions of the runtime. Learn the differences between them and how to choose the one that's right for you. Previously updated : 01/22/2022 Last updated : 06/24/2022 zone_pivot_groups: programming-languages-set-functions
zone_pivot_groups: programming-languages-set-functions
| 2.x | GA | Supported for [legacy version 2.x apps](#pinning-to-version-20). This version is in maintenance mode, with enhancements provided only in later versions.| | 1.x | GA | Recommended only for C# apps that must use .NET Framework and only supports development in the Azure portal, Azure Stack Hub portal, or locally on Windows computers. This version is in maintenance mode, with enhancements provided only in later versions. |
+> [!IMPORTANT]
+> Beginning on December 3, 2022, function apps running on versions 2.x and 3.x of the Azure Functions runtime will no longer be supported. Before that date, test, verify, and migrate your function apps to version 4.x of the Functions runtime. End of support for these runtime versions is due to the end of support for .NET Core 3.1, which these runtime versions require. This requirement affects all languages supported by the Azure Functions runtime.
+> Functions version 1.x is still supported for C# function apps that require the .NET Framework. Preview support is now available in Functions 4.x to [run C# functions on .NET Framework 4.8](dotnet-isolated-process-guide.md#supported-versions).
+ This article details some of the differences between these versions, how you can create each version, and how to change the version on which your functions run. [!INCLUDE [functions-support-levels](../../includes/functions-support-levels.md)]
azure-monitor Codeless Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/codeless-overview.md
Auto-instrumentation allows you to enable application monitoring with Application Insights without changing your code.
-Application Insights is integrated with various resource providers and works on different environments. In essence, all you have to do is enable and - in some cases - configure the agent, which will collect the telemetry automatically. In no time, you'll see the metrics, requests, and dependencies in your Application Insights resource, which will allow you to spot the source of potential problems before they occur, and analyze the root cause with end-to-end transaction view.
+Application Insights is integrated with various resource providers and works on different environments. In essence, all you have to do is enable and - in some cases - configure the agent, which will collect the telemetry automatically. In no time, you'll see the metrics, requests, and dependencies in your Application Insights resource. This telemetry will allow you to spot the source of potential problems before they occur, and analyze the root cause with end-to-end transaction view.
> [!NOTE] > Auto-instrumentation used to be known as "codeless attach" before October 2021.
As we're adding new integrations, the auto-instrumentation capability matrix bec
|Azure Functions - basic | GA, OnBD* | GA, OnBD* | GA, OnBD* | GA, OnBD* | GA, OnBD* | |Azure Functions - dependencies | Not supported | Not supported | Public Preview | Not supported | Through [extension](monitor-functions.md#distributed-tracing-for-python-function-apps) | |Azure Spring Cloud | Not supported | Not supported | GA | Not supported | Not supported |
-|Azure Kubernetes Service | N/A | Not supported | Through agent | Not supported | Not supported |
+|Azure Kubernetes Service (AKS) | N/A | Not supported | Through agent | Not supported | Not supported |
|Azure VMs Windows | Public Preview | Public Preview | Through agent | Not supported | Not supported | |On-Premises VMs Windows | GA, opt-in | Public Preview | Through agent | Not supported | Not supported | |Standalone agent - any env. | Not supported | Not supported | GA | Not supported | Not supported |
The basic monitoring for Azure Functions is enabled by default to collect log, p
### Java Application monitoring for Java apps running in Azure Spring Cloud is integrated into the portal. You can enable Application Insights directly from the Azure portal, for both existing and newly created Azure Spring Cloud resources.
-## Azure Kubernetes Service
+## Azure Kubernetes Service (AKS)
-Codeless instrumentation of Azure Kubernetes Service is currently available for Java applications through the [standalone agent](./java-in-process-agent.md).
+Codeless instrumentation of Azure Kubernetes Service (AKS) is currently available for Java applications through the [standalone agent](./java-in-process-agent.md).
## Azure Windows VMs and virtual machine scale set
-Auto-instrumentation for Azure VMs and virtual machine scale set is available for [.NET](./azure-vm-vmss-apps.md) and [Java](./java-in-process-agent.md) - this experience is not integrated into the portal. The monitoring is enabled through a few steps with a stand-alone solution and does not require any code changes.
+Auto-instrumentation for Azure VMs and virtual machine scale set is available for [.NET](./azure-vm-vmss-apps.md) and [Java](./java-in-process-agent.md) - this experience isn't integrated into the portal. The monitoring is enabled through a few steps with a stand-alone solution and doesn't require any code changes.
## On-premises servers You can easily enable monitoring for your [on-premises Windows servers for .NET applications](./status-monitor-v2-overview.md) and for [Java apps](./java-in-process-agent.md).
azure-monitor Data Model Context https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-context.md
# Telemetry context: Application Insights data model
-Every telemetry item may have a strongly typed context fields. Every field enables a specific monitoring scenario. Use the custom properties collection to store custom or application-specific contextual information.
+Every telemetry item may have strongly typed context fields. Every field enables a specific monitoring scenario. Use the custom properties collection to store custom or application-specific contextual information.
## Application version
Max length: 1024
## Client IP address
-The IP address of the client device. IPv4 and IPv6 are supported. When telemetry is sent from a service, the location context is about the user that initiated the operation in the service. Application Insights extract the geo-location information from the client IP and then truncate it. So client IP by itself cannot be used as end-user identifiable information.
+The IP address of the client device. IPv4 and IPv6 are supported. When telemetry is sent from a service, the location context is about the user that initiated the operation in the service. Application Insights extracts the geo-location information from the client IP and then truncates it. So the client IP by itself can't be used as end-user identifiable information.
Max length: 46
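The exact truncation that Application Insights performs is internal to the service; as an illustrative sketch only, one common anonymization approach is to zero the final IPv4 octet after the geolocation lookup:

```python
# Illustrative sketch only (not the Application Insights pipeline):
# zero the final IPv4 octet so the stored address no longer
# identifies an individual device.
def truncate_ipv4(ip):
    octets = ip.split(".")
    if len(octets) != 4:
        raise ValueError("not an IPv4 address")
    octets[-1] = "0"
    return ".".join(octets)

print(truncate_ipv4("203.0.113.42"))  # -> 203.0.113.0
```

After a transformation like this, the address still supports coarse geolocation but can't be tied back to a single client.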
Max length: 64
## Operation ID
-A unique identifier of the root operation. This identifier allows to group telemetry across multiple components. See [telemetry correlation](./correlation.md) for details. The operation id is created by either a request or a page view. All other telemetry sets this field to the value for the containing request or page view.
+A unique identifier of the root operation. This identifier allows grouping telemetry across multiple components. See [telemetry correlation](./correlation.md) for details. The operation ID is created by either a request or a page view. All other telemetry sets this field to the value for the containing request or page view.
Max length: 128
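The correlation idea can be sketched as follows: a root operation ID is minted once and stamped on every child telemetry item so they can be grouped later. This is an illustrative sketch (field names are assumptions, not an SDK API):

```python
# Hedged sketch of operation-ID correlation: the root operation
# creates the ID; all child telemetry inherits it.
import uuid

def new_operation():
    # A request or page view would mint this ID.
    return {"operation_id": str(uuid.uuid4())}

def child_telemetry(parent, name):
    # Child items set the field to the containing operation's value.
    return {"name": name, "operation_id": parent["operation_id"]}

root = new_operation()
dep = child_telemetry(root, "SQL query")
trace = child_telemetry(root, "log line")
print(dep["operation_id"] == trace["operation_id"] == root["operation_id"])  # True
```

Because every item carries the same operation ID, a query on that field returns the whole distributed transaction.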
Name of synthetic source. Some telemetry from the application may represent synt
Max length: 1024
-## Session id
+## Session ID
Session ID - the instance of the user's interaction with the app. Information in the session context fields is always about the end user. When telemetry is sent from a service, the session context is about the user that initiated the operation in the service. Max length: 64
-## Anonymous user id
+## Anonymous user ID
-Anonymous user ID. Represents the end user of the application. When telemetry is sent from a service, the user context is about the user that initiated the operation in the service.
+Anonymous user ID (User.Id). Represents the end user of the application. When telemetry is sent from a service, the user context is about the user that initiated the operation in the service.
[Sampling](./sampling.md) is one of the techniques used to minimize the amount of collected telemetry. The sampling algorithm attempts to either sample in or sample out all correlated telemetry. The anonymous user ID is used for sampling score generation, so it should be a sufficiently random value. > [!NOTE]
-> The count of anonymous user IDs is not the same as the number of unique application users. The count of anonymous user IDs is typically higher because each time the user opens your app on a different device or browser, or cleans up browser cookies, a new unique anonymous user id is allocated. This may result in counting the same physical users multiple times.
+> The count of anonymous user IDs is not the same as the number of unique application users. The count of anonymous user IDs is typically higher because each time the user opens your app on a different device or browser, or cleans up browser cookies, a new unique anonymous user ID is allocated. This calculation may result in counting the same physical users multiple times.
User IDs can be cross referenced with session IDs to provide unique telemetry dimensions and establish user activity over a session duration.
Max length: 128
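To make the sampling point concrete, here's an illustrative sketch (not the SDK's exact hash) of deriving a stable sampling score from the anonymous user ID, so that all telemetry for one user is consistently kept or dropped:

```python
# Illustrative sketch only: hash the anonymous user ID into a
# stable score in [0, 100) and compare it to the sampling rate.
import hashlib

def sampling_score(user_id):
    digest = hashlib.sha256(user_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % 10000 / 100.0

def is_sampled_in(user_id, sampling_percentage):
    # The same user ID always yields the same decision, which keeps
    # correlated telemetry together.
    return sampling_score(user_id) < sampling_percentage

print(is_sampled_in("user-123", 50.0) == is_sampled_in("user-123", 50.0))  # True
```

This also shows why the ID must be random enough: if IDs cluster, the scores cluster too, and the effective sampling rate drifts from the configured percentage.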
## Authenticated user ID
-Authenticated user ID. The opposite of anonymous user ID, this field represents the user with a friendly name. This is only collected by default with the ASP.NET Framework SDK's [`AuthenticatedUserIdTelemetryInitializer`](https://github.com/microsoft/ApplicationInsights-dotnet/blob/develop/WEB/Src/Web/Web/AuthenticatedUserIdTelemetryInitializer.cs).
+Authenticated user ID. The opposite of anonymous user ID, this field represents the user with a friendly name. This ID is only collected by default with the ASP.NET Framework SDK's [`AuthenticatedUserIdTelemetryInitializer`](https://github.com/microsoft/ApplicationInsights-dotnet/blob/develop/WEB/Src/Web/Web/AuthenticatedUserIdTelemetryInitializer.cs).
-When users authenticate in your app, you can use the Application Insights SDK to initialize the Authenticated User ID with a value that identifies the user in a persistent manner across browser and devices, all telemetry items are then attributed to that unique ID. This enables querying for all telemetry collected for a specific user (subject to [sampling configurations](./sampling.md) and [telemetry filtering](./api-filtering-sampling.md)).
+Use the Application Insights SDK to initialize the Authenticated User ID with a value identifying the user persistently across browsers and devices. In this way, all telemetry items are attributed to that unique ID. This ID enables querying for all telemetry collected for a specific user (subject to [sampling configurations](./sampling.md) and [telemetry filtering](./api-filtering-sampling.md)).
User IDs can be cross referenced with session IDs to provide unique telemetry dimensions and establish user activity over a session duration.
Max length: 1024
## Account ID
-In multi-tenant applications this is the tenant account ID or name, which the user is acting with. It is used for additional user segmentation when user ID and authenticated user ID are not sufficient. For example, a subscription ID for Azure portal or the blog name for a blogging platform.
+In multi-tenant applications, the account ID is the tenant account ID or name that the user is acting with. It's used for further user segmentation when the user ID and authenticated user ID aren't sufficient. For example, a subscription ID for the Azure portal or the blog name for a blogging platform.
Max length: 1024 ## Cloud role
-Name of the role the application is a part of. Maps directly to the role name in azure. Can also be used to distinguish micro services, which are part of a single application.
+Name of the role the application is a part of. Maps directly to the role name in Azure. Can also be used to distinguish microservices, which are part of a single application.
Max length: 256
azure-monitor Ip Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ip-collection.md
The telemetry types are:
* Browser telemetry: Application Insights collects the sender's IP address. The ingestion endpoint calculates the IP address. * Server telemetry: The Application Insights telemetry module temporarily collects the client IP address. The IP address isn't collected locally when the `X-Forwarded-For` header is set. When the incoming list of IP address has more than one item, the last IP address is used to populate geolocation fields.
-This behavior is by design to help avoid unnecessary collection of personal data. Whenever possible, we recommend avoiding the collection of personal data.
+This behavior is by design to help avoid unnecessary collection of personal data and IP address location information. Whenever possible, we recommend avoiding the collection of personal data.
> [!NOTE]
-> Although the default is to not collect IP addresses, you can override this behavior. We recommend verifying that the collection doesn't break any compliance requirements or local regulations.
+> Although the default is to not collect IP addresses, you can override this behavior. We recommend verifying that the collection doesn't break any compliance requirements or local regulations.
> > To learn more about handling personal data in Application Insights, consult the [guidance for personal data](../logs/personal-data-mgmt.md).
+When IP addresses aren't collected, city and other geolocation attributes that the pipeline populates from the IP address also aren't collected. You can also mask IP collection at the source, either by removing the client IP initializer (see [Configuration with Application Insights configuration](configuration-with-applicationinsights-config.md)) or by providing your own custom initializer. For more information, see the [API filtering example](api-filtering-sampling.md).
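As a hedged sketch of the "mask at the source" idea, a custom telemetry processor can blank the client IP before the item leaves your process. Function and field names below are illustrative, not a specific SDK API:

```python
# Illustrative sketch only: a custom processor that masks the
# client IP on each telemetry item before it's sent.
def strip_client_ip(telemetry_item):
    if "client_ip" in telemetry_item:
        # 0.0.0.0 is a conventional "no address" placeholder.
        telemetry_item["client_ip"] = "0.0.0.0"
    return telemetry_item

item = {"name": "request", "client_ip": "198.51.100.7"}
print(strip_client_ip(item)["client_ip"])  # -> 0.0.0.0
```

Masking at the source means the real address never reaches the ingestion endpoint, at the cost of losing geolocation fields derived from it.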
+ ## Storage of IP address data
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-in-process-agent.md
This section shows you how to download the auto-instrumentation jar file.
#### Download the jar file
-Download the [applicationinsights-agent-3.2.11.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.2.11/applicationinsights-agent-3.2.11.jar) file.
+Download the [applicationinsights-agent-3.3.0.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.3.0/applicationinsights-agent-3.3.0.jar) file.
> [!WARNING] >
-> If you're upgrading from 3.2.x to 3.3.0-BETA:
+> If you're upgrading from 3.2.x to 3.3.0:
>
-> - Starting from 3.3.0-BETA, `LoggingLevel` is not captured by default as part of Traces' custom dimension since that data is already captured in the `SeverityLevel` field. For details on how to re-enable this if needed, please see the [config options](./java-standalone-config.md#logginglevel)
+> - Starting from 3.3.0, `LoggingLevel` is not captured by default as part of Traces' custom dimension since that data is already captured in the `SeverityLevel` field. For details on how to re-enable this if needed, please see the [config options](./java-standalone-config.md#logginglevel)
> > If you're upgrading from 3.1.x: >
Download the [applicationinsights-agent-3.2.11.jar](https://github.com/microsoft
#### Point the JVM to the jar file
-Add `-javaagent:path/to/applicationinsights-agent-3.2.11.jar` to your application's JVM args.
+Add `-javaagent:path/to/applicationinsights-agent-3.3.0.jar` to your application's JVM args.
> [!TIP] > For help with configuring your application's JVM args, see [Tips for updating your JVM args](./java-standalone-arguments.md).
Add `-javaagent:path/to/applicationinsights-agent-3.2.11.jar` to your applicatio
APPLICATIONINSIGHTS_CONNECTION_STRING = <Copy connection string from Application Insights Resource Overview> ```
- - Or you can create a configuration file named `applicationinsights.json`. Place it in the same directory as `applicationinsights-agent-3.2.11.jar` with the following content:
+ - Or you can create a configuration file named `applicationinsights.json`. Place it in the same directory as `applicationinsights-agent-3.3.0.jar` with the following content:
```json {
azure-monitor Java Standalone Arguments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-arguments.md
Configure [App Services](../../app-service/configure-language-java.md#set-java-r
## Spring Boot
-Add the JVM arg `-javaagent:path/to/applicationinsights-agent-3.2.11.jar` somewhere before `-jar`, for example:
+Add the JVM arg `-javaagent:path/to/applicationinsights-agent-3.3.0.jar` somewhere before `-jar`, for example:
```
-java -javaagent:path/to/applicationinsights-agent-3.2.11.jar -jar <myapp.jar>
+java -javaagent:path/to/applicationinsights-agent-3.3.0.jar -jar <myapp.jar>
``` ## Spring Boot via Docker entry point
-If you're using the *exec* form, add the parameter `"-javaagent:path/to/applicationinsights-agent-3.2.11.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
+If you're using the *exec* form, add the parameter `"-javaagent:path/to/applicationinsights-agent-3.3.0.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
```
-ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.2.11.jar", "-jar", "<myapp.jar>"]
+ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.3.0.jar", "-jar", "<myapp.jar>"]
```
-If you're using the *shell* form, add the JVM arg `-javaagent:path/to/applicationinsights-agent-3.2.11.jar` somewhere before `-jar`, for example:
+If you're using the *shell* form, add the JVM arg `-javaagent:path/to/applicationinsights-agent-3.3.0.jar` somewhere before `-jar`, for example:
```
-ENTRYPOINT java -javaagent:path/to/applicationinsights-agent-3.2.11.jar -jar <myapp.jar>
+ENTRYPOINT java -javaagent:path/to/applicationinsights-agent-3.3.0.jar -jar <myapp.jar>
``` ## Tomcat 8 (Linux)
ENTRYPOINT java -javaagent:path/to/applicationinsights-agent-3.2.11.jar -jar <my
If you installed Tomcat via `apt-get` or `yum`, then you should have a file `/etc/tomcat8/tomcat8.conf`. Add this line to the end of that file: ```
-JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.2.11.jar"
+JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.3.0.jar"
``` ### Tomcat installed via download and unzip
JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.2.11.jar"
If you installed Tomcat via download and unzip from [https://tomcat.apache.org](https://tomcat.apache.org), then you should have a file `<tomcat>/bin/catalina.sh`. Create a new file in the same directory named `<tomcat>/bin/setenv.sh` with the following content: ```
-CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.2.11.jar"
+CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.3.0.jar"
```
-If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and add `-javaagent:path/to/applicationinsights-agent-3.2.11.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and add `-javaagent:path/to/applicationinsights-agent-3.3.0.jar` to `CATALINA_OPTS`.
## Tomcat 8 (Windows)
If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and a
Locate the file `<tomcat>/bin/catalina.bat`. Create a new file in the same directory named `<tomcat>/bin/setenv.bat` with the following content: ```
-set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.2.11.jar
+set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.3.0.jar
``` Quotes aren't necessary, but if you want to include them, the proper placement is: ```
-set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.2.11.jar"
+set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.3.0.jar"
```
-If the file `<tomcat>/bin/setenv.bat` already exists, just modify that file and add `-javaagent:path/to/applicationinsights-agent-3.2.11.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.bat` already exists, just modify that file and add `-javaagent:path/to/applicationinsights-agent-3.3.0.jar` to `CATALINA_OPTS`.
### Running Tomcat as a Windows service
-Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.2.11.jar` to the `Java Options` under the `Java` tab.
+Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.3.0.jar` to the `Java Options` under the `Java` tab.
## JBoss EAP 7 ### Standalone server
-Add `-javaagent:path/to/applicationinsights-agent-3.2.11.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
+Add `-javaagent:path/to/applicationinsights-agent-3.3.0.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
```java ...
- JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.2.11.jar -Xms1303m -Xmx1303m ..."
+ JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.3.0.jar -Xms1303m -Xmx1303m ..."
... ``` ### Domain server
-Add `-javaagent:path/to/applicationinsights-agent-3.2.11.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.3.0.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
```xml ...
Add `-javaagent:path/to/applicationinsights-agent-3.2.11.jar` to the existing `j
<jvm-options> <option value="-server"/> <!--Add Java agent jar file here-->
- <option value="-javaagent:path/to/applicationinsights-agent-3.2.11.jar"/>
+ <option value="-javaagent:path/to/applicationinsights-agent-3.3.0.jar"/>
<option value="-XX:MetaspaceSize=96m"/> <option value="-XX:MaxMetaspaceSize=256m"/> </jvm-options>
Add these lines to `start.ini`
``` --exec
--javaagent:path/to/applicationinsights-agent-3.2.11.jar
+-javaagent:path/to/applicationinsights-agent-3.3.0.jar
``` ## Payara 5
-Add `-javaagent:path/to/applicationinsights-agent-3.2.11.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.3.0.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
```xml ... <java-config ...> <!--Edit the JVM options here--> <jvm-options>
- -javaagent:path/to/applicationinsights-agent-3.2.11.jar>
+ -javaagent:path/to/applicationinsights-agent-3.3.0.jar>
</jvm-options> ... </java-config>
Java and Process Management > Process definition > Java Virtual Machine
``` In "Generic JVM arguments" add the following: ```
--javaagent:path/to/applicationinsights-agent-3.2.11.jar
+-javaagent:path/to/applicationinsights-agent-3.3.0.jar
``` After that, save and restart the application server.
After that, save and restart the application server.
Create a new file `jvm.options` in the server directory (for example `<openliberty>/usr/servers/defaultServer`), and add this line: ```
--javaagent:path/to/applicationinsights-agent-3.2.11.jar
+-javaagent:path/to/applicationinsights-agent-3.3.0.jar
``` ## Others
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
You will find more details and additional configuration options below.
## Configuration file path
-By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.2.11.jar`.
+By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.3.0.jar`.
You can specify your own configuration file path using either * `APPLICATIONINSIGHTS_CONFIGURATION_FILE` environment variable, or * `applicationinsights.configuration.file` Java system property
-If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.2.11.jar` is located.
+If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.3.0.jar` is located.
Alternatively, instead of using a configuration file, you can specify the entire _content_ of the json configuration via the environment variable `APPLICATIONINSIGHTS_CONFIGURATION_CONTENT`.
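The relative-path resolution described above can be sketched as follows. This is a simplified illustration of the documented behavior, not the agent's actual code.

```python
from pathlib import PurePosixPath

def resolve_config_path(agent_jar: str, config_path: str) -> PurePosixPath:
    """Resolve a config file path as described above: relative paths are
    resolved against the directory containing the agent jar."""
    p = PurePosixPath(config_path)
    return p if p.is_absolute() else PurePosixPath(agent_jar).parent / p

print(resolve_config_path("/opt/agent/applicationinsights-agent-3.3.0.jar",
                          "applicationinsights.json"))
# /opt/agent/applicationinsights.json
```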
You can also set the connection string using the environment variable `APPLICATI
You can also set the connection string by specifying a file to load the connection string from.
-If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.2.11.jar` is located.
+If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.3.0.jar` is located.
```json {
These are the valid `level` values that you can specify in the `applicationinsig
### LoggingLevel
-Starting from version 3.3.0-BETA, `LoggingLevel` is not captured by default as part of Traces' custom dimension since that data is aleady captured in the `SeverityLevel` field.
+Starting from version 3.3.0, `LoggingLevel` is not captured by default as part of Traces' custom dimension since that data is already captured in the `SeverityLevel` field.
If needed, you can re-enable the previous behavior:
To disable auto-collection of Micrometer metrics (including Spring Boot Actuator
## HTTP headers
-Starting from 3.2.11, you can capture request and response headers on your server (request) telemetry:
+Starting from 3.3.0, you can capture request and response headers on your server (request) telemetry:
```json {
Again, the header names are case-insensitive, and the examples above will be cap
By default, http server requests that result in 4xx response codes are captured as errors.
-Starting from version 3.2.11, you can change this behavior to capture them as success if you prefer:
+Starting from version 3.3.0, you can change this behavior to capture them as success if you prefer:
```json {
Starting from version 3.2.0, the following preview instrumentations can be enabl
``` > [!NOTE] > Akka instrumentation is available starting from version 3.2.2
-> Vertx HTTP Library instrumentation is available starting from version 3.2.11
+> Vertx HTTP Library instrumentation is available starting from version 3.3.0
## Metric interval
and the console, corresponding to this configuration:
`level` can be one of `OFF`, `ERROR`, `WARN`, `INFO`, `DEBUG`, or `TRACE`. `path` can be an absolute or relative path. Relative paths are resolved against the directory where
-`applicationinsights-agent-3.2.11.jar` is located.
+`applicationinsights-agent-3.3.0.jar` is located.
`maxSizeMb` is the max size of the log file before it rolls over.
azure-monitor Java Standalone Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-troubleshoot.md
In this article, we cover some of the common issues that you might face while in
## Check the self-diagnostic log file By default, Application Insights Java 3.x produces a log file named `applicationinsights.log` in the same directory
-that holds the `applicationinsights-agent-3.2.11.jar` file.
+that holds the `applicationinsights-agent-3.3.0.jar` file.
This log file is the first place to check for hints to any issues you might be experiencing. If no log file is generated, check that your Java application has write permission to the directory that holds the
-`applicationinsights-agent-3.2.11.jar` file.
+`applicationinsights-agent-3.3.0.jar` file.
If still no log file is generated, check the stdout log from your Java application. Application Insights Java 3.x should log any errors to stdout that would prevent it from logging to its normal location.
azure-monitor Opencensus Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python.md
Each exporter accepts the same arguments for configuration, passed through the c
- `instrumentation_key`: The instrumentation key used to connect to your Azure Monitor resource. - `logging_sampling_rate`: Used for `AzureLogHandler`. Provides a sampling rate [0,1.0] for exporting logs. Defaults to 1.0. - `max_batch_size`: Specifies the maximum size of telemetry that's exported at once.-- `proxies`: Specifies a sequence of proxies to use for sending data to Azure Monitor. For more information, see [proxies](https://requests.readthedocs.io/en/master/user/advanced/#proxies).
+- `proxies`: Specifies a sequence of proxies to use for sending data to Azure Monitor. For more information, see [proxies](https://requests.readthedocs.io/en/latest/user/advanced/#proxies).
- `storage_path`: A path to where the local storage folder exists (unsent telemetry). As of `opencensus-ext-azure` v1.0.3, the default path is the OS temp directory + `opencensus-python` + `your-ikey`. Prior to v1.0.3, the default path is $USER + `.opencensus` + `.azure` + `python-file-name`. ## Integrate with Azure Functions
azure-monitor Metrics Charts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-charts.md
Title: Advanced features of the Azure metrics explorer
-description: Learn about advanced uses of the Azure metrics explorer.
+ Title: Advanced features of Metrics Explorer
+description: Metrics are a series of measured values and counts that Azure collects. Learn to use Metrics Explorer to investigate the health and usage of resources.
Previously updated : 02/21/2022 Last updated : 06/09/2022 +
-# Advanced features of the Azure metrics explorer
+# Advanced features of Metrics Explorer in Azure Monitor
> [!NOTE]
-> This article assumes you're familiar with basic features of the Azure metrics explorer feature of Azure Monitor. If you're a new user and want to learn how to create your first metric chart, see [Getting started with the metrics explorer](./metrics-getting-started.md).
+> This article assumes you're familiar with basic features of the Metrics Explorer feature of Azure Monitor. If you're a new user and want to learn how to create your first metric chart, see [Getting started with the Metrics Explorer](./metrics-getting-started.md).
In Azure Monitor, [metrics](data-platform-metrics.md) are a series of measured values and counts that are collected and stored over time. Metrics can be standard (also called "platform") or custom. Standard metrics are provided by the Azure platform. They reflect the health and usage statistics of your Azure resources. ## Resource scope picker+ The resource scope picker allows you to view metrics across single resources and multiple resources. The following sections explain how to use the resource scope picker. ### Select a single resource
-Select **Metrics** from the **Azure Monitor** menu or from the **Monitoring** section of a resource's menu. Then choose **Select a scope** to open the scope picker.
-Use the scope picker to select the resources whose metrics you want to see. The scope should be populated if you opened the Azure metrics explorer from a resource's menu.
+In the Azure portal, select **Metrics** from the **Monitor** menu or from the **Monitoring** section of a resource's menu. Then choose **Select a scope** to open the scope picker.
+
+Use the scope picker to select the resources whose metrics you want to see. If you opened the Azure Metrics Explorer from a resource's menu, the scope should be populated.
![Screenshot showing how to open the resource scope picker.](./media/metrics-charts/scope-picker.png)
After selecting a resource, you see all subscriptions and resource groups that c
When you're satisfied with your selection, select **Apply**.
-### View metrics across multiple resources
+### Select multiple resources
+ Some resource types can query for metrics over multiple resources. The resources must be within the same subscription and location. Find these resource types at the top of the **Resource types** menu. For more information, see [Select multiple resources](./metrics-dynamic-scope.md#select-multiple-resources).
For types that are compatible with multiple resources, you can query for metrics
## Multiple metric lines and charts
-In the Azure metrics explorer, you can create charts that plot multiple metric lines or show multiple metric charts at the same time. This functionality allows you to:
+In the Azure Metrics Explorer, you can create charts that plot multiple metric lines or show multiple metric charts at the same time. This functionality allows you to:
- Correlate related metrics on the same graph to see how one value relates to another. - Display metrics that use different units of measure in close proximity. - Visually aggregate and compare metrics from multiple resources.
-For example, imagine you have five storage accounts, and you want to know how much space they consume together. You can create a (stacked) area chart that shows the individual values and the sum of all the values at particular points in time.
+For example, imagine you have five storage accounts, and you want to know how much space they consume together. You can create a stacked area chart that shows the individual values and the sum of all the values at points in time.
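As a toy illustration of what the stacked chart sums (made-up numbers, not real metrics):

```python
# Five storage accounts' consumed space (GB) at three points in time (dummy data).
series = {
    "account1": [120, 125, 130],
    "account2": [80, 82, 85],
    "account3": [40, 44, 43],
    "account4": [200, 198, 205],
    "account5": [60, 61, 62],
}

# The top of a stacked area chart is the element-wise sum of all series.
total = [sum(values) for values in zip(*series.values())]
print(total)  # [500, 510, 525]
```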
### Multiple metrics on the same chart
To view multiple metrics on the same chart, first [create a new chart](./metrics
> [!NOTE] > Typically, your charts shouldn't mix metrics that use different units of measure. For example, avoid mixing one metric that uses milliseconds with another that uses kilobytes. Also avoid mixing metrics whose scales differ significantly. >
-> In these cases, consider using multiple charts instead. In the metrics explorer, select **New chart** to create a new chart.
+> In these cases, consider using multiple charts instead. In Metrics Explorer, select **New chart** to create a new chart.
![Screenshot showing multiple metrics.](./media/metrics-charts/multiple-metrics-chart.png)
To reorder or delete multiple charts, select the ellipsis (**...**) button to op
## Time range controls In addition to changing the time range using the [time picker panel](metrics-getting-started.md#select-a-time-range), you can also pan and zoom using the controls in the chart area.+ ### Pan
-To pan, click on the left and right arrows at the edge of the chart. This will move the selected time range back and forward by one half the chart's time span. For example, if you're viewing the past 24 hours, clicking on the left arrow will cause the time range to shift to span a day and a half to 12 hours ago.
+To pan, select the left and right arrows at the edge of the chart. The arrow control moves the selected time range back and forward by one half the chart's time span. For example, if you're viewing the past 24 hours, clicking on the left arrow causes the time range to shift to span a day and a half to 12 hours ago.
Most metrics support 93 days of retention but only let you view 30 days at a time. Using the pan controls, you look at the past 30 days and then easily walk back 15 days at a time to view the rest of the retention period.
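The pan arithmetic described above can be sketched as follows (illustrative only):

```python
from datetime import datetime, timedelta

def pan(start: datetime, end: datetime, direction: int):
    """Shift a time range by half its span; direction -1 pans left, +1 right."""
    half = (end - start) / 2
    return start + direction * half, end + direction * half

# Viewing the past 24 hours, then clicking the left arrow:
end = datetime(2022, 6, 28, 0, 0)
start = end - timedelta(hours=24)
new_start, new_end = pan(start, end, -1)
# The window now spans from 36 hours to 12 hours before the original end.
print(new_start, new_end)
```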
Most metrics support 93 days of retention but only let you view 30 days at a tim
### Zoom
-You can click and drag on the chart to zoom into a section of a chart. Zooming will update the chart's time range to span your selection and will select a smaller time grain if the time grain is set to "Automatic". The new time range will apply to all charts in Metrics.
+You can click and drag on the chart to zoom into a section of a chart. Zooming updates the chart's time range to span your selection. If the time grain is set to Automatic, zooming selects a smaller time grain. The new time range applies to all charts in Metrics.
![Animated gif showing the metrics zoom feature.](./media/metrics-charts/metrics-zoom-control.gif) ## Aggregation
-When you add a metric to a chart, the metrics explorer automatically applies a default aggregation. The default makes sense in basic scenarios. But you can use a different aggregation to gain more insights about the metric.
+When you add a metric to a chart, Metrics Explorer applies a default aggregation. The default makes sense in basic scenarios. But you can use a different aggregation to gain more insights about the metric.
-Before you use different aggregations on a chart, you should understand how the metrics explorer handles them. Metrics are a series of measurements (or "metric values") that are captured over a time period. When you plot a chart, the values of the selected metric are separately aggregated over the *time grain*.
+Before you use different aggregations on a chart, you should understand how Metrics Explorer handles them. Metrics are a series of measurements (or "metric values") that are captured over a time period. When you plot a chart, the values of the selected metric are separately aggregated over the *time grain*.
-You select the size of the time grain by using the metrics explorer's [time picker panel](./metrics-getting-started.md#select-a-time-range). If you don't explicitly select the time grain, the currently selected time range is used by default. After the time grain is determined, the metric values that were captured during each time grain are aggregated on the chart, one data point per time grain.
+You select the size of the time grain by using Metrics Explorer's [time picker panel](./metrics-getting-started.md#select-a-time-range). If you don't explicitly select the time grain, the currently selected time range is used by default. After the time grain is determined, the metric values that were captured during each time grain are aggregated on the chart, one data point per time grain.
For example, suppose a chart shows the *Server response time* metric. It uses the *average* aggregation over time span of the *last 24 hours*. In this example: -- If the time granularity is set to 30 minutes, the chart is drawn from 48 aggregated data points. That is, the line chart connects 48 dots in the chart plot area (24 hours x 2 data points per hour). Each data point represents the *average* of all captured response times for server requests that occurred during each of the relevant 30-minute time periods.
+- If the time granularity is set to 30 minutes, the chart is drawn from 48 aggregated data points. The line chart connects 48 dots in the chart plot area (24 hours x 2 data points per hour). Each data point represents the *average* of all captured response times for server requests that occurred during each of the relevant 30-minute time periods.
- If you switch the time granularity to 15 minutes, you get 96 aggregated data points. That is, you get 24 hours x 4 data points per hour.
-The metrics explorer has five basic statistical aggregation types: sum, count, min, max, and average. The *sum* aggregation is sometimes called the *total* aggregation. For many metrics, the metrics explorer hides the aggregations that are irrelevant and can't be used.
+Metrics Explorer has five basic statistical aggregation types: sum, count, min, max, and average. The *sum* aggregation is sometimes called the *total* aggregation. For many metrics, Metrics Explorer hides the aggregations that are irrelevant and can't be used.
For a deeper discussion of how metric aggregation works, see [Azure Monitor metrics aggregation and display explained](metrics-aggregation-explained.md).
-* **Sum**: The sum of all values captured during the aggregation interval.
+- **Sum**: The sum of all values captured during the aggregation interval.
+
+ ![Screenshot of a sum request.](./media/metrics-charts/request-sum.png)
- ![Screenshot of a sum request.](./media/metrics-charts/request-sum.png)
+- **Count**: The number of measurements captured during the aggregation interval.
-* **Count**: The number of measurements captured during the aggregation interval.
-
- When the metric is always captured with the value of 1, the count aggregation is equal to the sum aggregation. This scenario is common when the metric tracks the count of distinct events and each measurement represents one event. The code emits a metric record every time a new request arrives.
+ When the metric is always captured with the value of 1, the count aggregation is equal to the sum aggregation. This scenario is common when the metric tracks the count of distinct events and each measurement represents one event. The code emits a metric record every time a new request arrives.
- ![Screenshot of a count request.](./media/metrics-charts/request-count.png)
+ ![Screenshot of a count request.](./media/metrics-charts/request-count.png)
-* **Average**: The average of the metric values captured during the aggregation interval.
+- **Average**: The average of the metric values captured during the aggregation interval.
- ![Screenshot of an average request.](./media/metrics-charts/request-avg.png)
+ ![Screenshot of an average request.](./media/metrics-charts/request-avg.png)
-* **Min**: The smallest value captured during the aggregation interval.
+- **Min**: The smallest value captured during the aggregation interval.
- ![Screenshot of a minimum request.](./media/metrics-charts/request-min.png)
+ ![Screenshot of a minimum request.](./media/metrics-charts/request-min.png)
-* **Max**: The largest value captured during the aggregation interval.
+- **Max**: The largest value captured during the aggregation interval.
- ![Screenshot of a maximum request.](./media/metrics-charts/request-max.png)
+ ![Screenshot of a maximum request.](./media/metrics-charts/request-max.png)
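The time-grain bucketing and the five basic aggregations above can be sketched as follows, using dummy per-minute samples (the real metrics pipeline is more involved):

```python
from statistics import mean

def aggregate(samples, grain_minutes):
    """Bucket one-sample-per-minute values into time grains and compute the
    five basic aggregations for each grain."""
    buckets = [samples[i:i + grain_minutes]
               for i in range(0, len(samples), grain_minutes)]
    return [{"sum": sum(b), "count": len(b), "min": min(b),
             "max": max(b), "avg": mean(b)} for b in buckets]

# 24 hours of per-minute server response times (made-up values)
samples = [100 + (i % 7) for i in range(24 * 60)]
print(len(aggregate(samples, 30)))  # 48 data points (24 hours x 2 per hour)
print(len(aggregate(samples, 15)))  # 96 data points (24 hours x 4 per hour)
```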
## Filters
-You can apply filters to charts whose metrics have dimensions. For example, imagine a "Transaction count" metric that has a "Response type" dimension. This dimension indicates whether the response from transactions succeeded or failed. If you filter on this dimension, you'll see a chart line for only successful (or only failed) transactions.
+You can apply filters to charts whose metrics have dimensions. For example, imagine a *Transaction count* metric that has a *Response type* dimension. This dimension indicates whether the response from transactions succeeded or failed. If you filter on this dimension, you'll see a chart line for only successful or only failed transactions.
### Add a filter 1. Above the chart, select **Add filter**.
-2. Select a dimension (property) to filter.
+1. Select a dimension (property) to filter.
![Screenshot that shows the dimensions (properties) you can filter.](./media/metrics-charts/028.png)
-3. Select the operator you want to apply against the dimension (property). The default operator is = (equals)
+1. Select the operator you want to apply against the dimension (property). The default operator is = (equals).
![Screenshot that shows the operator you can use with the filter.](./media/metrics-charts/filter-operator.png)
-4. Select which dimension values you want to apply to the filter when plotting the chart (this example shows filtering out the successful storage transactions):
+1. Select which dimension values you want to apply to the filter when plotting the chart. This example shows filtering out the successful storage transactions.
![Screenshot that shows the successful filtered storage transactions.](./media/metrics-charts/029.png)
-5. After selecting the filter values, click away from the Filter Selector to close it. Now the chart shows how many storage transactions have failed:
+1. After selecting the filter values, click away from the filter selector to close it. Now the chart shows how many storage transactions have failed:
![Screenshot that shows how many storage transactions have failed.](./media/metrics-charts/030.png)
-6. You can repeat steps 1-5 to apply multiple filters to the same charts.
-
+1. Repeat these steps to apply multiple filters to the same charts.
## Metric splitting
You can split a metric by dimension to visualize how different segments of the m
### Apply splitting 1. Above the chart, select **Apply splitting**.
-
+ > [!NOTE] > Charts that have multiple metrics can't use the splitting functionality. Also, although a chart can have multiple filters, it can have only one splitting dimension.
-2. Choose a dimension on which to segment your chart:
+1. Choose a dimension on which to segment your chart:
![Screenshot that shows the selected dimension on which to segment the chart.](./media/metrics-charts/031.png) The chart now shows multiple lines, one for each dimension segment: ![Screenshot that shows multiple lines, one for each segment of dimension.](./media/metrics-charts/segment-dimension.png)
-
-3. Choose a limit on the number of values to be displayed after splitting by selected dimension. The default limit is 10 as shown in the above chart. The range of limit is 1 - 50.
-
+
+1. Choose a limit on the number of values to be displayed after splitting by the selected dimension. The default limit is 10, as shown in the above chart. The limit can range from 1 to 50.
+ ![Screenshot that shows split limit, which restricts the number of values after splitting.](./media/metrics-charts/segment-dimension-limit.png)
-
-4. Choose the sort order on segments: Ascending or Descending. The default selection is descending.
-
+
+1. Choose the sort order on segments: **Ascending** or **Descending**. The default selection is **Descending**.
+ ![Screenshot that shows sort order on split values.](./media/metrics-charts/segment-dimension-sort.png)
-5. Click away from the **Grouping Selector** to close it.
-
+1. Click away from the grouping selector to close it.
> [!NOTE] > To hide segments that are irrelevant for your scenario and to make your charts easier to read, use both filtering and splitting on the same dimension. ## Locking the range of the y-axis
-Locking the range of the value (y) axis becomes important in charts that show small fluctuations of large values.
+Locking the range of the value (y) axis becomes important in charts that show small fluctuations of large values.
-For example, a drop in the volume of successful requests from 99.99 percent to 99.5 percent might represent a significant reduction in the quality of service. But noticing a small numeric value fluctuation would be difficult or even impossible if you're using the default chart settings. In this case, you could lock the lowest boundary of the chart to 99 percent to make a small drop more apparent.
+For example, a drop in the volume of successful requests from 99.99 percent to 99.5 percent might represent a significant reduction in the quality of service. Noticing a small numeric value fluctuation would be difficult or even impossible if you're using the default chart settings. In this case, you could lock the lowest boundary of the chart to 99 percent to make a small drop more apparent.
-Another example is a fluctuation in the available memory. In this scenario, the value will technically never reach 0. Fixing the range to a higher value might make drops in available memory easier to spot.
+Another example is a fluctuation in the available memory. In this scenario, the value technically never reaches 0. Fixing the range to a higher value might make drops in available memory easier to spot.
To control the y-axis range, open the chart menu (**...**). Then select **Chart settings** to access advanced chart settings. ![Screenshot that highlights the chart settings selection.](./media/metrics-charts/033.png) Modify the values in the **Y-axis range** section, or select **Auto** to revert to the default values.
-
+ ![Screenshot that highlights the Y-axis range section.](./media/metrics-charts/034.png) > [!WARNING]
Modify the values in the **Y-axis range** section, or select **Auto** to revert
After you configure the charts, the chart lines are automatically assigned a color from a default palette. You can change those colors.
-To change the color of a chart line, select the colored bar in the legend that corresponds to the chart. The color picker dialog box opens. Use the color picker to configure the line color.
+To change the color of a chart line, select the colored bar in the legend that corresponds to the chart. The color picker dialog opens. Use the color picker to configure the line color.
![Screenshot that shows how to change color.](./media/metrics-charts/035.png)
Your customized colors are preserved when you pin the chart to a dashboard. The
## Saving to dashboards or workbooks
-After you configure a chart, you might want to add it to a dashboard or workbook. By adding a chart to a dashboard or workbook, you can make it accessible to your team. You can also gain insights by viewing it in the context of other monitoring telemetry.
+After you configure a chart, you might want to add it to a dashboard or workbook. By adding a chart to a dashboard or workbook, you can make it accessible to your team. You can also gain insights by viewing it in the context of other monitoring information.
- To pin a configured chart to a dashboard, in the upper-right corner of the chart, select **Save to dashboard** and then **Pin to dashboard**. - To save a configured chart to a workbook, in the upper-right corner of the chart, select **Save to dashboard** and then **Save to workbook**.
After you configure a chart, you might want to add it to a dashboard or workbook
## Alert rules
-You can use your visualization criteria to create a metric-based alert rule. The new alert rule will include your chart's target resource, metric, splitting, and filter dimensions. You can modify these settings by using the alert rule creation pane.
+You can use your visualization criteria to create a metric-based alert rule. The new alert rule includes your chart's target resource, metric, splitting, and filter dimensions. You can modify these settings by using the alert rule creation pane.
To begin, select **New alert rule**.
The alert rule creation pane opens. In the pane, you see the chart's metric dime
For more information, see [Create, view, and manage metric alerts](../alerts/alerts-metric.md). ## Correlate metrics to logs
-To help customer diagnose the root cause of anomalies in their metrics chart, we created Drill into Logs. Drill into Logs allows customers to correlate spikes in their metrics chart to logs and queries.
-Before we dive into the experience, we want to first introduce the different types of logs and queries provided.
+To help customers diagnose the root cause of anomalies in their metrics chart, we created the *Drill into Logs* feature. Drill into Logs allows customers to correlate spikes in their metrics chart to logs and queries.
+
+This table summarizes the types of logs and queries provided:
| Term | Definition | |---|---|
-| Activity logs | Provides insight into the operations on each Azure resource in the subscription from the outside (the management plane) in addition to updates on Service Health events. Use the Activity Log, to determine the what, who, and when for any write operations (PUT, POST, DELETE) taken on the resources in your subscription. There is a single Activity log for each Azure subscription. |
-| Diagnostic log | Provide insight into operations that were performed within an Azure resource (the data plane), for example getting a secret from a Key Vault or making a request to a database. The content of resource logs varies by the Azure service and resource type. **Note:** Must be provided by service and enabled by customer |
-| Recommended log | Scenario-based queries that customer can leverage to investigate anomalies in their metrics explorer. |
-
-Currently, Drill into Logs are available for select resource providers. The resource providers that have the complete Drill into Logs experience are:
-
-* Application Insights
-* Autoscale
-* App Services
-* Storage
-
-Below is a sample experiences for the Application Insights resource provider.
-
-![Spike in failures in app insights metrics blade](./media/metrics-charts/drill-into-log-ai.png)
-
-To diagnose the spike in failed requests, click on "Drill into Logs".
-
-![Screenshot of drill into logs dropdown](./media/metrics-charts/drill-into-logs-dropdown.png)
-
-By clicking on the failure option, you will be led to a custom failure blade that provides you with the failed operation operations, top exceptions types, and dependencies.
+| Activity logs | Provides insight into the operations on each Azure resource in the subscription from the outside (the management plane) in addition to updates on Service Health events. Use the Activity log to determine the what, who, and when for any write operations (PUT, POST, DELETE) taken on the resources in your subscription. There's a single Activity log for each Azure subscription. |
+| Diagnostic log | Provides insight into operations that were performed within an Azure resource (the data plane), for example getting a secret from a Key Vault or making a request to a database. The content of resource logs varies by the Azure service and resource type. **Note:** Must be provided by service and enabled by customer. |
+| Recommended log | Scenario-based queries that customers can use to investigate anomalies in Metrics Explorer. |
-![Screenshot of app insights failure blade](./media/metrics-charts/ai-failure-blade.png)
+Currently, Drill into Logs is available for select resource providers. The resource providers that have the complete Drill into Logs experience are:
-### Common problems with Drill into Logs
+- Application Insights
+- Autoscale
+- App Services
+- Storage
-* Log and queries are disabled - To view recommended logs and queries, you must route your diagnostic logs to Log Analytics. Read [this document](./diagnostic-settings.md) to learn how to do this.
-* Activity logs are only provided - The Drill into Logs feature is only available for select resource providers. By default, activity logs are provided.
+This screenshot shows a sample for the Application Insights resource provider.
-
-## Troubleshooting
+![Screenshot shows a spike in failures in app insights metrics pane.](./media/metrics-charts/drill-into-log-ai.png)
-If you don't see any data on your chart, review the following troubleshooting information:
+1. To diagnose the spike in failed requests, select **Drill into Logs**.
-* Filters apply to all of the charts on the pane. While you focus on a chart, make sure that you don't set a filter that excludes all the data on another chart.
+ ![Screenshot shows the Drill into Logs dropdown menu.](./media/metrics-charts/drill-into-logs-dropdown.png)
-* To set different filters on different charts, create the charts in different blades. Then save the charts as separate favorites. If you want, you can pin the charts to the dashboard so you can see them together.
+1. Select **Failures** to open a custom failure pane that provides you with the failed operations, top exception types, and dependencies.
-* If you segment a chart by a property that the metric doesn't define, the chart displays no content. Try clearing the segmentation (splitting), or choose a different property.
+ ![Screenshot of app insights failure pane.](./media/metrics-charts/ai-failure-blade.png)
## Next steps
azure-monitor Metrics Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-troubleshoot.md
description: Troubleshoot the issues with creating, customizing, or interpreting
- Previously updated : 04/23/2019+ Last updated : 06/09/2022
Collection of **Guest (classic)** metrics requires configuring the Azure Diagnos
**Solution:** If Azure Diagnostics Extension is enabled but you are still unable to see your metrics, follow the steps outlined in [Azure Diagnostics Extension troubleshooting guide](../agents/diagnostics-extension-troubleshooting.md#metric-data-doesnt-appear-in-the-azure-portal). See also the troubleshooting steps for [Cannot pick Guest (classic) namespace and metrics](#cannot-pick-guest-namespace-and-metrics).
+### Chart is segmented by a property that the metric doesn't define
+
+If you segment a chart by a property that the metric doesn't define, the chart displays no content.
+
+**Solution:** Clear the segmentation (splitting), or choose a different property.
+
+### Filter on another chart excludes all data
+
+Filters apply to all of the charts on the pane. If you set a filter on another chart, it could exclude all data from the current chart.
+
+**Solution:** Check the filters for all the charts on the pane. If you want different filters on different charts, create the charts in different panes. Save the charts as separate favorites. If you want, you can pin the charts to the dashboard so you can see them together.
+ ## "Error retrieving data" message on dashboard This problem may happen when your dashboard was created with a metric that was later deprecated and removed from Azure. To verify that this is the case, open the **Metrics** tab of your resource, and check the available metrics in the metric picker. If the metric is not shown, the metric has been removed from Azure. Usually, when a metric is deprecated, there is a better new metric that provides a similar perspective on the resource health.
By default, Guest (classic) metrics are stored in Azure Storage account, which y
1. Use [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) to validate that metrics are flowing into the storage account. If metrics aren't collected, follow the [Azure Diagnostics Extension troubleshooting guide](../agents/diagnostics-extension-troubleshooting.md#metric-data-doesnt-appear-in-the-azure-portal).
+## Logs and queries are disabled for Drill into Logs
+
+To view recommended logs and queries, you must route your diagnostic logs to Log Analytics.
+
+**Solution:** To route your diagnostic logs to Log Analytics, see [Diagnostic settings in Azure Monitor](./diagnostic-settings.md).
+
+## Only the Activity logs appear in Drill into Logs
+
+The Drill into Logs feature is only available for select resource providers. By default, activity logs are provided.
+
+**Solution:** This behavior is expected for some resource providers.
+ ## Next steps * [Learn about getting started with Metric Explorer](metrics-getting-started.md)
azure-monitor Wire Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/wire-data.md
VMConnection
### More examples queries
-Refer to the [VM insights log search documentation](../vm/vminsights-log-search.md) and the [VM insights alert documentation](../vm/monitor-virtual-machine-alerts.md) for additional example queries.
+Refer to the [VM insights log search documentation](/azure/azure-monitor/vm/vminsights-log-query) and the [VM insights alert documentation](../vm/monitor-virtual-machine-alerts.md) for additional example queries.
## Uninstall Wire Data 2.0 Solution
azure-monitor Log Analytics Workspace Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-workspace-insights-overview.md
Title: Log Analytics Workspace Insights
description: An overview of Log Analytics Workspace Insights - ingestion, usage, health, agents and more -- Previously updated : 05/06/2021+++ Last updated : 06/27/2021
azure-netapp-files Azure Netapp Files Mount Unmount Volumes For Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md
Previously updated : 01/07/2022 Last updated : 06/13/2022 # Mount a volume for Windows or Linux VMs
You can mount an Azure NetApp Files file for Windows or Linux virtual machines (
4. If you want to mount the volume to Windows using NFS:
- a. Mount the volume onto a Unix or Linux VM first.
- b. Run a `chmod 777` or `chmod 775` command against the volume.
- c. Mount the volume via the NFS client on Windows.
+ > [!NOTE]
+ > One alternative to mounting an NFS volume on Windows is to [Create a dual-protocol volume for Azure NetApp Files](create-volumes-dual-protocol.md), allowing the native access of SMB for Windows and NFS for Linux. However, if that is not possible, you can mount the NFS volume on Windows using the steps below.
+
+ * Set the permissions to allow the volume to be mounted on Windows
+ * Follow the steps to [Configure Unix permissions and change ownership mode for NFS and dual-protocol volumes](configure-unix-permissions-change-ownership-mode.md#unix-permissions) and set the permissions to '777' or '775'.
+ * Install NFS client on Windows
+ * Open PowerShell
+ * type: `Install-WindowsFeature -Name NFS-Client`
+ * Mount the volume via the NFS client on Windows
+ * Obtain the 'mount path' of the volume
+ * Open a Command prompt
+ * type: `mount -o anon -o mtype=hard \\$ANFIP\$FILEPATH $DRIVELETTER:\`
+ * `$ANFIP` is the IP address of the Azure NetApp Files volume found in the volume properties blade.
+ * `$FILEPATH` is the export path of the Azure NetApp Files volume.
+ * `$DRIVELETTER` is the drive letter where you would like the volume mounted within Windows.
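The mount step above can be sketched as a small helper that assembles the Windows `mount` command from the volume details. This is an illustrative sketch only; the function name and the sample values in the usage comment are hypothetical, not from the article.

```python
def build_windows_nfs_mount(anf_ip: str, file_path: str, drive_letter: str) -> str:
    """Assemble the Windows NFS mount command for an Azure NetApp Files volume.

    anf_ip, file_path, and drive_letter correspond to $ANFIP, $FILEPATH,
    and $DRIVELETTER in the steps above.
    """
    # -o anon mounts anonymously; -o mtype=hard requests a hard mount
    return f"mount -o anon -o mtype=hard \\\\{anf_ip}\\{file_path} {drive_letter}:\\"

# Example (placeholder values):
# build_windows_nfs_mount("10.0.0.4", "myvol", "Z")
# -> mount -o anon -o mtype=hard \\10.0.0.4\myvol Z:\
```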
5. If you want to mount an NFS Kerberos volume, see [Configure NFSv4.1 Kerberos encryption](configure-kerberos-encryption.md) for additional details.
azure-netapp-files Cross Region Replication Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-introduction.md
na Previously updated : 03/17/2022 Last updated : 06/27/2022
Azure NetApp Files volume replication is supported between various [Azure region
| Germany/France | Germany West Central | France Central | | North America | East US | East US 2 | | North America | East US 2| West US 2 |
+| North America | North Central US | East US 2|
| North America | South Central US | East US | | North America | South Central US | East US 2 | | North America | South Central US | Central US |
azure-resource-manager Bicep Functions Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-deployment.md
description: Describes the functions to use in a Bicep file to retrieve deployme
Previously updated : 09/30/2021 Last updated : 06/27/2022 # Deployment functions for Bicep
Returns information about the Azure environment used for deployment.
Namespace: [az](bicep-functions.md#namespaces-for-functions).
+### Remarks
+
+To see a list of registered environments for your account, use [az cloud list](/cli/azure/cloud#az-cloud-list) or [Get-AzEnvironment](/powershell/module/az.accounts/get-azenvironment).
+ ### Return value This function returns properties for the current Azure environment. The following example shows the properties for global Azure. Sovereign clouds may return slightly different properties.
azure-resource-manager Proxy Cache Resource Endpoint Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/proxy-cache-resource-endpoint-reference.md
description: Custom resource cache reference for Azure Custom Resource Providers
Previously updated : 06/20/2019 Last updated : 05/13/2022
-# Custom Resource Cache Reference
+# Custom resource cache reference
-This article will go through the requirements for endpoints implementing cache custom resources. If you are unfamiliar with Azure Custom Resource Providers, see [the overview on custom resource providers](overview.md).
+This article describes the requirements for endpoints that implement cache custom resources. If you're unfamiliar with Azure Custom Resource Providers, see [the overview on custom resource providers](overview.md).
-## How to define a cache resource endpoint
+## Define a cache resource endpoint
-A proxy resource can be created by specifying the **routingType** to "Proxy, Cache".
+A proxy resource can be created by setting the `routingType` to "Proxy, Cache".
-Sample custom resource provider:
+**Sample custom resource provider**:
```JSON {
Sample custom resource provider:
} ```
-## Building proxy resource endpoint
+## Build a proxy resource endpoint
-An **endpoint** that implements a "Proxy, Cache" resource **endpoint** must handle the request and response for the new API in Azure. In this case, the **resourceType** will generate a new Azure resource API for `PUT`, `GET`, and `DELETE` to perform CRUD on a single resource, as well as `GET` to retrieve all existing resources:
+An endpoint that implements a "Proxy, Cache" resource must handle the request and response for the new API in Azure. In this case, the **resourceType** will generate a new Azure resource API for `PUT`, `GET`, and `DELETE` to perform CRUD on a single resource, as well as `GET` to retrieve all existing resources.
> [!NOTE]
-> The Azure API will generate the request methods `PUT`, `GET`, and `DELETE`, but the cache **endpoint** only needs to handle `PUT` and `DELETE`.
-> We recommended that the **endpoint** also implements `GET`.
+> The Azure API will generate the request methods `PUT`, `GET`, and `DELETE`, but the cache endpoint only needs to handle `PUT` and `DELETE`.
+> We recommend that the endpoint also implement `GET`.
### Create a custom resource
-Azure API Incoming Request:
+**Azure API incoming request**:
``` HTTP PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CustomProviders/resourceProviders/{resourceProviderName}/myCustomResources/{myCustomResourceName}?api-version=2018-09-01-preview
Content-Type: application/json
} ```
-This request will then be forwarded to the **endpoint** in the form:
+This request will then be forwarded to the endpoint in the form:
``` HTTP PUT https://{endpointURL}/?api-version=2018-09-01-preview
X-MS-CustomProviders-RequestPath: /subscriptions/{subscriptionId}/resourceGroups
} ```
-Similarly, the response from the **endpoint** is then forwarded back to the customer. The response from the endpoint should return:
+The response from the endpoint is then forwarded back to the customer. The response should return:
- A valid JSON object document. All arrays and strings should be nested under a top object. - The `Content-Type` header should be set to "application/json; charset=utf-8". - The custom resource provider will overwrite the `name`, `type`, and `id` fields for the request. - The custom resource provider will only return fields under the `properties` object for a cache endpoint.
-**Endpoint** Response:
+**Endpoint response**:
``` HTTP HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
The `name`, `id`, and `type` fields will automatically be generated for the custom resource by the custom resource provider.
-Azure Custom Resource Provider Response:
+**Azure Custom Resource Provider response**:
``` HTTP HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
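The create flow above can be sketched in Python. This is a minimal illustration of the contract, not the service implementation; the function and field names are made up for the example. The endpoint echoes the resource state, and the custom resource provider keeps only `properties` and overwrites `name`, `id`, and `type` from the request path.

```python
def endpoint_put_response(request_body: dict) -> dict:
    """What a cache endpoint might return for a PUT: the resource state.

    Any name/id/type it sets would be overwritten by the custom
    resource provider, so only `properties` matters here.
    """
    return {"properties": request_body.get("properties", {}),
            "name": "ignored-by-azure"}


def custom_rp_response(endpoint_response: dict, request_path: str) -> dict:
    """Sketch of how the custom resource provider shapes the final response:
    keep only the endpoint's `properties`, and derive name/id/type from
    the request path."""
    name = request_path.rstrip("/").split("/")[-1]
    return {
        "name": name,
        "id": request_path,
        "type": "Microsoft.CustomProviders/resourceProviders/myCustomResources",
        "properties": endpoint_response.get("properties", {}),
    }
```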
### Remove a custom resource
-Azure API Incoming Request:
+**Azure API incoming request**:
``` HTTP Delete https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CustomProviders/resourceProviders/{resourceProviderName}/myCustomResources/{myCustomResourceName}?api-version=2018-09-01-preview
Authorization: Bearer eyJ0e...
Content-Type: application/json ```
-This request will then be forwarded to the **endpoint** in the form:
+This request will then be forwarded to the endpoint in the form:
``` HTTP Delete https://{endpointURL}/?api-version=2018-09-01-preview
Content-Type: application/json
X-MS-CustomProviders-RequestPath: /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CustomProviders/resourceProviders/{resourceProviderName}/myCustomResources/{myCustomResourceName} ```
-Similarly ,the response from the **endpoint** is then forwarded back to the customer. The response from the endpoint should return:
+The response from the endpoint is then forwarded back to the customer. The response should return:
- A valid JSON object document. All arrays and strings should be nested under a top object. - The `Content-Type` header should be set to "application/json; charset=utf-8".-- The Azure Custom Resource Provider will only remove the item from its cache if a 200-level response is returned. Even if the resource does not exist, the **endpoint** should return 204.
+- The Azure Custom Resource Provider will only remove the item from its cache if a 200-level response is returned. Even if the resource doesn't exist, the endpoint should return 204.
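The delete rule above can be expressed as a tiny status-code sketch (illustrative only): a 2xx response lets the custom resource provider evict the cached item, and 204 is the expected answer when the resource doesn't exist.

```python
def endpoint_delete_status(resource_exists: bool) -> int:
    """Status a cache endpoint might return for DELETE: a 2xx so the custom
    resource provider removes the item from its cache, 204 when the
    resource doesn't exist."""
    return 200 if resource_exists else 204
```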
-**Endpoint** Response:
+**Endpoint response**:
``` HTTP HTTP/1.1 200 OK Content-Type: application/json; charset=utf-8 ```
-Azure Custom Resource Provider Response:
+**Azure Custom Resource Provider response**:
``` HTTP HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
### Retrieve a custom resource
-Azure API Incoming Request:
+**Azure API incoming request**:
``` HTTP GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CustomProviders/resourceProviders/{resourceProviderName}/myCustomResources/{myCustomResourceName}?api-version=2018-09-01-preview
Authorization: Bearer eyJ0e...
Content-Type: application/json ```
-The request will **not** be forwarded to the **endpoint**.
+The request will **not** be forwarded to the endpoint.
-Azure Custom Resource Provider Response:
+**Azure Custom Resource Provider response**:
``` HTTP HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
### Enumerate all custom resources
-Azure API Incoming Request:
+**Azure API incoming request**:
``` HTTP GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CustomProviders/resourceProviders/{resourceProviderName}/myCustomResources?api-version=2018-09-01-preview
Authorization: Bearer eyJ0e...
Content-Type: application/json ```
-This request will **not** be forwarded to the **endpoint**.
+The request will **not** be forwarded to the endpoint.
-Azure Custom Resource Provider Response:
+**Azure Custom Resource Provider response**:
``` HTTP HTTP/1.1 200 OK
azure-resource-manager Proxy Resource Endpoint Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/proxy-resource-endpoint-reference.md
description: Custom resource proxy reference for Azure Custom Resource Providers
Previously updated : 06/20/2019 Last updated : 05/13/2022
-# Custom Resource Proxy Reference
+# Custom resource proxy reference
-This article will go through the requirements for endpoints implementing proxy custom resources. If you are unfamiliar with Azure Custom Resource Providers, see [the overview on custom resource providers](overview.md).
+This article describes the requirements for endpoints that implement proxy custom resources. If you're unfamiliar with Azure Custom Resource Providers, see [the overview on custom resource providers](overview.md).
-## How to define a proxy resource endpoint
+## Define a proxy resource endpoint
-A proxy resource can be created by specifying the **routingType** to "Proxy".
+A proxy resource can be created by setting the `routingType` to "Proxy".
-Sample custom resource provider:
+**Sample custom resource provider**:
```JSON {
Sample custom resource provider:
} ```
-## Building proxy resource endpoint
+## Build a proxy resource endpoint
-An **endpoint** that implements a "Proxy" resource **endpoint** must handle the request and response for the new API in Azure. In this case, the **resourceType** will generate a new Azure resource API for `PUT`, `GET`, and `DELETE` to perform CRUD on a single resource, as well as `GET` to retrieve all existing resources.
+An endpoint that implements a "Proxy" resource must handle the request and response for the new API in Azure. In this case, the **resourceType** will generate a new Azure resource API for `PUT`, `GET`, and `DELETE` to perform CRUD on a single resource, as well as `GET` to retrieve all existing resources.
> [!NOTE]
-> The `id`, `name`, and `type` fields are not required, but are needed to integrate the custom resource with existing Azure ecosystem.
+> The `id`, `name`, and `type` fields are not required, but they're needed to integrate the custom resource with the existing Azure ecosystem.
-Sample resource:
+**Sample resource**:
``` JSON {
Sample resource:
} ```
-Parameter reference:
+**Parameter reference**:
Property | Sample | Description ---|---|---
id | '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/<br>pro
### Create a custom resource
-Azure API Incoming Request:
+**Azure API incoming request**:
``` HTTP PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CustomProviders/resourceProviders/{resource-provider-name}/myCustomResources/{myCustomResourceName}?api-version=2018-09-01-preview
Content-Type: application/json
} ```
-This request will then be forwarded to the **endpoint** in the form:
+This request will then be forwarded to the endpoint in the form:
``` HTTP PUT https://{endpointURL}/?api-version=2018-09-01-preview
X-MS-CustomProviders-RequestPath: /subscriptions/{subscriptionId}/resourceGroups
} ```
-Similarly, the response from the **endpoint** is then forwarded back to the customer. The response from the endpoint should return:
+The response from the endpoint is then forwarded back to the customer. The response should return:
- A valid JSON object document. All arrays and strings should be nested under a top object. - The `Content-Type` header should be set to "application/json; charset=utf-8".
-**Endpoint** Response:
+**Endpoint response**:
``` HTTP HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
} ```
-Azure Custom Resource Provider Response:
+**Azure Custom Resource Provider response**:
``` HTTP HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
### Remove a custom resource
-Azure API Incoming Request:
+**Azure API incoming request**:
``` HTTP Delete https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CustomProviders/resourceProviders/{resourceProviderName}/myCustomResources/{myCustomResourceName}?api-version=2018-09-01-preview
Authorization: Bearer eyJ0e...
Content-Type: application/json ```
-This request will then be forwarded to the **endpoint** in the form:
+This request will then be forwarded to the endpoint in the form:
``` HTTP Delete https://{endpointURL}/?api-version=2018-09-01-preview
Content-Type: application/json
X-MS-CustomProviders-RequestPath: /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CustomProviders/resourceProviders/{resourceProviderName}/myCustomResources/{myCustomResourceName} ```
-Similarly the response from the **endpoint** is then forwarded back to the customer. The response from the endpoint should return:
+The response from the endpoint is then forwarded back to the customer. The response should return:
- Valid JSON object document. All arrays and strings should be nested under a top object. - The `Content-Type` header should be set to "application/json; charset=utf-8".
-**Endpoint** Response:
+**Endpoint response**:
``` HTTP HTTP/1.1 200 OK Content-Type: application/json; charset=utf-8 ```
-Azure Custom Resource Provider Response:
+**Azure Custom Resource Provider response**:
``` HTTP HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
### Retrieve a custom resource
-Azure API Incoming Request:
+**Azure API incoming request**:
``` HTTP GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CustomProviders/resourceProviders/{resourceProviderName}/myCustomResources/{myCustomResourceName}?api-version=2018-09-01-preview
Authorization: Bearer eyJ0e...
Content-Type: application/json ```
-This request will then be forwarded to the **endpoint** in the form:
+This request will then be forwarded to the endpoint in the form:
``` HTTP GET https://{endpointURL}/?api-version=2018-09-01-preview
Content-Type: application/json
X-MS-CustomProviders-RequestPath: /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CustomProviders/resourceProviders/{resourceProviderName}/myCustomResources/{myCustomResourceName} ```
-Similarly, the response from the **endpoint** is then forwarded back to the customer. The response from the endpoint should return:
+The response from the endpoint is then forwarded back to the customer. The response should return:
- A valid JSON object document. All arrays and strings should be nested under a top object. - The `Content-Type` header should be set to "application/json; charset=utf-8".
-**Endpoint** Response:
+**Endpoint response**:
``` HTTP HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
} ```
-Azure Custom Resource Provider Response:
+**Azure Custom Resource Provider response**:
``` HTTP HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
### Enumerate all custom resources
-Azure API Incoming Request:
+**Azure API incoming request**:
``` HTTP GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CustomProviders/resourceProviders/{resourceProviderName}/myCustomResources?api-version=2018-09-01-preview
Authorization: Bearer eyJ0e...
Content-Type: application/json ```
-This request will then be forwarded to the **endpoint** in the form:
+This request will then be forwarded to the endpoint in the form:
``` HTTP GET https://{endpointURL}/?api-version=2018-09-01-preview
Content-Type: application/json
X-MS-CustomProviders-RequestPath: /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CustomProviders/resourceProviders/{resourceProviderName}/myCustomResources ```
-Similarly, the response from the **endpoint** is then forwarded back to the customer. The response from the endpoint should return:
+The response from the endpoint is then forwarded back to the customer. The response should return:
- A valid JSON object document. All arrays and strings should be nested under a top object. - The `Content-Type` header should be set to "application/json; charset=utf-8". - The list of resources should be placed under the top-level `value` property.
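The three response rules above can be sketched as a small helper that wraps resources in the expected envelope (the helper name is illustrative, not part of the API):

```python
import json


def list_response_body(resources: list) -> str:
    """Build the enumerate (collection GET) response body: the list of
    custom resources goes under the top-level `value` property, and the
    whole document is a single JSON object."""
    return json.dumps({"value": resources})
```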
-**Endpoint** Response:
+**Endpoint response**:
``` HTTP HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
} ```
-Azure Custom Resource Provider Response:
+**Azure Custom Resource Provider response**:
``` HTTP HTTP/1.1 200 OK
azure-resource-manager Reference Custom Providers Csharp Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/reference-custom-providers-csharp-endpoint.md
Previously updated : 01/14/2021 Last updated : 05/15/2022 # Custom provider C# RESTful endpoint reference
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-support-resources.md
Title: Move operation support by resource type description: Lists the Azure resource types that can be moved to a new resource group, subscription, or region. Previously updated : 05/04/2022 Last updated : 06/27/2022 # Move operation support for resources
Jump to a resource provider namespace:
> - [Microsoft.AlertsManagement](#microsoftalertsmanagement)
> - [Microsoft.AnalysisServices](#microsoftanalysisservices)
> - [Microsoft.ApiManagement](#microsoftapimanagement)
+> - [Microsoft.App](#microsoftapp)
> - [Microsoft.AppConfiguration](#microsoftappconfiguration)
> - [Microsoft.AppPlatform](#microsoftappplatform)
> - [Microsoft.AppService](#microsoftappservice)
Jump to a resource provider namespace:
> | reportfeedback | No | No | No |
> | service | Yes | Yes | Yes (using template) <br/><br/> [Move API Management across regions](../../api-management/api-management-howto-migrate.md). |
+## Microsoft.App
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Resource group | Subscription | Region move |
+> | - | -- | - | -- |
+> | managedenvironments | Yes | Yes | No |
+
## Microsoft.AppConfiguration

> [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> | services | No | No | No |
> | services / projects | No | No | No |
> | slots | No | No | No |
+> | sqlmigrationservices | No | No | No |
## Microsoft.DataProtection
Jump to a resource provider namespace:
> | - | -- | - | -- |
> | hypervsites | No | No | No |
> | importsites | No | No | No |
+> | mastersites | No | No | No |
> | serversites | No | No | No |
> | vmwaresites | No | No | No |
azure-resource-manager Resources Without Resource Group Limit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resources-without-resource-group-limit.md
Title: Resources without 800 count limit description: Lists the Azure resource types that can have more than 800 instances in a resource group. Previously updated : 06/03/2022 Last updated : 06/27/2022 # Resources not limited to 800 instances per resource group
Some resources have a limit on the number instances per region. This limit is di
## Microsoft.AlertsManagement
-* prometheusRuleGroups
* smartDetectorAlertRules

## Microsoft.Automation
azure-resource-manager Tag Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-support.md
Title: Tag support for resources description: Shows which Azure resource types support tags. Provides details for all Azure services. Previously updated : 06/03/2022 Last updated : 06/27/2022 # Tag support for Azure resources
To get the same data as a file of comma-separated values, download [tag-support.
> | alerts | No | No |
> | alertsMetaData | No | No |
> | migrateFromSmartDetection | No | No |
-> | prometheusRuleGroups | Yes | Yes |
> | smartDetectorAlertRules | Yes | Yes |
> | smartGroups | No | No |
azure-resource-manager Data Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/data-types.md
description: Describes the data types that are available in Azure Resource Manag
Previously updated : 04/27/2022 Last updated : 06/27/2022 # Data types in ARM templates
Objects start with a left brace (`{`) and end with a right brace (`}`). Each pro
}
```
+You can get a property from an object with dot notation.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "exampleObject": {
+ "type": "object",
+ "defaultValue": {
+ "name": "test name",
+ "id": "123-abc",
+ "isCurrent": true,
+ "tier": 1
+ }
+ }
+ },
+ "resources": [
+ ],
+ "outputs": {
+ "nameFromObject": {
+ "type": "string",
+ "value": "[parameters('exampleObject').name]"
+ }
+ }
+}
+```
+
## Strings

Strings are marked with double quotes.
azure-resource-manager Deployment Complete Mode Deletion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-complete-mode-deletion.md
Title: Complete mode deletion description: Shows how resource types handle complete mode deletion in Azure Resource Manager templates. Previously updated : 06/03/2022 Last updated : 06/27/2022 # Deletion of Azure resources for complete mode deployments
The resources are listed by resource provider namespace. To match a resource pro
> | alerts | No |
> | alertsMetaData | No |
> | migrateFromSmartDetection | No |
-> | prometheusRuleGroups | Yes |
> | smartDetectorAlertRules | Yes |
> | smartGroups | No |
azure-resource-manager Template Functions Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-deployment.md
Title: Template functions - deployment description: Describes the functions to use in an Azure Resource Manager template (ARM template) to retrieve deployment information. Previously updated : 03/10/2022 Last updated : 06/27/2022 # Deployment functions for ARM templates
Returns information about the Azure environment used for deployment.
In Bicep, use the [environment](../bicep/bicep-functions-deployment.md#environment) function.
+### Remarks
+
+To see a list of registered environments for your account, use [az cloud list](/cli/azure/cloud#az-cloud-list) or [Get-AzEnvironment](/powershell/module/az.accounts/get-azenvironment).
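As a minimal sketch, the function can be surfaced through a template output. For global Azure, `environment().name` returns `AzureCloud`:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [],
  "outputs": {
    "environmentName": {
      "type": "string",
      "value": "[environment().name]"
    }
  }
}
```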
+
### Return value

This function returns properties for the current Azure environment. The following example shows the properties for global Azure. Sovereign clouds may return slightly different properties.
azure-vmware Configure Storage Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-storage-policy.md
You'll run the `Get-StoragePolicy` cmdlet to list the vSAN based storage policie
## Set storage policy on VM
-You'll run the `Set-VMStoragePolicy` cmdlet to Modify vSAN based storage policies on an individual VM or on a group of VMs sharing a similar VM name. For example, if you have 3 VMs named "MyVM1", "MyVM2", "MyVM3", supplying "MyVM*" to the VMName parameter would change the StoragePolicy on all three VMs.
+You'll run the `Set-VMStoragePolicy` cmdlet to modify vSAN-based storage policies on a default cluster, individual VM, or group of VMs sharing a similar VM name. For example, if you have three VMs named "MyVM1", "MyVM2", and "MyVM3", supplying "MyVM*" to the VMName parameter would change the StoragePolicy on all three VMs.
+
> [!NOTE]
> You cannot use the vSphere Client to change the default storage policy or any existing storage policies for a VM.
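For example, a call applying a policy to the wildcard group of VMs described above might look like this sketch (the policy name `RAID-1 FTT-1` is a placeholder; substitute a policy name returned by `Get-StoragePolicy`):

```azurepowershell
# Hypothetical example: apply the "RAID-1 FTT-1" policy to MyVM1, MyVM2, and MyVM3
Set-VMStoragePolicy -StoragePolicyName "RAID-1 FTT-1" -VMName "MyVM*"
```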
azure-web-pubsub Tutorial Serverless Iot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-serverless-iot.md
Open function host index page: `http://localhost:7071/api/index` to view the rea
In this quickstart, you learned how to run a serverless chat application. Now you can start to build your own application.

> [!div class="nextstepaction"]
-> [Tutorial: Create a simple chatroom with Azure Web PubSub](https://azure.github.io/azure-webpubsub/getting-started/create-a-chat-app/js-handle-events)
+> [Tutorial: Create a simple chatroom with Azure Web PubSub](/azure/azure-web-pubsub/tutorial-build-chat)
> [!div class="nextstepaction"]
-> [Azure Web PubSub bindings for Azure Functions](https://azure.github.io/azure-webpubsub/references/functions-bindings)
+> [Azure Web PubSub bindings for Azure Functions](/azure/azure-web-pubsub/reference-functions-bindings)
> [!div class="nextstepaction"]
-> [Explore more Azure Web PubSub samples](https://github.com/Azure/azure-webpubsub/tree/main/samples)
+> [Explore more Azure Web PubSub samples](https://github.com/Azure/azure-webpubsub/tree/main/samples)
cloud-services Cloud Services Troubleshoot Location Not Found For Role Size https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-troubleshoot-location-not-found-for-role-size.md
Title: Troubleshoot LocationNotFoundForRoleSize when deploying a Cloud service (classic) to Azure | Microsoft Docs
+ Title: Troubleshoot allocation failures for Cloud service in Azure
description: This article shows how to resolve a LocationNotFoundForRoleSize exception when deploying a Cloud service (classic) to Azure. Previously updated : 02/22/2021 - Last updated : 06/06/2022 +
+- devx-track-azurepowershell
+- kr2b-contr-experiment
-# Troubleshoot LocationNotFoundForRoleSize when deploying a Cloud service (classic) to Azure
+# Troubleshoot LocationNotFoundForRoleSize when deploying a Cloud service to Azure
[!INCLUDE [Cloud Services (classic) deprecation announcement](includes/deprecation-announcement.md)]
-In this article, you'll troubleshoot allocation failures where a Virtual Machine (VM) size isn't available when you deploy an Azure Cloud service (classic).
+This article troubleshoots allocation failures where a virtual machine (VM) size isn't available when you deploy an Azure Cloud service (classic).
When you deploy instances to a Cloud service (classic) or add new web or worker role instances, Microsoft Azure allocates compute resources.
-You may occasionally receive errors during these operations even before you reach the Azure subscription limit.
+You might receive errors during these operations even before you reach the Azure subscription limit.
> [!TIP]
> The information may also be useful when you plan the deployment of your services.

## Symptom
-In Azure portal, navigate to your Cloud service (classic) and in the sidebar select *Operation log (classic)* to view the logs.
+In the [Azure portal](https://portal.azure.com/), navigate to your Cloud service (classic) and in the sidebar select **Operation log (classic)** to view the logs.
-![Image shows the Operation log (classic) blade.](./media/cloud-services-troubleshoot-location-not-found-for-role-size/cloud-services-troubleshoot-allocation-logs.png)
-When you're inspecting the logs of your Cloud service (classic), you'll see the following exception:
+When you inspect the logs of your Cloud service (classic), you'll see the following exception:
|Exception Type |Error Message |
|||
-|LocationNotFoundForRoleSize |The operation '`{Operation ID}`' failed: 'The requested VM tier is currently not available in Region (`{Region ID}`) for this subscription. Please try another tier or deploy to a different location.'.|
+|`LocationNotFoundForRoleSize` |The operation '`{Operation ID}`' failed: 'The requested VM tier is currently not available in Region (`{Region ID}`) for this subscription. Please try another tier or deploy to a different location.'.|
## Cause
-There's a capacity issue with the region or cluster that you're deploying to. The *LocationNotFoundForRoleSize* exception occurs when the resource SKU you've selected (VM size) isn't available for the region specified.
+There's a capacity issue with the region or cluster that you're deploying to. The `LocationNotFoundForRoleSize` exception occurs when the resource SKU you've selected, the virtual machine size, isn't available for the region specified.
-## Solution
+## Find SKUs in a region
-In this scenario, you should select a different region or SKU to deploy your Cloud service (classic) to. Before deploying or upgrading your Cloud service (classic), you can determine which SKUs are available in a region or availability zone. Follow the [Azure CLI](#list-skus-in-region-using-azure-cli), [PowerShell](#list-skus-in-region-using-powershell), or [REST API](#list-skus-in-region-using-rest-api) processes below.
+In this scenario, you should select a different region or SKU for your Cloud service (classic) deployment. Before you deploy or upgrade your Cloud service (classic), determine which SKUs are available in a region or availability zone. Follow the [Azure CLI](#list-skus-in-region-using-azure-cli), [PowerShell](#list-skus-in-region-using-powershell), or [REST API](#list-skus-in-region-using-rest-api) processes below.
### List SKUs in region using Azure CLI
-You can use the [az vm list-skus](/cli/azure/vm
-#az-vm-list-skus) command.
+You can use the [az vm list-skus](/cli/azure/vm#az-vm-list-skus) command.
- Use the `--location` parameter to filter output to the location you're using.
- Use the `--size` parameter to search by a partial size name.
- For more information, see the [Resolve error for SKU not available](../azure-resource-manager/templates/error-sku-not-available.md#solution-2azure-cli) guide.
- **For example:**
+This sample command produces the following results:
- ```azurecli
- az vm list-skus --location southcentralus --size Standard_F --output table
- ```
+```azurecli
+az vm list-skus --location southcentralus --size Standard_F --output table
+```
- **Example results:**
- ![Azure CLI output of running the 'az vm list-skus --location southcentralus --size Standard_F --output table' command, which shows the available SKUs.](./media/cloud-services-troubleshoot-constrained-allocation-failed/cloud-services-troubleshoot-constrained-allocation-failed-1.png)
### List SKUs in region using PowerShell
You can use the [Get-AzComputeResourceSku](/powershell/module/az.compute/get-azc
- You must have the latest version of PowerShell for this command.
- For more information, see the [Resolve error for SKU not available](../azure-resource-manager/templates/error-sku-not-available.md#solution-1powershell) guide.
-**For example:**
+This command filters by location:
```azurepowershell
Get-AzComputeResourceSku | where {$_.Locations -icontains "centralus"}
```
-**Some other useful commands:**
-
-Filter out the locations that contain size (Standard_DS14_v2):
+Find the locations that contain the size `Standard_DS14_v2`:
```azurepowershell
Get-AzComputeResourceSku | where {$_.Locations.Contains("centralus") -and $_.ResourceType.Contains("virtualMachines") -and $_.Name.Contains("Standard_DS14_v2")}
```
-Filter out all the locations that contain size (V3):
+Find the locations that contain the size `V3`:
```azurepowershell
Get-AzComputeResourceSku | where {$_.Locations.Contains("centralus") -and $_.ResourceType.Contains("virtualMachines") -and $_.Name.Contains("v3")} | fc
```
For more allocation failure solutions and to better understand how they're gener
> [!div class="nextstepaction"]
> [Allocation failures - Cloud service (classic)](cloud-services-allocation-failures.md)
-If your Azure issue isn't addressed in this article, visit the Azure forums on [MSDN and Stack Overflow](https://azure.microsoft.com/support/forums/). You can post your issue in these forums, or post to [@AzureSupport on Twitter](https://twitter.com/AzureSupport). You also can submit an Azure support request. To submit a support request, on the [Azure support](https://azure.microsoft.com/support/options/) page, select *Get support*.
+If your Azure issue isn't addressed in this article, visit the Azure forums on [MSDN and Stack Overflow](https://azure.microsoft.com/support/forums/). You can post your issue in these forums, or post to [@AzureSupport on Twitter](https://twitter.com/AzureSupport). You also can submit an Azure support request. To submit a support request, on the [Azure support](https://azure.microsoft.com/support/options/) page, select **Get support**.
cognitive-services Concept Face Recognition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-face-recognition.md
The recognition operations use mainly the following data structures. These objec
|[FaceList](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524b) or [LargeFaceList](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a157b68d2de3616c086f2cc)| This data structure is an assorted list of PersistedFace objects. A FaceList has a unique ID, a name string, and optionally a user data string.|
|[Person](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523c)| This data structure is a list of PersistedFace objects that belong to the same person. It has a unique ID, a name string, and optionally a user data string.|
|[PersonGroup](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395244) or [LargePersonGroup](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d)| This data structure is an assorted list of Person objects. It has a unique ID, a name string, and optionally a user data string. A PersonGroup must be [trained](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395249) before it can be used in recognition operations.|
-|PersonDirectory | This data structure is like **LargePersonGroup** but offers additional storage capacity and other added features. For more information, see [Use the PersonDirectory structure](./how-to/use-persondirectory.md).
+|PersonDirectory | This data structure is like **LargePersonGroup** but offers additional storage capacity and other added features. For more information, see [Use the PersonDirectory structure (preview)](./how-to/use-persondirectory.md).
## Recognition operations
cognitive-services Add Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/add-faces.md
The following features were explained and demonstrated:
In this guide, you learned how to add face data to a **PersonGroup**. Next, learn how to use the enhanced data structure **PersonDirectory** to do more with your face data.
-- [Use the PersonDirectory structure](use-persondirectory.md)
+- [Use the PersonDirectory structure (preview)](use-persondirectory.md)
cognitive-services Use Persondirectory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/use-persondirectory.md
ms.devlang: csharp
-# Use the PersonDirectory structure
+# Use the PersonDirectory structure (preview)
[!INCLUDE [Gate notice](../includes/identity-gate-notice.md)]
-To perform face recognition operations such as Identify and Find Similar, Face API customers need to create an assorted list of **Person** objects. The new **PersonDirectory** is a data structure that contains unique IDs, optional name strings, and optional user metadata strings for each **Person** identity added to the directory.
+To perform face recognition operations such as Identify and Find Similar, Face API customers need to create an assorted list of **Person** objects. The new **PersonDirectory** is a data structure in Public Preview that contains unique IDs, optional name strings, and optional user metadata strings for each **Person** identity added to the directory.
Currently, the Face API offers the **LargePersonGroup** structure, which has similar functionality but is limited to 1 million identities. The **PersonDirectory** structure can scale up to 75 million identities.
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/whats-new.md
The Computer Vision API v3.2 is now generally available with the following updat
> [!div class="nextstepaction"]
> [See Computer Vision v3.2 GA](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005)
-### PersonDirectory data structure
+### PersonDirectory data structure (preview)
* In order to perform face recognition operations such as Identify and Find Similar, Face API customers need to create an assorted list of **Person** objects. The new **PersonDirectory** is a data structure that contains unique IDs, optional name strings, and optional user metadata strings for each **Person** identity added to the directory. Currently, the Face API offers the **LargePersonGroup** structure which has similar functionality but is limited to 1 million identities. The **PersonDirectory** structure can scale up to 75 million identities.
* Another major difference between **PersonDirectory** and previous data structures is that you'll no longer need to make any Train calls after adding faces to a **Person** object&mdash;the update process happens automatically. For more details see [Use the PersonDirectory structure](how-to/use-persondirectory.md).
cognitive-services How To Custom Commands Setup Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-setup-speech-sdk.md
You'll also need:
## Step 2: Create a Visual Studio project
-Create a Visual Studio project for UWP development and [install the Speech SDK](/quickstarts/setup-platform.md?pivots=programming-language-csharp&tabs=uwp).
+Create a Visual Studio project for UWP development and [install the Speech SDK](/azure/cognitive-services/speech-service/quickstarts/setup-platform?pivots=programming-language-csharp&tabs=uwp).
## Step 3: Add sample code
cognitive-services How To Deploy And Use Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-deploy-and-use-endpoint.md
To copy your custom neural voice model to another project:
1. Select **View model** under the notification message for copy success.
1. On the **Train model** tab, select the newly copied model and then select **Deploy model**.
+## Switch to a new voice model in your product
+
+Once you've updated your voice model to the latest engine version, or if you want to switch to a new voice in your product, you need to redeploy the new voice model to a new endpoint. Redeploying a new voice model on your existing endpoint isn't supported. After deployment, switch the traffic to the newly created endpoint. We recommend that you transfer the traffic to the new endpoint in a test environment first to ensure that it works well, and then switch over in the production environment. During the transition, keep the old endpoint so that you can switch back if there are problems with the new one. If traffic has been running well on the new endpoint for about 24 hours (the recommended value), you can delete the old endpoint.
+
+> [!NOTE]
+> If your voice name is changed and you are using Speech Synthesis Markup Language (SSML), be sure to use the new voice name in SSML.
+
## Suspend and resume an endpoint

You can suspend or resume an endpoint, to limit spend and conserve resources that aren't in use. You won't be charged while the endpoint is suspended. When you resume an endpoint, you can use the same endpoint URL in your application to synthesize speech.
cognitive-services Quickstart Translator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/quickstart-translator.md
You can use any text editor to write Go applications. We recommend using the lat
> > If you're new to Go, try the [**Get started with Go**](/learn/modules/go-get-started/) Microsoft Learn module.
-1. If you haven't done so already, [download and install Go](https://go.dev/doc/install]).
+1. If you haven't done so already, [download and install Go](https://go.dev/doc/install).
* Download the Go version for your operating system.
* Once the download is complete, run the installer.
cognitive-services Translator Text Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/translator-text-apis.md
You can use any text editor to write Go applications. We recommend using the lat
> > If you're new to Go, try the [**Get started with Go**](/learn/modules/go-get-started/) Microsoft Learn module.
-1. If you haven't done so already, [download and install Go](https://go.dev/doc/install]).
+1. If you haven't done so already, [download and install Go](https://go.dev/doc/install).
* Download the Go version for your operating system.
* Once the download is complete, run the installer.
If you're encountering connection issues, it may be that your TLS/SSL certificat
## Next steps

> [!div class="nextstepaction"]
-> [Customize and improve translation](customization.md)
+> [Customize and improve translation](customization.md)
cognitive-services Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/authentication.md
Each request to an Azure Cognitive Service must include an authentication header. This header passes along a subscription key or access token, which is used to validate your subscription for a service or group of services. In this article, you'll learn about three ways to authenticate a request and the requirements for each.

* Authenticate with a [single-service](#authenticate-with-a-single-service-subscription-key) or [multi-service](#authenticate-with-a-multi-service-subscription-key) subscription key
-* Authenticate with a [token](#authenticate-with-an-authentication-token)
+* Authenticate with a [token](#authenticate-with-an-access-token)
* Authenticate with [Azure Active Directory (AAD)](#authenticate-with-azure-active-directory)

## Prerequisites
Let's quickly review the authentication headers available for use with Azure Cog
|--|-|
| Ocp-Apim-Subscription-Key | Use this header to authenticate with a subscription key for a specific service or a multi-service subscription key. |
| Ocp-Apim-Subscription-Region | This header is only required when using a multi-service subscription key with the [Translator service](./Translator/reference/v3-0-reference.md). Use this header to specify the subscription region. |
-| Authorization | Use this header if you are using an authentication token. The steps to perform a token exchange are detailed in the following sections. The value provided follows this format: `Bearer <TOKEN>`. |
+| Authorization | Use this header if you are using an access token. The steps to perform a token exchange are detailed in the following sections. The value provided follows this format: `Bearer <TOKEN>`. |
## Authenticate with a single-service subscription key
curl -X POST 'https://api.cognitive.microsofttranslator.com/translate?api-versio
--data-raw '[{ "text": "How much for the cup of coffee?" }]' | json_pp
```
-## Authenticate with an authentication token
+## Authenticate with an access token
-Some Azure Cognitive Services accept, and in some cases require, an authentication token. Currently, these services support authentication tokens:
+Some Azure Cognitive Services accept, and in some cases require, an access token. Currently, these services support access tokens:
* Text Translation API
* Speech
Some Azure Cognitive Services accept, and in some cases require, an authenticati
> QnA Maker also uses the Authorization header, but requires an endpoint key. For more information, see [QnA Maker: Get answer from knowledge base](./qnamaker/quickstarts/get-answer-from-knowledge-base-using-url-tool.md).

>[!WARNING]
-> The services that support authentication tokens may change over time, please check the API reference for a service before using this authentication method.
+> The services that support access tokens may change over time. Check the API reference for a service before using this authentication method.
-Both single service and multi-service subscription keys can be exchanged for authentication tokens. Authentication tokens are valid for 10 minutes.
+Both single service and multi-service subscription keys can be exchanged for access tokens in JSON Web Token (JWT) format. Access tokens are valid for 10 minutes.
-Authentication tokens are included in a request as the `Authorization` header. The token value provided must be preceded by `Bearer`, for example: `Bearer YOUR_AUTH_TOKEN`.
+Access tokens are included in a request as the `Authorization` header. The token value provided must be preceded by `Bearer`, for example: `Bearer YOUR_AUTH_TOKEN`.
### Sample requests
-Use this URL to exchange a subscription key for an authentication token: `https://YOUR-REGION.api.cognitive.microsoft.com/sts/v1.0/issueToken`.
+Use this URL to exchange a subscription key for an access token: `https://YOUR-REGION.api.cognitive.microsoft.com/sts/v1.0/issueToken`.
```cURL
curl -v -X POST \
```
These multi-service regions support token exchange:
- `westus`
- `westus2`
-After you get an authentication token, you'll need to pass it in each request as the `Authorization` header. This is a sample call to the Translator service:
+After you get an access token, you'll need to pass it in each request as the `Authorization` header. This is a sample call to the Translator service:
```cURL
curl -X POST 'https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=de' \
```
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/entity-linking/overview.md
Previously updated : 11/02/2021 Last updated : 06/15/2022
This documentation contains the following types of articles:
The result will be a collection of recognized entities in your text, with URLs to Wikipedia as an online knowledge base.
+
## Responsible AI

An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the [transparency note for entity linking](/legal/cognitive-services/language-service/transparency-note?context=/azure/cognitive-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/key-phrase-extraction/overview.md
Previously updated : 11/02/2021 Last updated : 06/15/2022
This documentation contains the following types of articles:
[!INCLUDE [Typical workflow for pre-configured language features](../includes/overview-typical-workflow.md)]
+
## Deploy on premises using Docker containers

Use the available Docker container to [deploy this feature on-premises](how-to/use-containers.md). These Docker containers enable you to bring the service closer to your data for compliance, security, or other operational reasons.
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/language-support.md
Previously updated : 11/02/2021 Last updated : 06/27/2022
Use this article to learn which natural languages are supported by the NER featu
| Finnish* | `fi` | 2019-10-01 | |
| French | `fr` | 2021-01-15 | |
| German | `de` | 2021-01-15 | |
-| Hebrew* | `he` | 2019-10-01 | |
| Hungarian* | `hu` | 2019-10-01 | |
| Italian | `it` | 2021-01-15 | |
| Japanese | `ja` | 2021-01-15 | |
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/overview.md
Previously updated : 11/02/2021 Last updated : 06/15/2022
Named Entity Recognition (NER) is one of the features offered by [Azure Cognitiv
[!INCLUDE [Typical workflow for pre-configured language features](../includes/overview-typical-workflow.md)]
+
## Responsible AI

An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the [transparency note for NER](/legal/cognitive-services/language-service/transparency-note-named-entity-recognition?context=/azure/cognitive-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/overview.md
Azure Cognitive Service for Language provides the following features:
> | [Key phrase extraction](key-phrase-extraction/overview.md) | This pre-configured feature evaluates unstructured text, and for each input document, returns a list of key phrases and main points in the text. | * [Language Studio](language-studio.md) <br> * [REST API and client-library](key-phrase-extraction/quickstart.md) <br> * [Docker container](key-phrase-extraction/how-to/use-containers.md) |
> |[Entity linking](entity-linking/overview.md) | This pre-configured feature disambiguates the identity of an entity found in text and provides links to the entity on Wikipedia. | * [Language Studio](language-studio.md) <br> * [REST API and client-library](entity-linking/quickstart.md) |
> | [Text Analytics for health](text-analytics-for-health/overview.md) | This pre-configured feature extracts information from unstructured medical texts, such as clinical notes and doctor's notes. | * [Language Studio](language-studio.md) <br> * [REST API and client-library](text-analytics-for-health/quickstart.md) <br> * [Docker container](text-analytics-for-health/how-to/use-containers.md) |
-> | [Custom NER](custom-named-entity-recognition/overview.md) | Build an AI model to extract custom entity categories, using unstructured text that you provide. | * [Language Studio](custom-named-entity-recognition/quickstart.md?pivots=language-studio) <br> * [REST API](custom-named-entity-recognition/quickstart.md?pivots=rest-api) |
+> | [Custom NER](custom-named-entity-recognition/overview.md) | Build an AI model to extract custom entity categories, using unstructured text that you provide. | * [Language Studio](custom-named-entity-recognition/quickstart.md?pivots=language-studio) <br> * [REST API](custom-named-entity-recognition/quickstart.md?pivots=rest-api)<br> * [client-library (prediction only)](custom-named-entity-recognition/how-to/call-api.md) |
> | [Analyze sentiment and opinions](sentiment-opinion-mining/overview.md) | This pre-configured feature provides sentiment labels (such as "*negative*", "*neutral*" and "*positive*") for sentences and documents. This feature can additionally provide granular information about the opinions related to words that appear in the text, such as the attributes of products or services. | * [Language Studio](language-studio.md) <br> * [REST API and client-library](sentiment-opinion-mining/quickstart.md) <br> * [Docker container](sentiment-opinion-mining/how-to/use-containers.md) > |[Language detection](language-detection/overview.md) | This pre-configured feature evaluates text, and determines the language it was written in. It returns a language identifier and a score that indicates the strength of the analysis. | * [Language Studio](language-studio.md) <br> * [REST API and client-library](language-detection/quickstart.md) <br> * [Docker container](language-detection/how-to/use-containers.md) |
-> |[Custom text classification](custom-classification/overview.md) | Build an AI model to classify unstructured text into custom classes that you define. | * [Language Studio](custom-classification/quickstart.md?pivots=language-studio)<br> * [REST API](language-detection/quickstart.md?pivots=rest-api) |
+> |[Custom text classification](custom-classification/overview.md) | Build an AI model to classify unstructured text into custom classes that you define. | * [Language Studio](custom-classification/quickstart.md?pivots=language-studio)<br> * [REST API](custom-classification/quickstart.md?pivots=rest-api) <br> * [client-library (prediction only)](custom-text-classification/how-to/call-api.md) |
> | [Document summarization (preview)](summarization/overview.md?tabs=document-summarization) | This pre-configured feature extracts key sentences that collectively convey the essence of a document. | * [Language Studio](language-studio.md) <br> * [REST API and client-library](summarization/quickstart.md) | > | [Conversation summarization (preview)](summarization/overview.md?tabs=conversation-summarization) | This pre-configured feature summarizes issues and summaries in transcripts of customer-service conversations. | * [Language Studio](language-studio.md) <br> * [REST API](summarization/quickstart.md?tabs=rest-api) |
-> | [Conversational language understanding](conversational-language-understanding/overview.md) | Build an AI model to bring the ability to understand natural language into apps, bots, and IoT devices. | * [Language Studio](conversational-language-understanding/quickstart.md)
+> | [Conversational language understanding](conversational-language-understanding/overview.md) | Build an AI model to bring the ability to understand natural language into apps, bots, and IoT devices. | * [Language Studio](conversational-language-understanding/quickstart.md?pivots=language-studio) <br> * [REST API](conversational-language-understanding/quickstart.md?pivots=rest-api) <br> * [client-library (prediction only)](conversational-language-understanding/how-to/call-api.md) |
> | [Question answering](question-answering/overview.md) | This pre-configured feature provides answers to questions extracted from text input, using semi-structured content such as: FAQs, manuals, and documents. | * [Language Studio](language-studio.md) <br> * [REST API and client-library](question-answering/quickstart/sdk.md) |
-> | [Orchestration workflow](orchestration-workflow/overview.md) | Train language models to connect your applications to question answering, conversational language understanding, and LUIS | * [Language Studio](orchestration-workflow/quickstart.md?pivots=language-studio) <br> * [REST API](orchestration-workflow/quickstart.md?pivots=rest-api) |
+> | [Orchestration workflow](orchestration-workflow/overview.md) | Train language models to connect your applications to question answering, conversational language understanding, and LUIS | * [Language Studio](orchestration-workflow/quickstart.md?pivots=language-studio) <br> * [REST API](orchestration-workflow/quickstart.md?pivots=rest-api) <br> * [client-library (prediction only)](orchestration-workflow/how-to/call-api.md) |
## Tutorials
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/overview.md
Previously updated : 11/02/2021 Last updated : 06/15/2022
PII detection is one of the features offered by [Azure Cognitive Service for Lan
[!INCLUDE [Typical workflow for pre-configured language features](../includes/overview-typical-workflow.md)] + ## Responsible AI An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the [transparency note for PII](/legal/cognitive-services/language-service/transparency-note-personally-identifiable-information?context=/azure/cognitive-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/sentiment-opinion-mining/overview.md
Previously updated : 11/02/2021 Last updated : 06/15/2022
Opinion mining is a feature of sentiment analysis. Also known as aspect-based se
An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the [transparency note for sentiment analysis](/legal/cognitive-services/language-service/transparency-note-sentiment-analysis?context=/azure/cognitive-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information: ## Next steps
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/overview.md
Previously updated : 03/01/2022 Last updated : 06/15/2022
To use this feature, you submit raw unstructured text for analysis and handle th
* Text Analytics for health takes raw unstructured text for analysis. See [Data and service limits](../concepts/data-limits.md) for more information. * Text Analytics for health works with a variety of written languages. See [language support](language-support.md) for more information.
-## Reference documentation and code samples
-As you use Text Analytics for health in your applications, see the following reference documentation and samples for Azure Cognitive Services for Language:
-
-|Development option / language |Reference documentation |Samples |
-||||
-|REST API | [REST API documentation](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-2-Preview-2/operations/Analyze) | |
-|C# | [C# documentation](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/textanalytics/Azure.AI.TextAnalytics/samples) |
-| Java | [Java documentation](/java/api/overview/azure/ai-textanalytics-readme?view=azure-java-preview&preserve-view=true) | [Java Samples](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/textanalytics/azure-ai-textanalytics/src/samples) |
-|JavaScript | [JavaScript documentation](/javascript/api/overview/azure/ai-text-analytics-readme?view=azure-node-preview&preserve-view=true) | [JavaScript samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/textanalytics/ai-text-analytics/samples/v5) |
-|Python | [Python documentation](/python/api/overview/azure/ai-textanalytics-readme?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/textanalytics/azure-ai-textanalytics/samples) |
## Responsible AI
communication-services Connect Email Communication Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/connect-email-communication-resource.md
In this quick start, you'll learn about how to connect a verified domain in Azur
- Verified Domain :::image type="content" source="./media/email-domains-connect-select.png" alt-text="Screenshot that shows how to filter and select one of the verified email domains to connect." lightbox="media/email-domains-connect-select-expanded.png":::
+> [!Note]
+> You can only connect domains in the same geography. Ensure that the data location selected during resource creation is the same for both the Communication resource and the Email Communication resource.
+ 5. Click Connect :::image type="content" source="./media/email-domains-connected.png" alt-text="Screenshot that shows one of the verified email domain is now connected." lightbox="media/email-domains-connected-expanded.png":::
communication-services Get Started Raw Media Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-raw-media-access.md
[!INCLUDE [Public Preview](../../includes/public-preview-include-document.md)]
-In this quickstart, you'll learn how implement raw media access using the Azure Communication Services Calling SDK for Android.
+In this quickstart, you'll learn how to implement raw media access using the Azure Communication Services Calling SDK for Android.
The Azure Communication Services Calling SDK offers APIs allowing apps to generate their own video frames to send to remote participants.
Since the app will be generating the video frames, the app must inform the Azure
The app must register a delegate to be notified when it should start or stop producing video frames. The delegate event informs the app which video format is more appropriate for the current network conditions.
+### Supported Video Resolutions
+
+| Aspect Ratio | Resolution | Maximum FPS |
+| :--: | :-: | :-: |
+| 16x9 | 1080p | 30 |
+| 16x9 | 720p | 30 |
+| 16x9 | 540p | 30 |
+| 16x9 | 480p | 30 |
+| 16x9 | 360p | 30 |
+| 16x9 | 270p | 15 |
+| 16x9 | 240p | 15 |
+| 16x9 | 180p | 15 |
+| 4x3 | VGA (640x480) | 30 |
+| 4x3 | 424x320 | 15 |
+| 4x3 | QVGA (320x240) | 15 |
+| 4x3 | 212x160 | 15 |
+ The following is an overview of the steps required to create a virtual video stream.
-1. Create an array of `VideoFormat` with the video formats supported by the app. It is fine to have only one video format supported, but at least one of the provided video formats must be of the `VideoFrameKind::VideoSoftware` type. When multiple formats are provided, the order of the format in the list does not influence or prioritize which one will be used. The selected format is based on external factors like network bandwidth.
+1. Create an array of `VideoFormat` with the video formats supported by the app. It is fine to have only one video format supported, but at least one of the provided video formats must be of the `VideoFrameKind::VideoSoftware` type. When multiple formats are provided, the order of the format in the list doesn't influence or prioritize which one will be used. The selected format is based on external factors like network bandwidth.
```java ArrayList<VideoFormat> videoFormats = new ArrayList<VideoFormat>();
The following is an overview of the steps required to create a virtual video str
rawOutgoingVideoStreamOptions.setVideoFormats(videoFormats); ```
-3. Subscribe to `RawOutgoingVideoStreamOptions::addOnOutgoingVideoStreamStateChangedListener` delegate. This delegate will inform the state of the current stream, its important that you do not send frames if the state is no equal to `OutgoingVideoStreamState.STARTED`.
+3. Subscribe to the `RawOutgoingVideoStreamOptions::addOnOutgoingVideoStreamStateChangedListener` delegate. This delegate reports the state of the current stream. It's important that you don't send frames if the state isn't equal to `OutgoingVideoStreamState.STARTED`.
```java private OutgoingVideoStreamState outgoingVideoStreamState;
Repeat steps `1 to 4` from the previous VirtualRawOutgoingVideoStream tutorial.
Since the Android system generates the frames, you must implement your own foreground service to capture the frames and send them using the Azure Communication Services Calling API.
+### Supported Video Resolutions
+
+| Aspect Ratio | Resolution | Maximum FPS |
+| :--: | :-: | :-: |
+| Anything | Anything | 30 |
+ The following is an overview of the steps required to create a screen share video stream. 1. Add this permission to your `Manifest.xml` file inside your Android project
The following is an overview of the steps required to create a screen share vide
screenShareRawOutgoingVideoStream = new ScreenShareRawOutgoingVideoStream(rawOutgoingVideoStreamOptions); ```
-3. Request needed permissions for screen capture on Android, once this method is called Android will call automatically `onActivityResult` containing the request code we have sent and the result of the operation, expect `Activity.RESULT_OK` if the permission has been provided by the user if so attach the screenShareRawOutgoingVideoStream to the call and start your own foreground service to capture the frames.
+3. Request the needed permissions for screen capture on Android. Once this method is called, Android automatically calls `onActivityResult` with the request code we sent and the result of the operation. Expect `Activity.RESULT_OK` if the user granted the permission; if so, attach the screenShareRawOutgoingVideoStream to the call and start your own foreground service to capture the frames.
```java public void GetScreenSharePermissions() {
container-apps Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/networking.md
As you create a custom VNET, keep in mind the following situations:
- You can define the subnet range used by the Container Apps environment. - Once the environment is created, the subnet range is immutable.
- - A single load balancer and single Kubernetes service are associated with each container apps environment.
- Each [revision](revisions.md) is assigned an IP address in the subnet. - You can restrict inbound requests to the environment exclusively to the VNET by deploying the environment as [internal](vnet-custom-internal.md).
cosmos-db Use Regional Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/use-regional-endpoints.md
Global database account CNAME always points to a valid write region. During serv
The easiest way to get the list of regions for an Azure Cosmos DB Graph account is the overview blade in the Azure portal. This approach works for applications that don't change regions often, or that have a way to update the list via application configuration. The example below demonstrates the general principles of accessing a regional Gremlin endpoint. The application should consider the number of regions to send traffic to, and the number of corresponding Gremlin clients to instantiate.
cosmos-db Create Sql Api Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-go.md
go get github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos
**Authenticate the client** ```go
- var endpoint = "<azure_cosmos_uri>"
- var key = "<azure_cosmos_primary_key"
-
- cred, err := azcosmos.NewKeyCredential(key)
- if err != nil {
- log.Fatal("Failed to create a credential: ", err)
- }
-
- // Create a CosmosDB client
- client, err := azcosmos.NewClientWithKey(endpoint, cred, nil)
- if err != nil {
- log.Fatal("Failed to create cosmos client: ", err)
- }
-
- // Create database client
- databaseClient, err := client.NewDatabase("<databaseName>")
- if err != nil {
- log.fatal("Failed to create database client:", err)
- }
-
- // Create container client
- containerClient, err := client.NewContainer("<databaseName>", "<containerName>")
- if err != nil {
- log.fatal("Failed to create a container client:", err)
- }
+var endpoint = "<azure_cosmos_uri>"
+var key = "<azure_cosmos_primary_key>"
+
+cred, err := azcosmos.NewKeyCredential(key)
+if err != nil {
+ log.Fatal("Failed to create a credential: ", err)
+}
+
+// Create a CosmosDB client
+client, err := azcosmos.NewClientWithKey(endpoint, cred, nil)
+if err != nil {
+ log.Fatal("Failed to create cosmos client: ", err)
+}
+
+// Create database client
+databaseClient, err := client.NewDatabase("<databaseName>")
+if err != nil {
+ log.Fatal("Failed to create database client:", err)
+}
+
+// Create container client
+containerClient, err := client.NewContainer("<databaseName>", "<containerName>")
+if err != nil {
+ log.Fatal("Failed to create a container client:", err)
+}
``` **Create a Cosmos database**
databaseProperties := azcosmos.DatabaseProperties{ID: "<databaseName>"}
databaseResp, err := client.CreateDatabase(context.TODO(), databaseProperties, nil) if err != nil {
- panic(err)
+ log.Fatal(err)
} ```
if err != nil {
```go database, err := client.NewDatabase("<databaseName>") //returns struct that represents a database. if err != nil {
- panic(err)
+ log.Fatal(err)
} properties := azcosmos.ContainerProperties{
properties := azcosmos.ContainerProperties{
resp, err := database.CreateContainer(context.TODO(), properties, nil) if err != nil {
- panic(err)
+ log.Fatal(err)
} ```
if err != nil {
```go container, err := client.NewContainer("<databaseName>", "<containerName>") if err != nil {
- panic(err)
+ log.Fatal(err)
} pk := azcosmos.NewPartitionKeyString("personal") //specifies the value of the partition key
item := map[string]interface{}{
marshalled, err := json.Marshal(item) if err != nil {
- panic(err)
+ log.Fatal(err)
} itemResponse, err := container.CreateItem(context.TODO(), pk, marshalled, nil) if err != nil {
- panic(err)
+ log.Fatal(err)
} ```
if err != nil {
```go getResponse, err := container.ReadItem(context.TODO(), pk, "1", nil) if err != nil {
- panic(err)
+ log.Fatal(err)
} var getResponseBody map[string]interface{}
-err = json.Unmarshal([]byte(getResponse.Value), &getResponseBody)
+err = json.Unmarshal(getResponse.Value, &getResponseBody)
if err != nil {
- panic(err)
+ log.Fatal(err)
} fmt.Println("Read item with Id 1:")
for key, value := range getResponseBody {
```go delResponse, err := container.DeleteItem(context.TODO(), pk, "1", nil)- if err != nil {
- panic(err)
+ log.Fatal(err)
} ```
cost-management-billing Ea Portal Agreements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-portal-agreements.md
Title: Azure EA agreements and amendments
description: This article explains how Azure EA agreements and amendments affect your Azure EA portal use. Previously updated : 04/28/2022 Last updated : 06/27/2022
The start date of a new Azure Prepayment (previously called monetary commitment)
An enrollment has one of the following status values. Each value determines how you can use and access an enrollment. The enrollment status indicates the stage of your enrollment: whether it needs to be activated before it can be used, or whether the initial period has expired and you're charged for usage overage.
-**Pending** - The enrollment administrator needs to sign in to the Azure EA portal. Once signed in, the enrollment switches to **Active** status.
+**Pending** - The enrollment administrator needs to sign in to the Azure EA portal. After the administrator signs in, the enrollment switches to **Active** status.
**Active** - The enrollment is accessible and usable. You can create accounts and subscriptions in the Azure EA portal. Direct customers can create departments, accounts and subscriptions in the [Azure portal](https://portal.azure.com). The enrollment remains active until the enterprise agreement end date. **Indefinite Extended Term** - Indefinite extended term status occurs after the enterprise agreement end date is reached and is expired. When an agreement enters into an extended term, it doesn't receive discounted pricing. Instead, pricing is at retail rates. Before the EA enrollment reaches the enterprise agreement end date, the Enrollment Administrator should decide to: -- Renew the enrollment by adding additional Azure Prepayment
+- Renew the enrollment by adding more Azure Prepayment
- Transfer the existing enrollment to a new enrollment - Migrate to the Microsoft Online Subscription Program (MOSP) - Confirm disablement of all services associated with the enrollment
-EA credit expires when the EA enrollment ends.
+EA credit expires when the EA enrollment ends for all programs except the EU program.
**Expired** - The EA enrollment expires when it reaches the enterprise agreement end date and is opted out of the extended term. Sign a new enrollment contract as soon as possible. Although your service won't be disabled immediately, there's a risk of it getting disabled.
The LSP provides a single percentage number in the EA portal.  All commercial i
### When to use a markup
-Use the feature if you set the same markup percentage on ALL commercial transactions in the EA. i.e. – if you mark-up the Azure Prepayment information, the meter rates, the order information, etc.
+Use the feature if you set the same markup percentage on *all* commercial transactions in the EA. For example, if you mark up the Azure Prepayment information, the meter rates, the order information, and so on.
Don't use the markup feature if: - You use different rates between Azure Prepayment and meter rates. - You use different rates for different meters.
-If you're using different rates for different meters, we recommend developing a custom solution based on the API Key, which can be provided by the customer, to pull consumption data and provide reports.
+If you're using different rates for different meters, we recommend developing a custom solution based on the API key. The API key can be provided by the customer to pull consumption data and provide reports.
### Other important information This feature is meant to provide an estimation of the Azure cost to the end customer. The LSP is responsible for all financial transactions with the customer under the EA.
-Please make sure to review the commercial information - monetary balance information, price list, etc. before publishing the marked-up prices to end customer.
+Make sure to review the commercial information - monetary balance information, price list, etc. before publishing the marked-up prices to end customer.
### How to add a price markup
Review the markup price in the _Usage Summary_ for the Prepayment term in the cu
1. Review the prices in the price sheet. 1. Changes can be made before publishing by selecting **Edit** on _View Usage Summary > Customer View_ tab.  
-Both the service prices and the Prepayment balances will be marked up by the same percentages. If you have different percentages for monetary balance and meter rates, or different percentages for different services, then please don't use this feature.
+Both the service prices and the Prepayment balances will be marked up by the same percentages. If you have different percentages for monetary balance and meter rates, or different percentages for different services, then don't use this feature.
**Step Three: Publish** After pricing is reviewed and validated, select **Publish**.  
-Pricing with markup will be available to enterprise administrators immediately after selecting publish. Edits can't be made to markup. You must disable markup and begin from Step One.
+Pricing with markup will be available to enterprise administrators immediately after selecting publish. Edits can't be made to markup. You must disable markup and begin from the first step.
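Because the same percentage is applied to both service prices and Prepayment balances, the customer-facing figure is a single multiplication. The sketch below illustrates that arithmetic; the function name, rates, and the absence of rounding are all illustrative assumptions, not portal behavior.

```go
package main

import "fmt"

// applyMarkup returns the customer-facing amount after applying a single
// markup percentage, as the markup feature does for all commercial
// transactions in the EA. Values here are illustrative only.
func applyMarkup(baseAmount, markupPercent float64) float64 {
	return baseAmount * (1 + markupPercent/100)
}

func main() {
	// e.g., a $0.10/hour meter rate with a 10% partner markup
	fmt.Printf("%.4f\n", applyMarkup(0.10, 10))
}
```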
### Which enrollments have a markup enabled?
Partners can use the markup feature (on Azure EA) after a Change of Channel Part
| **Resource** | **Default Quota** | **Comments** | | | | |
-| Microsoft Azure Compute Instances | 20 concurrent small compute instances or their equivalent of the other compute instance sizes. | The following table provides how to calculate the equivalent number of small instances:<ul><li> Extra Small - 1 equivalent small instance </li><li> Small - 1 equivalent small instance </li><li> Medium - 2 equivalent small instances </li><li> Large - 4 equivalent small instances </li><li> Extra Large - 8 equivalent small instances </li> </ul>|
-| Microsoft Azure Compute Instances v2 VM's | EA: 350 Cores | GA IaaS v2 VMs:<ul><li> A0\_A7 family - 350 cores </li><li> B\_A0\_A4 family - 350 cores </li><li> A8\_A9 family - 350 cores </li><li> DF family - 350 cores</li><li> GF - 350 cores </li></ul>|
-| Microsoft Azure Hosted Services | 6 hosted services | This limit of hosted services cannot be increased beyond six for an individual subscription. If you require additional hosted services, please add additional subscriptions. |
-| Microsoft Azure Storage | 5 storage accounts, each of a maximum size of 100 TB each. | You can increase the number of storage accounts to up to 20 per subscription. If you require additional storage accounts, please add additional subscriptions. |
-| SQL Azure | 149 databases of either type (i.e., Web Edition or Business Edition). | |
+| Microsoft Azure Compute Instances | 20 concurrent small compute instances or their equivalent of the other compute instance sizes. | The following table provides how to calculate the equivalent number of small instances:<ul><li> Extra Small - one equivalent small instance </li><li> Small - one equivalent small instance </li><li> Medium - two equivalent small instances </li><li> Large - four equivalent small instances </li><li> Extra Large - eight equivalent small instances </li> </ul>|
+| Microsoft Azure Compute Instances v2 VMs | EA: 350 Cores | GA IaaS v2 VMs:<ul><li> A0\_A7 family - 350 cores </li><li> B\_A0\_A4 family - 350 cores </li><li> A8\_A9 family - 350 cores </li><li> DF family - 350 cores</li><li> GF - 350 cores </li></ul>|
+| Microsoft Azure Hosted Services | six hosted services | This limit of hosted services can't be increased beyond six for an individual subscription. If you require more hosted services, add more subscriptions. |
+| Microsoft Azure Storage | Five storage accounts, each of a maximum size of 100 TB each. | You can increase the number of storage accounts to up to 20 per subscription. If you require more storage accounts, add more subscriptions. |
+| SQL Azure | 149 databases of either type (for example, Web Edition or Business Edition). | |
| Access Control | 50 Namespaces per account. 100 million Access Control transactions per month | | | Service Bus | 50 Namespaces per account. 40 Service Bus connections | Customers purchasing Service Bus connections through connection packs will have quotas equal to the midpoint between the connection pack they purchased and the next highest connection pack amount. Customers choosing a 500 Pack will have a quota of 750. | ## Resource Prepayment
-Microsoft will provide services to you up to at least the level of the associated usage included in the monthly Prepayment that you purchased (the Service Prepayment), but all other increases in usage levels of service resources (e.g. adding to the number of compute instances running, or increasing the amount of storage in use) are subject to the availability of these service resources.
+Microsoft will provide services to you up to at least the level of the associated usage included in the monthly Prepayment that you purchased (the Service Prepayment). However, all other increases in usage levels of service resources are subject to the availability of these service resources. For example, adding to the number of compute instances running, or increasing the amount of storage in use.
-Any quota described above is not a Service Prepayment. For purposes of determining the number of simultaneous small compute instances (or their equivalent) that Microsoft will provide as part of a Service Prepayment, this is determined by dividing the number of committed small compute instance hours purchased for a month by the number of hours in the shortest month of the year (i.e., February – 672 hours).
+Quotas described above aren't Service Prepayment. You can determine the number of simultaneous small compute instances, or their equivalent, that Microsoft provides as part of a Service Prepayment. Divide the number of committed small compute instance hours purchased for a month by the number of hours in the shortest month of the year. For example, February – 672 hours.
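The division described above can be sketched as a one-line calculation. The function name and the sample committed-hours figure are illustrative; only the 672-hour divisor (February: 28 days × 24 hours) comes from the text.

```go
package main

import "fmt"

// simultaneousSmallInstances divides the committed small-compute-instance
// hours purchased for a month by the hours in the shortest month of the
// year (February: 28 days * 24 hours = 672), per the rule described above.
func simultaneousSmallInstances(committedHours int) int {
	const shortestMonthHours = 672
	return committedHours / shortestMonthHours
}

func main() {
	fmt.Println(simultaneousSmallInstances(6720)) // 6720 / 672 prints 10
}
```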
## Requesting a quota increase You can request a quota increase at any time by submitting an [online request](https://portal.azure.com/). To process your request, provide the following information: -- The Microsoft account or work or school account associated with the account owner of your subscription. This is the email address utilized to sign in to the Microsoft Azure portal to manage your subscription(s). Please also identify that this account is associated with an EA enrollment.
+- The Microsoft account or work or school account associated with the account owner of your subscription. It's the email address used to sign in to the Microsoft Azure portal to manage your subscription(s). Verify that the account is associated with an EA enrollment.
- The resource(s) and amount for which you desire a quota increase. - The Azure Developer Portal Subscription ID associated with your service.
- - For information on how to obtain your subscription ID, please [contact support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
+ - For information on how to obtain your subscription ID, [contact support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
## Plan SKUs Plan SKUs offer the ability to purchase a suite of integrated services together at a discounted rate. The Plan SKUs are designed to complement each other through further integrated offerings and suite for greater cost savings.
-One example would be the Operations Management Suite (OMS) subscription. OMS offers a simple way to access a full set of cloud-based management capabilities, including analytics, configuration, automation, security, backup, and disaster recovery. OMS subscriptions include rights to System Center components to provide a complete solution for hybrid cloud environments.
+One example would be the Operations Management Suite (OMS) subscription. OMS offers a simple way to access a full set of cloud-based management capabilities. It includes analytics, configuration, automation, security, backup, and disaster recovery. OMS subscriptions include rights to System Center components to provide a complete solution for hybrid cloud environments.
-Enterprise Administrators can assign Account Owners to provision previously purchased Plan SKUs in the Enterprise Portal by following these steps:
+Enterprise Administrators can assign Account Owners to prepare previously purchased Plan SKUs in the Enterprise portal by following these steps:
### View the price sheet to check included quantity 1. Sign in as an Enterprise Administrator. 1. Select **Reports** on the left navigation. 1. Select the **Price Sheet** tab.
-1. Select the 'Download' icon in the top-right corner.
-1. Find the corresponding Plan SKU part numbers with filter on column "Included Quantity" and select values greater than "0".
+1. Select the **Download** symbol in the top-right corner of the page.
+1. Find the corresponding Plan SKU part numbers with filter on column **Included Quantity** and select values greater than 0 (zero).
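If you export the downloaded price sheet as CSV, the same filter can be scripted. A minimal Node.js sketch, assuming the export has **Part Number** and **Included Quantity** columns (column names are assumptions about the export format, not guaranteed):

```javascript
// A sketch of the "Included Quantity greater than 0" filter applied to a
// price sheet exported as CSV. The column names "Part Number" and
// "Included Quantity" are assumptions about the export format.
function planSkuPartNumbers(csv) {
  const lines = csv.trim().split("\n");
  const headers = lines[0].split(",");
  const qtyIdx = headers.indexOf("Included Quantity");
  const partIdx = headers.indexOf("Part Number");
  return lines
    .slice(1)
    .map(line => line.split(","))
    .filter(cols => Number(cols[qtyIdx]) > 0)
    .map(cols => cols[partIdx]);
}

// Illustrative rows only; real price sheets have many more columns.
const sheet = [
  "Part Number,Service Name,Included Quantity",
  "AAA-11111,Operational Insights Premium Data Analyzed,100",
  "BBB-22222,Virtual Machines,0"
].join("\n");

console.log(planSkuPartNumbers(sheet)); // part numbers with included units
```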
Direct customers can view the price sheet in the Azure portal. See [view price sheet in Azure portal](ea-pricing.md#download-pricing-for-an-enterprise-agreement).
1. Select **+Add Subscription**.
1. Select **Purchase**.
-The first time you add a subscription to an account, you'll need to provide your contact information. When adding later subscriptions, your contact information will be populated for you.
+The first time you add a subscription to an account, you'll need to provide your contact information. When you add more subscriptions later, your contact information will be populated for you.
-The first time you add a subscription to your account, you'll be asked to accept the MOSA agreement and a Rate Plan. These sections are NOT Applicable to Enterprise Agreement Customers, but are currently necessary to provision your subscription. Your Microsoft Azure Enterprise Agreement Enrollment Amendment supersedes the above items and your contractual relationship won't change. Please check the box indicating you accept the terms.
+The first time you add a subscription to your account, you'll be asked to accept the MOSA agreement and a Rate Plan. These sections aren't applicable to Enterprise Agreement customers, but are currently necessary to create your subscription. Your Microsoft Azure Enterprise Agreement Enrollment Amendment supersedes the above items and your contractual relationship won't change. Select the box indicating you accept the terms.
**Step Two: Update subscription name**
-All new subscriptions will be added with the default "Microsoft Azure Enterprise" subscription name. It's important to update the subscription name to differentiate it from the other subscriptions within your Enterprise Enrollment and ensure that it's recognizable on reports at the enterprise level.
+All new subscriptions will be added with the default *Microsoft Azure Enterprise* subscription name. It's important to update the subscription name to differentiate it from the other subscriptions within your Enterprise Enrollment and ensure that it's recognizable on reports at the enterprise level.
Select **Subscriptions**, select the subscription you created, and then select **Edit Subscription Details.**
Direct customer can create and edit subscription in Azure portal. See [manage su
**Account owner showing in pending status**
-When new Account Owners (AO) are added to the enrollment for the first time, they'll always show as "pending" under status. Upon receiving the activation welcome email, the AO can sign in to activate their account. This activation will update their account status from "pending" to "active".
+When new Account Owners (AO) are added to the enrollment for the first time, they'll always show as `pending` under status. When the activation welcome email arrives, the AO can sign in to activate their account. This activation will update their account status from `pending` to `active`.
**Usages being charged after Plan SKUs are purchased**

This scenario occurs when the customer has deployed services under the wrong enrollment number or selected the wrong services.
-To validate if you're deploying under the right enrollment, you can check your included units information via the price sheet. Please sign in as an Enterprise Administrator and select **Reports** on the left navigation and select **Price Sheet** tab. Select the Download symbol in the top-right corner and find the corresponding Plan SKU part numbers with filter on column "Included Quantity" and select values greater than "0".
+To validate that you're deploying under the right enrollment, check your included units information via the price sheet. Sign in as an Enterprise Administrator, select **Reports** on the left navigation, and then select the **Price Sheet** tab. Select the **Download** symbol in the top-right corner and find the corresponding Plan SKU part numbers with filter on column **Included Quantity** and select values greater than 0 (zero).
+Ensure that your OMS plan is showing on the price sheet under included units. If there are no included units for the OMS plan on your enrollment, your OMS plan may be under another enrollment. Contact Azure Enterprise Portal Support at [https://aka.ms/AzureEntSupport](https://aka.ms/AzureEntSupport).
+Ensure that your OMS plan is showing on the price sheet under included units. If there are no included units for OMS plan on your enrollment, your OMS plan may be under another enrollment. Contact Azure Enterprise Portal Support at [https://aka.ms/AzureEntSupport](https://aka.ms/AzureEntSupport).
-If the included units for the services on the price sheet don't match with what you have deployed, e.g. Operational Insights Premium Data Analyzed vs. Operational Insights Standard Data Analyzed, it means that you may have deployed services that are not covered by the plan, please contact Azure Enterprise Portal Support at [https://aka.ms/AzureEntSupport](https://aka.ms/AzureEntSupport) so we can assist you further.
+If the included units for the services on the price sheet don't match what you have deployed (for example, Operational Insights Premium Data Analyzed vs. Operational Insights Standard Data Analyzed), you may have deployed services that aren't covered by the plan. If so, contact Azure Enterprise Portal Support at [https://aka.ms/AzureEntSupport](https://aka.ms/AzureEntSupport) so we can assist you further.
**Provisioned Plan SKU services on wrong enrollment**
-If you have multiple enrollments and have deployed services under the wrong enrollment number, which doesn't have an OMS plan, please contact Azure Enterprise Portal Support at [https://aka.ms/AzureEntSupport](https://aka.ms/AzureEntSupport).
+If you have multiple enrollments and have deployed services under the wrong enrollment number, which doesn't have an OMS plan, contact Azure Enterprise Portal Support at [https://aka.ms/AzureEntSupport](https://aka.ms/AzureEntSupport).
## Next steps
data-factory Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-overview.md
Previously updated : 06/15/2022 Last updated : 06/23/2022
data-factory Connector Troubleshoot Snowflake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-snowflake.md
+
+ Title: Troubleshoot the Snowflake connector
+
+description: Learn how to troubleshoot issues with the Snowflake connector in Azure Data Factory and Azure Synapse Analytics.
++++ Last updated : 06/21/2022++++
+# Troubleshoot the Snowflake connector in Azure Data Factory and Azure Synapse
++
+This article provides suggestions to troubleshoot common problems with the Snowflake connector in Azure Data Factory and Azure Synapse.
+
+## Error message: IP % is not allowed to access Snowflake. Contact your local security administrator.
+
+- **Symptoms**: The copy activity fails with the following error:
+
+    `Job failed due to reason: net.snowflake.client.jdbc.SnowflakeSQLException: IP % is not allowed to access Snowflake. Contact your local security administrator.`
+
+- **Cause**: This is a connectivity issue, usually caused by firewall rules that block the integration runtime IP addresses from accessing your Snowflake account.
+
+- **Recommendation**:
+
+ - If you configure a [self-hosted integration runtime](create-self-hosted-integration-runtime.md) to connect to Snowflake, make sure to add your self-hosted integration runtime IPs to the allowed list in Snowflake.
+ - If you use an Azure Integration Runtime and the access is restricted to IPs approved in the firewall rules, you can add [Azure Integration Runtime IPs](azure-integration-runtime-ip-addresses.md) to the allowed list in Snowflake.
+ - If you use a managed private endpoint and a network policy is in place on your Snowflake account, ensure Managed VNet CIDR is allowed. For more steps, refer to [How To: Set up a managed private endpoint from Azure Data Factory or Synapse to Snowflake](https://community.snowflake.com/s/article/How-to-set-up-a-managed-private-endpoint-from-Azure-Data-Factory-or-Synapse-to-Snowflake).
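When reviewing which addresses an allowed list already covers, a quick IPv4 CIDR-membership check can help before opening a support case. A stdlib-only sketch; the addresses are illustrative, not actual integration runtime IPs:

```javascript
// Check whether an IPv4 address falls inside a CIDR range that is
// already allowed (for example, a Managed VNet CIDR in a Snowflake
// network policy). Illustrative addresses; IPv4 only.
function ipToInt(ip) {
  return ip.split(".").reduce((n, octet) => (n << 8) + Number(octet), 0) >>> 0;
}

function inCidr(ip, cidr) {
  const [base, bits] = cidr.split("/");
  // A /0 prefix matches everything; guard against <<32 wrapping to <<0.
  const mask = bits === "0" ? 0 : (~0 << (32 - Number(bits))) >>> 0;
  return (ipToInt(ip) & mask) === (ipToInt(base) & mask);
}

console.log(inCidr("10.1.2.3", "10.1.0.0/16"));    // true
console.log(inCidr("192.168.0.5", "10.1.0.0/16")); // false
```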
+
+## Error message: Failed to access remote file: access denied.
+
+- **Symptoms**: The copy activity fails with the following error:
+
+ `ERROR [42501] Failed to access remote file: access denied. Please check your credentials,Source =SnowflakeODBC_sb64.dll..`
+
+- **Cause**: The error is raised by the Snowflake COPY command and is caused by missing access permissions on the source or sink when the COPY command executes.
+
+- **Recommendation**: Check your source/sink to make sure that you have granted proper access permission to Snowflake.
+
+    - Direct copy: Make sure to grant access permission to Snowflake in the related source or sink.
+ - Staged copy: The staging Azure Blob storage linked service must use shared access signature authentication. When you generate the shared access signature, make sure to set the allowed permissions and IP addresses to Snowflake in the staging Azure Blob storage. To learn more about this, see this [article](https://docs.snowflake.com/en/user-guide/data-load-azure-config.html#option-2-generating-a-sas-token).
+
+## Next steps
+
+For more troubleshooting help, try these resources:
+
+- [Connector troubleshooting guide](connector-troubleshoot-guide.md)
+- [Data Factory blog](https://azure.microsoft.com/blog/tag/azure-data-factory/)
+- [Data Factory feature requests](/answers/topics/azure-data-factory.html)
+- [Azure videos](https://azure.microsoft.com/resources/videos/index/?sort=newest&services=data-factory)
+- [Microsoft Q&A page](/answers/topics/azure-data-factory.html)
+- [Stack Overflow forum for Data Factory](https://stackoverflow.com/questions/tagged/azure-data-factory)
+- [Twitter information about Data Factory](https://twitter.com/hashtag/DataFactory)
data-factory Managed Virtual Network Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/managed-virtual-network-private-endpoint.md
Interactive authoring capabilities are used for functionalities like test connec
:::image type="content" source="./media/managed-vnet/interactive-authoring.png" alt-text="Screenshot that shows interactive authoring.":::
-## Time to live
+## Time to live (preview)
### Copy activity
data-factory Quickstart Create Data Factory Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-bicep.md
+
+ Title: Create an Azure Data Factory using Bicep
+description: Create a sample Azure Data Factory pipeline using Bicep.
++
+tags: azure-resource-manager
++++ Last updated : 06/17/2022++
+# Quickstart: Create an Azure Data Factory using Bicep
++
+This quickstart describes how to use Bicep to create an Azure data factory. The pipeline you create in this data factory **copies** data from one folder to another folder in an Azure blob storage. For a tutorial on how to **transform** data using Azure Data Factory, see [Tutorial: Transform data using Spark](transform-data-using-spark.md).
++
+> [!NOTE]
+> This article does not provide a detailed introduction of the Data Factory service. For an introduction to the Azure Data Factory service, see [Introduction to Azure Data Factory](introduction.md).
+
+## Prerequisites
+
+### Azure subscription
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/data-factory-v2-blob-to-blob-copy/).
++
+There are several Azure resources defined in the Bicep file:
+
+- [Microsoft.Storage/storageAccounts](/azure/templates/microsoft.storage/storageaccounts): Defines a storage account.
+- [Microsoft.DataFactory/factories](/azure/templates/microsoft.datafactory/factories): Defines an Azure data factory.
+- [Microsoft.DataFactory/factories/linkedServices](/azure/templates/microsoft.datafactory/factories/linkedservices): Defines an Azure Data Factory linked service.
+- [Microsoft.DataFactory/factories/datasets](/azure/templates/microsoft.datafactory/factories/datasets): Defines an Azure Data Factory dataset.
+- [Microsoft.DataFactory/factories/pipelines](/azure/templates/microsoft.datafactory/factories/pipelines): Defines an Azure Data Factory pipeline.
+
+## Create a file
+
+Open a text editor such as **Notepad**, and create a file named **emp.txt** with the following content:
+
+```emp.txt
+John, Doe
+Jane, Doe
+```
+
+Save the file locally. You'll use it later in the quickstart.
+
+## Deploy the Bicep file
+
+1. Save the Bicep file from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/data-factory-v2-blob-to-blob-copy/) as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep
+ ```
+
+
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Review deployed resources
+
+Use the Azure CLI or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+You can also use the Azure portal to review the deployed resources.
+
+1. Sign in to the Azure portal.
+1. Navigate to your resource group.
+1. You'll see your resources listed. Select each resource to see an overview.
+
+## Upload a file
+
+Use the Azure portal to upload the **emp.txt** file.
+
+1. Navigate to your resource group and select the storage account created. Then, select the **Containers** tab on the left panel.
+
+ :::image type="content" source="media/quickstart-create-data-factory-bicep/data-factory-containers-bicep.png" alt-text="Containers tab":::
+
+2. On the **Containers** page, select the blob container created. The name is in the format - blob\<uniqueid\>.
+
+ :::image type="content" source="media/quickstart-create-data-factory-bicep/data-factory-bicep-blob-container.png" alt-text="Blob container":::
+
+3. Select **Upload**, and then select the **Files** box icon in the right pane. Navigate to and select the **emp.txt** file that you created earlier.
+
+4. Expand the **Advanced** heading.
+
+5. In the **Upload to folder** box, enter *input*.
+
+6. Select the **Upload** button. You should see the **emp.txt** file and the status of the upload in the list.
+
+7. Select the **Close** icon (an **X**) to close the **Upload blob** page.
+
+ :::image type="content" source="media/quickstart-create-data-factory-bicep/data-factory-bicep-upload-blob-file.png" alt-text="Upload file to input folder":::
+
+Keep the container page open because you can use it to verify the output at the end of this quickstart.
+
+## Start trigger
+
+1. Navigate to the resource group page, and select the data factory you created.
+
+2. Select **Open** on the **Open Azure Data Factory Studio** tile.
+
+ :::image type="content" source="media/quickstart-create-data-factory-bicep/data-factory-open-tile-bicep.png" alt-text="Author & Monitor":::
+
+3. Select the **Author** tab :::image type="icon" source="media/quickstart-create-data-factory-bicep/data-factory-author-bicep.png" border="false":::.
+
+4. Select the pipeline created: **ArmtemplateSampleCopyPipeline**.
+
+ :::image type="content" source="media/quickstart-create-data-factory-bicep/data-factory-bicep-pipelines.png" alt-text="Bicep pipeline":::
+
+5. Select **Add Trigger** > **Trigger Now**.
+
+ :::image type="content" source="media/quickstart-create-data-factory-bicep/data-factory-trigger-now-bicep.png" alt-text="Trigger":::
+
+6. In the right pane under **Pipeline run**, select **OK**.
+
+## Monitor the pipeline
+
+1. Select the **Monitor** tab. :::image type="icon" source ="media/quickstart-create-data-factory-bicep/data-factory-monitor-bicep.png" border="false":::
+
+2. You see the activity runs associated with the pipeline run. In this quickstart, the pipeline only has one activity of type **Copy**. You should see a run for that activity.
+
+ :::image type="content" source="media/quickstart-create-data-factory-bicep/data-factory-bicep-successful-run.png" alt-text="Successful run":::
+
+## Verify the output file
+
+The pipeline automatically creates an output folder in the blob container. It then copies the **emp.txt** file from the input folder to the output folder.
+
+1. On the **Containers** page in the Azure portal, select **Refresh** to see the output folder.
+
+2. Select **output** in the folder list.
+
+3. Confirm that the **emp.txt** is copied to the output folder.
+
+ :::image type="content" source="media/quickstart-create-data-factory-bicep/data-factory-bicep-output.png" alt-text="Output":::
+
+## Clean up resources
+
+When no longer needed, use the Azure CLI or Azure PowerShell to delete the resource group and all of its resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+You can also use the Azure portal to delete the resource group.
+
+1. In the Azure portal, navigate to your resource group.
+1. Select **Delete resource group**.
+1. A confirmation pane appears. Enter the resource group name and select **Delete**.
+
+## Next steps
+
+In this quickstart, you created an Azure Data Factory using Bicep and validated the deployment. To learn more about Azure Data Factory and Bicep, continue on to the articles below.
+
+- [Azure Data Factory documentation](index.yml)
+- Learn more about [Bicep](../azure-resource-manager/bicep/overview.md)
data-lake-analytics Data Lake Analytics Manage Use Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-manage-use-nodejs.md
description: This article describes how to use the Azure SDK for Node.js to mana
Previously updated : 12/05/2016 Last updated : 06/27/2022 # Manage Azure Data Lake Analytics using Azure SDK for Node.js
This article describes how to manage Azure Data Lake Analytics accounts, data so
The following versions are supported:

* **Node.js version: 0.10.0 or higher**
* **REST API version for Account: 2015-10-01-preview**
-* **REST API version for Catalog: 2015-10-01-preview**
-* **REST API version for Job: 2016-03-20-preview**
## Features

* Account management: create, get, list, update, and delete.
-* Job management: submit, get, list, and cancel.
-* Catalog management: get and list.
## How to Install

```bash
-npm install azure-arm-datalake-analytics
+npm install @azure/arm-datalake-analytics
```

## Authenticate using Azure Active Directory

```javascript
- var msrestAzure = require('ms-rest-azure');
- //user authentication
- var credentials = new msRestAzure.UserTokenCredentials('your-client-id', 'your-domain', 'your-username', 'your-password', 'your-redirect-uri');
+ const { DefaultAzureCredential } = require("@azure/identity");
//service principal authentication
- var credentials = new msRestAzure.ApplicationTokenCredentials('your-client-id', 'your-domain', 'your-secret');
+ var credentials = new DefaultAzureCredential();
```

## Create the Data Lake Analytics client

```javascript
-var adlaManagement = require("azure-arm-datalake-analytics");
-var accountClient = new adlaManagement.DataLakeAnalyticsAccountClient(credentials, 'your-subscription-id');
-var jobClient = new adlaManagement.DataLakeAnalyticsJobClient(credentials, 'azuredatalakeanalytics.net');
-var catalogClient = new adlaManagement.DataLakeAnalyticsCatalogClient(credentials, 'azuredatalakeanalytics.net');
+const { DataLakeAnalyticsAccountManagementClient } = require("@azure/arm-datalake-analytics");
+var accountClient = new DataLakeAnalyticsAccountManagementClient(credentials, 'your-subscription-id');
```

## Create a Data Lake Analytics account
var accountToCreate = {
} };
-client.account.create(resourceGroupName, accountName, accountToCreate, function (err, result, request, response) {
- if (err) {
- console.log(err);
+client.accounts.beginCreateAndWait(resourceGroupName, accountName, accountToCreate).then((result)=>{
+ console.log('result is: ' + util.inspect(result, {depth: null}));
+}).catch((err)=>{
+ console.log(err);
/*err has reference to the actual request and response, so you can see what was sent and received on the wire. The structure of err looks like this: err: {
client.account.create(resourceGroupName, accountName, accountToCreate, function
response: reference to a stripped version of the response } */
- } else {
- console.log('result is: ' + util.inspect(result, {depth: null}));
- }
-});
-```
-
-## Get a list of jobs
-```javascript
-var util = require('util');
-var accountName = 'testadlaacct';
-jobClient.job.list(accountName, function (err, result, request, response) {
- if (err) {
- console.log(err);
- } else {
- console.log('result is: ' + util.inspect(result, {depth: null}));
- }
-});
+})
```
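Because the parameters object is the part most likely to vary between deployments, a small helper can keep it in one place. This is a sketch: the property names are assumptions based on the Data Lake Analytics ARM schema, so verify them against your SDK version before use.

```javascript
// Hypothetical helper: assembles the parameters object passed to
// accounts.beginCreateAndWait. Property names are assumptions based on
// the Data Lake Analytics ARM schema; verify against your SDK version.
function buildAccountParams(location, storeName) {
  return {
    location: location,
    defaultDataLakeStoreAccount: storeName,
    dataLakeStoreAccounts: [{ name: storeName }]
  };
}

var accountToCreate = buildAccountParams('eastus2', 'mydatalakestore');
```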
-## Get a list of databases in the Data Lake Analytics Catalog
-```javascript
-var util = require('util');
-var accountName = 'testadlaacct';
-catalogClient.catalog.listDatabases(accountName, function (err, result, request, response) {
- if (err) {
- console.log(err);
- } else {
- console.log('result is: ' + util.inspect(result, {depth: null}));
- }
-});
-```
## See also
-* [Microsoft Azure SDK for Node.js](https://github.com/azure/azure-sdk-for-node)
+* [Microsoft Azure SDK for Node.js](https://github.com/Azure/azure-sdk-for-js)
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
Microsoft Defender for Containers provides security alerts on the cluster level
| Alert (alert type) | Description | MITRE tactics<br>([Learn more](#intentions)) | Severity | |--|--|:-:|--|
-| **Attempt to create a new Linux namespace from a container detected (Preview)**<br>(K8S.NODE_NamespaceCreation) | Analysis of processes running within a container in Kubernetes cluster detected an attempt to create a new Linux namespace. While this behavior might be legitimate, it might indicate that an attacker tries to escape from the container to the node. Some CVE-2022-0185 exploitations use this technique. | PrivilegeEscalation | Medium |
-| **A file was downloaded and executed (Preview)**<br>(K8S.NODE_LinuxSuspiciousActivity) | Analysis of processes running within a container indicates that a file has been downloaded to the container, given execution privileges and then executed. | Execution | Medium |
-| **A history file has been cleared (Preview)**<br>(K8S.NODE_HistoryFileCleared) | Analysis of processes running within a container or directly on a Kubernetes node, has detected that the command history log file has been cleared. Attackers may do this to cover their tracks. The operation was performed by the specified user account. | DefenseEvasion | Medium |
+| **Attempt to create a new Linux namespace from a container detected**<br>(K8S.NODE_NamespaceCreation) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container in Kubernetes cluster detected an attempt to create a new Linux namespace. While this behavior might be legitimate, it might indicate that an attacker tries to escape from the container to the node. Some CVE-2022-0185 exploitations use this technique. | PrivilegeEscalation | Medium |
+| **A history file has been cleared**<br>(K8S.NODE_HistoryFileCleared) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected that the command history log file has been cleared. Attackers may do this to cover their tracks. The operation was performed by the specified user account. | DefenseEvasion | Medium |
| **Abnormal activity of managed identity associated with Kubernetes (Preview)**<br>(K8S_AbnormalMiAcitivty) | Analysis of Azure Resource Manager operations detected an abnormal behavior of a managed identity used by an AKS addon. The detected activity isn\'t consistent with the behavior of the associated addon. While this activity can be legitimate, such behavior might indicate that the identity was gained by an attacker, possibly from a compromised container in the Kubernetes cluster. | Lateral Movement | Medium | | **Abnormal Kubernetes service account operation detected**<br>(K8S_ServiceAccountRareOperation) | Kubernetes audit log analysis detected abnormal behavior by a service account in your Kubernetes cluster. The service account was used for an operation which isn't common for this service account. While this activity can be legitimate, such behavior might indicate that the service account is being used for malicious purposes. | Lateral Movement, Credential Access | Medium |
-| **An uncommon connection attempt detected (Preview)**<br>(K8S.NODE_SuspectConnection) | Analysis of processes running within a container or directly on a Kubernetes node, has detected an uncommon connection attempt utilizing a socks protocol. This is very rare in normal operations, but a known technique for attackers attempting to bypass network-layer detections. | Execution, Exfiltration, Exploitation | Medium |
-| **Anomalous pod deployment (Preview)**<br>(K8S_AnomalousPodDeployment) <sup>[2](#footnote2)</sup> | Kubernetes audit log analysis detected pod deployment which is anomalous based on previous pod deployment activity. This activity is considered an anomaly when taking into account how the different features seen in the deployment operation are in relations to one another. The features monitored include the container image registry used, the account performing the deployment, day of the week, how often this account performs pod deployments, user agent used in the operation, whether this is a namespace to which pod deployments often occur, and other features. Top contributing reasons for raising this alert as anomalous activity are detailed under the alert's extended properties. | Execution | Medium |
-| **Attempt to stop apt-daily-upgrade.timer service detected (Preview)**<br>(K8S.NODE_TimerServiceDisabled) | Analysis of processes running within a container or directly on a Kubernetes node, has detected an attempt to stop apt-daily-upgrade.timer service. Attackers have been observed stopping this service to download malicious files and grant execution privileges for their attacks. This activity can also happen if the service is updated through normal administrative actions. | DefenseEvasion | Informational |
+| **An uncommon connection attempt detected**<br>(K8S.NODE_SuspectConnection) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected an uncommon connection attempt utilizing a socks protocol. This is very rare in normal operations, but a known technique for attackers attempting to bypass network-layer detections. | Execution, Exfiltration, Exploitation | Medium |
+| **Anomalous pod deployment (Preview)**<br>(K8S_AnomalousPodDeployment) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected pod deployment which is anomalous based on previous pod deployment activity. This activity is considered an anomaly when taking into account how the different features seen in the deployment operation are in relation to one another. The features monitored include the container image registry used, the account performing the deployment, day of the week, how often this account performs pod deployments, user agent used in the operation, whether this is a namespace to which pod deployments often occur, and other features. Top contributing reasons for raising this alert as anomalous activity are detailed under the alert's extended properties. | Execution | Medium |
+| **Attempt to stop apt-daily-upgrade.timer service detected**<br>(K8S.NODE_TimerServiceDisabled) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected an attempt to stop apt-daily-upgrade.timer service. Attackers have been observed stopping this service to download malicious files and grant execution privileges for their attacks. This activity can also happen if the service is updated through normal administrative actions. | DefenseEvasion | Informational |
| **Behavior similar to common Linux bots detected (Preview)**<br>(K8S.NODE_CommonBot) | Analysis of processes running within a container or directly on a Kubernetes node, has detected the execution of a process normally associated with common Linux botnets. | Execution, Collection, Command And Control | Medium |
-| **Behavior similar to Fairware ransomware detected (Preview)**<br>(K8S.NODE_FairwareMalware) | Analysis of processes running within a container or directly on a Kubernetes node, has detected execution of rm -rf commands applied to suspicious locations. As rm -rf will recursively delete files, it is normally used on discrete folders. In this case, it is being used in a location that could remove a lot of data. Fairware ransomware is known to execute rm -rf commands in this folder. | Execution | Medium |
-| **Command within a container running with high privileges (Preview)**<br>(K8S.NODE_PrivilegedExecutionInContainer) | Machine logs indicate that a privileged command was run in a Docker container. A privileged command has extended privileges on the host machine. | PrivilegeEscalation | Low |
-| **Container running in privileged mode (Preview)**<br>(K8S.NODE_PrivilegedContainerArtifacts) | Analysis of processes running within a container or directly on a Kubernetes node, has detected the execution of a Docker command that is running a privileged container. The privileged container has full access to the hosting pod or host resource. If compromised, an attacker may use the privileged container to gain access to the hosting pod or host. | PrivilegeEscalation, Execution | Low |
+| **Behavior similar to Fairware ransomware detected**<br>(K8S.NODE_FairwareMalware) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container detected the execution of rm -rf commands applied to suspicious locations. As rm -rf will recursively delete files, it is normally used on discrete folders. In this case, it is being used in a location that could remove a lot of data. Fairware ransomware is known to execute rm -rf commands in this folder. | Execution | Medium |
+| **Command within a container running with high privileges**<br>(K8S.NODE_PrivilegedExecutionInContainer) <sup>[1](#footnote1)</sup> | Machine logs indicate that a privileged command was run in a Docker container. A privileged command has extended privileges on the host machine. | PrivilegeEscalation | Low |
+| **Container running in privileged mode**<br>(K8S.NODE_PrivilegedContainerArtifacts) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected the execution of a Docker command that is running a privileged container. The privileged container has full access to the hosting pod or host resource. If compromised, an attacker may use the privileged container to gain access to the hosting pod or host. | PrivilegeEscalation, Execution | Low |
| **Container with a sensitive volume mount detected**<br>(K8S_SensitiveMount) | Kubernetes audit log analysis detected a new container with a sensitive volume mount. The volume that was detected is a hostPath type which mounts a sensitive file or folder from the node to the container. If the container gets compromised, the attacker can use this mount for gaining access to the node. | Privilege Escalation | Medium |
-| **CoreDNS modification in Kubernetes detected**<br>(K8S_CoreDnsModification) <sup>[1](#footnote1)</sup> <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a modification of the CoreDNS configuration. The configuration of CoreDNS can be modified by overriding its configmap. While this activity can be legitimate, if attackers have permissions to modify the configmap, they can change the behavior of the cluster's DNS server and poison it. | Lateral Movement | Low |
-| **Creation of admission webhook configuration detected**<br>(K8S_AdmissionController) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a new admission webhook configuration. Kubernetes has two built-in generic admission controllers: MutatingAdmissionWebhook and ValidatingAdmissionWebhook. The behavior of these admission controllers is determined by an admission webhook that the user deploys to the cluster. The usage of such admission controllers can be legitimate, however attackers can use such webhooks for modifying the requests (in case of MutatingAdmissionWebhook) or inspecting the requests and gain sensitive information (in case of ValidatingAdmissionWebhook). | Credential Access, Persistence | Low |
-| **Detected file download from a known malicious source (Preview)**<br>(K8S.NODE_SuspectDownload) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a download of a file from a source frequently used to distribute malware. | PrivilegeEscalation, Execution, Exfiltration, Command And Control | Medium |
-| **Detected Persistence Attempt (Preview)**<br>(K8S.NODE_NewSingleUserModeStartupScript) | Analysis of processes running within a container or directly on a Kubernetes node, has detected the installation of a startup script for single-user mode. It is extremely rare that any legitimate process needs to execute in that mode so it may indicate an attacker has added a malicious process to every run-level to guarantee persistence. | Persistence | Medium |
-| **Detected suspicious file download (Preview)**<br>(K8S.NODE_SuspectDownloadArtifacts) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious download of a remote file. | Persistence | Low |
-| **Detected suspicious use of the nohup command (Preview)**<br>(K8S.NODE_SuspectNohup) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious use of the nohup command. Attackers have been seen using the command nohup to run hidden files from a temporary directory to allow their executables to run in the background. It is rare to see this command run on hidden files located in a temporary directory. | Persistence, DefenseEvasion | Medium |
-| **Detected suspicious use of the useradd command (Preview)**<br>(K8S.NODE_SuspectUserAddition) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious use of the useradd command. | Persistence | Medium |
-| **Digital currency mining container detected**<br>(K8S_MaliciousContainerImage) <sup>[2](#footnote2)</sup> | Kubernetes audit log analysis detected a container that has an image associated with a digital currency mining tool. | Execution | High |
-| **Digital currency mining related behavior detected (Preview)**<br>(K8S.NODE_DigitalCurrencyMining) | Analysis of processes running within a container or directly on a Kubernetes node, has detected an execution of a process or command normally associated with digital currency mining. | Execution | High |
-| **Docker build operation detected on a Kubernetes node (Preview)**<br>(K8S.NODE_ImageBuildOnNode) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a build operation of a container image on a Kubernetes node. While this behavior might be legitimate, attackers might build their malicious images locally to avoid detection. | DefenseEvasion | Low |
-| **Excessive role permissions assigned in Kubernetes cluster (Preview)**<br>(K8S_ServiceAcountPermissionAnomaly) <sup>[3](#footnote3)</sup> | Analysis of the Kubernetes audit logs detected an excessive permissions role assignment to your cluster. The listed permissions for the assigned roles are uncommon to the specific service account. This detection considers previous role assignments to the same service account across clusters monitored by Azure, volume per permission, and the impact of the specific permission. The anomaly detection model used for this alert takes into account how this permission is used across all clusters monitored by Microsoft Defender for Cloud. | Privilege Escalation | Low |
+| **CoreDNS modification in Kubernetes detected**<br>(K8S_CoreDnsModification) <sup>[2](#footnote2)</sup> <sup>[4](#footnote4)</sup> | Kubernetes audit log analysis detected a modification of the CoreDNS configuration. The configuration of CoreDNS can be modified by overriding its configmap. While this activity can be legitimate, if attackers have permissions to modify the configmap, they can change the behavior of the cluster's DNS server and poison it. | Lateral Movement | Low |
+| **Creation of admission webhook configuration detected**<br>(K8S_AdmissionController) <sup>[4](#footnote4)</sup> | Kubernetes audit log analysis detected a new admission webhook configuration. Kubernetes has two built-in generic admission controllers: MutatingAdmissionWebhook and ValidatingAdmissionWebhook. The behavior of these admission controllers is determined by an admission webhook that the user deploys to the cluster. The usage of such admission controllers can be legitimate, however attackers can use such webhooks for modifying the requests (in case of MutatingAdmissionWebhook) or inspecting the requests and gain sensitive information (in case of ValidatingAdmissionWebhook). | Credential Access, Persistence | Low |
+| **Detected file download from a known malicious source**<br>(K8S.NODE_SuspectDownload) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a download of a file from a source frequently used to distribute malware. | PrivilegeEscalation, Execution, Exfiltration, Command And Control | Medium |
+| **Detected suspicious file download**<br>(K8S.NODE_SuspectDownloadArtifacts) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious download of a remote file. | Persistence | Low |
+| **Detected suspicious use of the nohup command**<br>(K8S.NODE_SuspectNohup) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious use of the nohup command. Attackers have been seen using the command nohup to run hidden files from a temporary directory to allow their executables to run in the background. It is rare to see this command run on hidden files located in a temporary directory. | Persistence, DefenseEvasion | Medium |
+| **Detected suspicious use of the useradd command**<br>(K8S.NODE_SuspectUserAddition) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious use of the useradd command. | Persistence | Medium |
+| **Digital currency mining container detected**<br>(K8S_MaliciousContainerImage) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a container that has an image associated with a digital currency mining tool. | Execution | High |
+| **Digital currency mining related behavior detected**<br>(K8S.NODE_DigitalCurrencyMining) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected an execution of a process or command normally associated with digital currency mining. | Execution | High |
+| **Docker build operation detected on a Kubernetes node**<br>(K8S.NODE_ImageBuildOnNode) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a build operation of a container image on a Kubernetes node. While this behavior might be legitimate, attackers might build their malicious images locally to avoid detection. | DefenseEvasion | Low |
+| **Excessive role permissions assigned in Kubernetes cluster (Preview)**<br>(K8S_ServiceAcountPermissionAnomaly) <sup>[4](#footnote4)</sup> | Analysis of the Kubernetes audit logs detected an excessive permissions role assignment to your cluster. The listed permissions for the assigned roles are uncommon to the specific service account. This detection considers previous role assignments to the same service account across clusters monitored by Azure, volume per permission, and the impact of the specific permission. The anomaly detection model used for this alert takes into account how this permission is used across all clusters monitored by Microsoft Defender for Cloud. | Privilege Escalation | Low |
| **Executable found running from a suspicious location (Preview)**<br>(K8S.NODE_SuspectExecutablePath) | Analysis of processes running within a container or directly on a Kubernetes node, has detected an executable file that is running from a location associated with known suspicious files. This executable could either be legitimate activity, or an indication of a compromised system. | Execution | Medium |
-| **Execution of hidden file (Preview)**<br>(K8S.NODE_ExecuteHiddenFile) | Analysis of processes running within a container or directly on a Kubernetes node, has detected that a hidden file was executed by the specified user account. | Persistence, DefenseEvasion | Informational |
-| **Exposed Docker daemon on TCP socket (Preview)**<br>(K8S.NODE_ExposedDocker) | Analysis of processes running within a container or directly on a Kubernetes node, has detected that your Docker daemon (dockerd) exposes a TCP socket. By default, Docker configuration, does not use encryption or authentication when a TCP socket is enabled. This enables full access to the Docker daemon, by anyone with access to the relevant port. | Execution, Exploitation | Medium |
| **Exposed Kubeflow dashboard detected**<br>(K8S_ExposedKubeflow) | The Kubernetes audit log analysis detected exposure of the Istio Ingress by a load balancer in a cluster that runs Kubeflow. This action might expose the Kubeflow dashboard to the internet. If the dashboard is exposed to the internet, attackers can access it and run malicious containers or code on the cluster. Find more details in the following article: https://aka.ms/exposedkubeflow-blog | Initial Access | Medium |
| **Exposed Kubernetes dashboard detected**<br>(K8S_ExposedDashboard) | Kubernetes audit log analysis detected exposure of the Kubernetes Dashboard by a LoadBalancer service. An exposed dashboard allows unauthenticated access to cluster management and poses a security threat. | Initial Access | High |
| **Exposed Kubernetes service detected**<br>(K8S_ExposedService) | The Kubernetes audit log analysis detected exposure of a service by a load balancer. This service is related to a sensitive application that allows high impact operations in the cluster such as running processes on the node or creating new containers. In some cases, this service doesn't require authentication. If the service doesn't require authentication, exposing it to the internet poses a security risk. | Initial Access | Medium |
| **Exposed Redis service in AKS detected**<br>(K8S_ExposedRedis) | The Kubernetes audit log analysis detected exposure of a Redis service by a load balancer. If the service doesn't require authentication, exposing it to the internet poses a security risk. | Initial Access | Low |
-| **Indicators associated with DDOS toolkit detected (Preview)**<br>(K8S.NODE_KnownLinuxDDoSToolkit) <sup>[2](#footnote2)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected file names that are part of a toolkit associated with malware capable of launching DDoS attacks, opening ports and services, and taking full control over the infected system. This could also possibly be legitimate activity. | Persistence, LateralMovement, Execution, Exploitation | Medium |
-| **K8S API requests from proxy IP address detected**<br>(K8S_TI_Proxy) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected API requests to your cluster from an IP address that is associated with proxy services, such as TOR. While this behavior can be legitimate, it's often seen in malicious activities, when attackers try to hide their source IP. | Execution | Low |
-| **Kubernetes events deleted**<br>(K8S_DeleteEvents) <sup>[1](#footnote1)</sup> <sup>[3](#footnote3)</sup> | Defender for Cloud detected that some Kubernetes events have been deleted. Kubernetes events are objects in Kubernetes which contain information about changes in the cluster. Attackers might delete those events for hiding their operations in the cluster. | Defense Evasion | Low |
+| **Indicators associated with DDOS toolkit detected**<br>(K8S.NODE_KnownLinuxDDoSToolkit) <sup>[1](#footnote1)</sup> <sup>[3](#footnote3)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected file names that are part of a toolkit associated with malware capable of launching DDoS attacks, opening ports and services, and taking full control over the infected system. This could also possibly be legitimate activity. | Persistence, LateralMovement, Execution, Exploitation | Medium |
+| **K8S API requests from proxy IP address detected**<br>(K8S_TI_Proxy) <sup>[4](#footnote4)</sup> | Kubernetes audit log analysis detected API requests to your cluster from an IP address that is associated with proxy services, such as TOR. While this behavior can be legitimate, it's often seen in malicious activities, when attackers try to hide their source IP. | Execution | Low |
+| **Kubernetes events deleted**<br>(K8S_DeleteEvents) <sup>[2](#footnote2)</sup> <sup>[4](#footnote4)</sup> | Defender for Cloud detected that some Kubernetes events have been deleted. Kubernetes events are objects in Kubernetes which contain information about changes in the cluster. Attackers might delete those events for hiding their operations in the cluster. | Defense Evasion | Low |
| **Kubernetes penetration testing tool detected**<br>(K8S_PenTestToolsKubeHunter) | Kubernetes audit log analysis detected usage of Kubernetes penetration testing tool in the AKS cluster. While this behavior can be legitimate, attackers might use such public tools for malicious purposes. | Execution | Low |
-| **Local host reconnaissance detected (Preview)**<br>(K8S.NODE_LinuxReconnaissance) | Analysis of processes running within a container or directly on a Kubernetes node, has detected the execution of a command normally associated with common Linux bot reconnaissance. | Discovery | Medium |
-| **Manipulation of host firewall detected (Preview)**<br>(K8S.NODE_FirewallDisabled) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a possible manipulation of the on-host firewall. Attackers will often disable this to exfiltrate data. | DefenseEvasion, Exfiltration | Medium |
-| **Microsoft Defender for Cloud test alert (not a threat). (Preview)**<br>(K8S.NODE_EICAR) | This is a test alert generated by Microsoft Defender for Cloud. No further action is needed. | Execution | High |
-| **MITRE Caldera agent detected (Preview)**<br>(K8S.NODE_MitreCalderaTools) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious process. This is often associated with the MITRE 54ndc47 agent which could be used maliciously to attack other machines. | Persistence, PrivilegeEscalation, DefenseEvasion, CredentialAccess, Discovery, LateralMovement, Execution, Collection, Exfiltration, Command And Control, Probing, Exploitation | Medium |
-| **New container in the kube-system namespace detected**<br>(K8S_KubeSystemContainer) <sup>[2](#footnote2)</sup> | Kubernetes audit log analysis detected a new container in the kube-system namespace that isn't among the containers that normally run in this namespace. The kube-system namespace should not contain user resources. Attackers can use this namespace for hiding malicious components. | Persistence | Low |
-| **New high privileges role detected**<br>(K8S_HighPrivilegesRole) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a new role with high privileges. A binding to a role with high privileges gives the user\group high privileges in the cluster. Unnecessary privileges might cause privilege escalation in the cluster. | Persistence | Low |
-| **Possible attack tool detected (Preview)**<br>(K8S.NODE_KnownLinuxAttackTool) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious tool invocation. This tool is often associated with malicious users attacking others. | Execution, Collection, Command And Control, Probing | Medium |
-| **Possible backdoor detected (Preview)**<br>(K8S.NODE_LinuxBackdoorArtifact) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious file being downloaded and run. This activity has previously been associated with installation of a backdoor. | Persistence, DefenseEvasion, Execution, Exploitation | Medium |
-| **Possible command line exploitation attempt (Preview)**<br>(K8S.NODE_ExploitAttempt) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a possible exploitation attempt against a known vulnerability. | Exploitation | Medium |
-| **Possible credential access tool detected (Preview)**<br>(K8S.NODE_KnownLinuxCredentialAccessTool) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a possible known credential access tool was running on the container, as identified by the specified process and commandline history item. This tool is often associated with attacker attempts to access credentials. | CredentialAccess | Medium |
-| **Possible Cryptocoinminer download detected (Preview)**<br>(K8S.NODE_CryptoCoinMinerDownload) | Analysis of processes running within a container or directly on a Kubernetes node, has detected download of a file normally associated with digital currency mining. | DefenseEvasion, Command And Control, Exploitation | Medium |
-| **Possible data exfiltration detected (Preview)**<br>(K8S.NODE_DataEgressArtifacts) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a possible data egress condition. Attackers will often egress data from machines they have compromised. | Collection, Exfiltration | Medium |
-| **Possible Log Tampering Activity Detected (Preview)**<br>(K8S.NODE_SystemLogRemoval) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a possible removal of files that tracks user's activity during the course of its operation. Attackers often try to evade detection and leave no trace of malicious activities by deleting such log files. | DefenseEvasion | Medium |
-| **Possible password change using crypt-method detected (Preview)**<br>(K8S.NODE_SuspectPasswordChange) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a password change using the crypt method. Attackers can make this change to continue access and gain persistence after compromise. | CredentialAccess | Medium |
-| **Potential overriding of common files (Preview)**<br>(K8S.NODE_OverridingCommonFiles) | Analysis of processes running within a container or directly on a Kubernetes node, has detected an override for common files as a way to obfuscate actions or for persistence. | Persistence | Medium |
-| **Potential port forwarding to external IP address (Preview)**<br>(K8S.NODE_SuspectPortForwarding) | Analysis of processes running within a container or directly on a Kubernetes node, has detected an initiation of port forwarding to an external IP address. | Exfiltration, Command And Control | Medium |
-| **Potential reverse shell detected (Preview)**<br>(K8S.NODE_ReverseShell) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a potential reverse shell. These are used to get a compromised machine to call back into a machine an attacker owns. | Exfiltration, Exploitation | Medium |
+| **Manipulation of host firewall detected**<br>(K8S.NODE_FirewallDisabled) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a possible manipulation of the on-host firewall. Attackers will often disable this to exfiltrate data. | DefenseEvasion, Exfiltration | Medium |
+| **Microsoft Defender for Cloud test alert (not a threat).**<br>(K8S.NODE_EICAR) <sup>[1](#footnote1)</sup> | This is a test alert generated by Microsoft Defender for Cloud. No further action is needed. | Execution | High |
+| **New container in the kube-system namespace detected**<br>(K8S_KubeSystemContainer) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a new container in the kube-system namespace that isn't among the containers that normally run in this namespace. The kube-system namespace should not contain user resources. Attackers can use this namespace for hiding malicious components. | Persistence | Low |
+| **New high privileges role detected**<br>(K8S_HighPrivilegesRole) <sup>[4](#footnote4)</sup> | Kubernetes audit log analysis detected a new role with high privileges. A binding to a role with high privileges gives the user\group high privileges in the cluster. Unnecessary privileges might cause privilege escalation in the cluster. | Persistence | Low |
+| **Possible attack tool detected**<br>(K8S.NODE_KnownLinuxAttackTool) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious tool invocation. This tool is often associated with malicious users attacking others. | Execution, Collection, Command And Control, Probing | Medium |
+| **Possible backdoor detected**<br>(K8S.NODE_LinuxBackdoorArtifact) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious file being downloaded and run. This activity has previously been associated with installation of a backdoor. | Persistence, DefenseEvasion, Execution, Exploitation | Medium |
+| **Possible command line exploitation attempt**<br>(K8S.NODE_ExploitAttempt) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a possible exploitation attempt against a known vulnerability. | Exploitation | Medium |
+| **Possible credential access tool detected**<br>(K8S.NODE_KnownLinuxCredentialAccessTool) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected that a possible known credential access tool was running on the container, as identified by the specified process and commandline history item. This tool is often associated with attacker attempts to access credentials. | CredentialAccess | Medium |
+| **Possible Cryptocoinminer download detected**<br>(K8S.NODE_CryptoCoinMinerDownload) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected download of a file normally associated with digital currency mining. | DefenseEvasion, Command And Control, Exploitation | Medium |
+| **Possible data exfiltration detected**<br>(K8S.NODE_DataEgressArtifacts) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a possible data egress condition. Attackers will often egress data from machines they have compromised. | Collection, Exfiltration | Medium |
+| **Possible Log Tampering Activity Detected**<br>(K8S.NODE_SystemLogRemoval) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a possible removal of files that track user activity. Attackers often try to evade detection and leave no trace of malicious activities by deleting such log files. | DefenseEvasion | Medium |
+| **Possible password change using crypt-method detected**<br>(K8S.NODE_SuspectPasswordChange) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a password change using the crypt method. Attackers can make this change to continue access and gain persistence after compromise. | CredentialAccess | Medium |
+| **Potential port forwarding to external IP address**<br>(K8S.NODE_SuspectPortForwarding) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected an initiation of port forwarding to an external IP address. | Exfiltration, Command And Control | Medium |
+| **Potential reverse shell detected**<br>(K8S.NODE_ReverseShell) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a potential reverse shell. These are used to get a compromised machine to call back into a machine an attacker owns. | Exfiltration, Exploitation | Medium |
| **Privileged container detected**<br>(K8S_PrivilegedContainer) | Kubernetes audit log analysis detected a new privileged container. A privileged container has access to the node's resources and breaks the isolation between containers. If compromised, an attacker can use the privileged container to gain access to the node. | Privilege Escalation | Low |
-| **Process associated with digital currency mining detected (Preview)**<br>(K8S.NODE_CryptoCoinMinerArtifacts) | Analysis of processes running within a container or directly on a Kubernetes node, has detected execution of a process normally associated with digital currency mining. | Execution, Exploitation | Medium |
-| **Process seen accessing the SSH authorized keys file in an unusual way (Preview)**<br>(K8S.NODE_SshKeyAccess) | An SSH authorized_keys file was accessed in a method similar to known malware campaigns. This access could signify that an actor is attempting to gain persistent access to a machine. | Unknown | Low |
+| **Process associated with digital currency mining detected**<br>(K8S.NODE_CryptoCoinMinerArtifacts) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container detected the execution of a process normally associated with digital currency mining. | Execution, Exploitation | Medium |
+| **Process seen accessing the SSH authorized keys file in an unusual way**<br>(K8S.NODE_SshKeyAccess) <sup>[1](#footnote1)</sup> | An SSH authorized_keys file was accessed in a method similar to known malware campaigns. This access could signify that an actor is attempting to gain persistent access to a machine. | Unknown | Low |
| **Role binding to the cluster-admin role detected**<br>(K8S_ClusterAdminBinding) | Kubernetes audit log analysis detected a new binding to the cluster-admin role which gives administrator privileges. Unnecessary administrator privileges might cause privilege escalation in the cluster. | Persistence | Low |
-| **Screenshot taken on host (Preview)**<br>(K8S.NODE_KnownLinuxScreenshotTool) | Analysis of processes running within a container or directly on a Kubernetes node, has detected the use of a screen capture tool. This isn't a common usage scenario for containers and could be part of attackers attempt to access private data. | Collection | Low |
-| **Script extension mismatch detected (Preview)**<br>(K8S.NODE_MismatchedScriptFeatures) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a mismatch between the script interpreter and the extension of the script file provided as input. This has frequently been associated with attacker script executions. | DefenseEvasion | Medium |
-| **Security-related process termination detected (Preview)**<br>(K8S.NODE_SuspectProcessTermination) | Analysis of processes running within a container or directly on a Kubernetes node, has detected an attempt to terminate processes related to security monitoring on the container. Attackers will often try to terminate such processes using predefined scripts post-compromise. | Persistence | Low |
-| **SSH server is running inside a container (Preview)**<br>(K8S.NODE_ContainerSSH) | Analysis of processes running within a container detected an SSH server running inside the container. | Execution | Medium |
-| **Suspicious compilation detected (Preview)**<br>(K8S.NODE_SuspectCompilation) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious compilation. Attackers will often compile exploits to escalate privileges. | PrivilegeEscalation, Exploitation | Medium |
-| **Suspicious file timestamp modification (Preview)**<br>(K8S.NODE_TimestampTampering) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious timestamp modification. Attackers will often copy timestamps from existing legitimate files to new tools to avoid detection of these newly dropped files. | Persistence, DefenseEvasion | Low |
-| **Suspicious request to Kubernetes API (Preview)**<br>(K8S.NODE_KubernetesAPI) | Analysis of processes running within a container indicates that a suspicious request was made to the Kubernetes API. The request was sent from a container in the cluster. Although this behavior can be intentional, it might indicate that a compromised container is running in the cluster. | LateralMovement | Medium |
-| **Suspicious request to the Kubernetes Dashboard (Preview)**<br>(K8S.NODE_KubernetesDashboard) | Analysis of processes running within a container indicates that a suspicious request was made to the Kubernetes Dashboard. The request was sent from a container in the cluster. Although this behavior can be intentional, it might indicate that a compromised container is running in the cluster. | Execution | Medium |
-| **Potential crypto coin miner started (Preview)**<br>(K8S.NODE_CryptoCoinMinerExecution) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a process being started in a way normally associated with digital currency mining. | Execution | Medium |
-| **Suspicious password access (Preview)**<br>(K8S.NODE_SuspectPasswordFileAccess) | Analysis of processes running within a container or directly on a Kubernetes node, has detected suspicious attempt to access encrypted user passwords. | Persistence | Informational |
-| **Suspicious use of DNS over HTTPS (Preview)**<br>(K8S.NODE_SuspiciousDNSOverHttps) | Analysis of processes running within a container or directly on a Kubernetes node, has detected the use of a DNS call over HTTPS in an uncommon fashion. This technique is used by attackers to hide calls out to suspect or malicious sites. | DefenseEvasion, Exfiltration | Medium |
-| **A possible connection to malicious location has been detected. (Preview)**<br>(K8S.NODE_ThreatIntelCommandLineSuspectDomain) | Analysis of processes running within a container or directly on a Kubernetes node, has detected a connection to a location that has been reported to be malicious or unusual. This is an indicator that a compromise may have occurred. | InitialAccess | Medium |
--
-<sup><a name="footnote1"></a>1</sup>: **Limitations on GKE clusters**: GKE uses a Kuberenetes audit policy that doesn't support all alert types. As a result, this security alert, which is based on Kubernetes audit events, are not supported for GKE clusters.
-
-<sup><a name="footnote2"></a>2</sup>: This alert is supported on Windows nodes/containers.
-
-<sup><a name="footnote3"></a>3</sup>: Control plane alert (OS agnostic).
+| **Security-related process termination detected**<br>(K8S.NODE_SuspectProcessTermination) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected an attempt to terminate processes related to security monitoring on the container. Attackers will often try to terminate such processes using predefined scripts post-compromise. | Persistence | Low |
+| **SSH server is running inside a container**<br>(K8S.NODE_ContainerSSH) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container detected an SSH server running inside the container. | Execution | Medium |
+| **Suspicious file timestamp modification**<br>(K8S.NODE_TimestampTampering) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious timestamp modification. Attackers will often copy timestamps from existing legitimate files to new tools to avoid detection of these newly dropped files. | Persistence, DefenseEvasion | Low |
+| **Suspicious request to Kubernetes API**<br>(K8S.NODE_KubernetesAPI) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container indicates that a suspicious request was made to the Kubernetes API. The request was sent from a container in the cluster. Although this behavior can be intentional, it might indicate that a compromised container is running in the cluster. | LateralMovement | Medium |
+| **Suspicious request to the Kubernetes Dashboard**<br>(K8S.NODE_KubernetesDashboard) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container indicates that a suspicious request was made to the Kubernetes Dashboard. The request was sent from a container in the cluster. Although this behavior can be intentional, it might indicate that a compromised container is running in the cluster. | LateralMovement | Medium |
+| **Potential crypto coin miner started**<br>(K8S.NODE_CryptoCoinMinerExecution) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a process being started in a way normally associated with digital currency mining. | Execution | Medium |
+| **Suspicious password access**<br>(K8S.NODE_SuspectPasswordFileAccess) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious attempt to access encrypted user passwords. | Persistence | Informational |
+| **Suspicious use of DNS over HTTPS**<br>(K8S.NODE_SuspiciousDNSOverHttps) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected the use of a DNS call over HTTPS in an uncommon fashion. This technique is used by attackers to hide calls out to suspect or malicious sites. | DefenseEvasion, Exfiltration | Medium |
+| **A possible connection to malicious location has been detected.**<br>(K8S.NODE_ThreatIntelCommandLineSuspectDomain) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a connection to a location that has been reported to be malicious or unusual. This is an indicator that a compromise may have occurred. | InitialAccess | Medium |
+| **Possible malicious web shell detected.**<br>(K8S.NODE_Webshell) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container detected a possible web shell. Attackers will often upload a web shell to a compute resource they have compromised to gain persistence or for further exploitation. | Persistence, Exploitation | Medium |
+| **Burst of multiple reconnaissance commands could indicate initial activity after compromise**<br>(K8S.NODE_ReconnaissanceArtifactsBurst) <sup>[1](#footnote1)</sup> | Analysis of host/device data detected execution of multiple reconnaissance commands related to gathering system or host details performed by attackers after initial compromise. | Discovery, Collection | Low |
+| **Suspicious Download Then Run Activity**<br>(K8S.NODE_DownloadAndRunCombo) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a file being downloaded then run in the same command. While this isn't always malicious, this is a very common technique attackers use to get malicious files onto victim machines. | Execution, CommandAndControl, Exploitation | Medium |
+| **Digital currency mining activity**<br>(K8S.NODE_CurrencyMining) <sup>[1](#footnote1)</sup> | Analysis of DNS transactions detected digital currency mining activity. Such activity, while possibly legitimate user behavior, is frequently performed by attackers following compromise of resources. Typical related attacker activity is likely to include the download and execution of common mining tools. | Exfiltration | Low |
+| **Access to kubelet kubeconfig file detected**<br>(K8S.NODE_KubeConfigAccess) <sup>[1](#footnote1)</sup> | Analysis of processes running on a Kubernetes cluster node detected access to kubeconfig file on the host. The kubeconfig file, normally used by the Kubelet process, contains credentials to the Kubernetes cluster API server. Access to this file is often associated with attackers attempting to access those credentials, or with security scanning tools which check if the file is accessible. | CredentialAccess | Medium |
+| **Access to cloud metadata service detected**<br>(K8S.NODE_ImdsCall) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container detected access to the cloud metadata service for acquiring an identity token. The container doesn't normally perform such an operation. While this behavior might be legitimate, attackers might use this technique to access cloud resources after gaining initial access to a running container. | CredentialAccess | Medium |
+
+<sup><a name="footnote1"></a>1</sup>: **Preview for non-AKS clusters**: This alert is generally available for AKS clusters, but it is in preview for other environments, such as Azure Arc, EKS, and GKE.
+
+<sup><a name="footnote2"></a>2</sup>: **Limitations on GKE clusters**: GKE uses a Kubernetes audit policy that doesn't support all alert types. As a result, this security alert, which is based on Kubernetes audit events, isn't supported for GKE clusters.
+
+<sup><a name="footnote3"></a>3</sup>: This alert is supported on Windows nodes/containers.
+
+<sup><a name="footnote4"></a>4</sup>: Control plane alert (OS agnostic).
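The alert rows added above share a uniform structure (display name, alert type ID, MITRE tactics, severity), which makes them easy to post-process when triaging. The sketch below is illustrative only: the data is transcribed by hand from a few rows of the table above, not fetched from any API.

```python
# Illustrative subset of the Kubernetes node/container alerts listed above.
# Each entry: (alert type ID, MITRE tactics, severity) — transcribed from the table.
ALERTS = [
    ("K8S.NODE_ContainerSSH", ["Execution"], "Medium"),
    ("K8S.NODE_TimestampTampering", ["Persistence", "DefenseEvasion"], "Low"),
    ("K8S.NODE_CryptoCoinMinerExecution", ["Execution"], "Medium"),
    ("K8S.NODE_SuspectPasswordFileAccess", ["Persistence"], "Informational"),
    ("K8S.NODE_KubeConfigAccess", ["CredentialAccess"], "Medium"),
    ("K8S.NODE_ImdsCall", ["CredentialAccess"], "Medium"),
]

def alerts_by_tactic(tactic: str) -> list[str]:
    """Return the alert type IDs whose MITRE tactics include `tactic`."""
    return [alert_id for alert_id, tactics, _ in ALERTS if tactic in tactics]

print(alerts_by_tactic("CredentialAccess"))
# → ['K8S.NODE_KubeConfigAccess', 'K8S.NODE_ImdsCall']
```

For example, grouping alerts this way lets a security team review all credential-access detections (kubeconfig access, cloud metadata service calls) together.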
## <a name="alerts-sql-db-and-warehouse"></a>Alerts for SQL Database and Azure Synapse Analytics
defender-for-cloud Defender For Servers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-servers-introduction.md
Title: Microsoft Defender for Servers - the benefits and features description: Learn about the benefits and features of Microsoft Defender for Servers. Previously updated : 06/15/2022 Last updated : 06/26/2022 # Overview of Microsoft Defender for Servers
To protect machines in hybrid and multicloud environments, Defender for Cloud us
> [!TIP] > For details of which Defender for Servers features are relevant for machines running on other cloud environments, see [Supported features for virtual machines and servers](supported-machines-endpoint-solutions-clouds-servers.md?tabs=features-windows#supported-features-for-virtual-machines-and-servers).
-You can learn more from the product manager about Defender for Servers, by watching [Microsoft Defender for Servers](episode-five.md). You can also watch [Enhanced workload protection features in Defender for Servers](episode-twelve.md).
+You can learn more from the product manager about Defender for Servers, by watching [Microsoft Defender for Servers](episode-five.md). You can also watch [Enhanced workload protection features in Defender for Servers](episode-twelve.md), or learn how to [deploy in Defender for Servers in AWS and GCP](episode-fourteen.md).
## What are the Microsoft Defender for server plans?
defender-for-cloud Episode Fourteen https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-fourteen.md
+
+ Title: Defender for Servers deployment in AWS and GCP
+description: Learn about the capabilities available for Defender for Servers deployment within AWS and GCP.
+ Last updated : 06/26/2022
+# Defender for Servers deployment in AWS and GCP
+
+**Episode description**: In this episode of Defender for Cloud in the Field, Ortal Parpara joins Yuri Diogenes to talk about the options to deploy Defender for Servers in AWS and GCP. Ortal talks about the new capability that allows you to select a different Defender for Servers plan per connector, demonstrates how to customize the deployment, and explains how this feature helps to deploy Azure Arc.
+
+<br>
+<br>
+<iframe src="https://aka.ms/docs/player?id=2426d341-bdb6-4795-bc08-179cfe7b99ba" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+
+- [00:00](/shows/mdc-in-the-field/defenders-for-servers-deploy-aws-gcp#time=00m00s) - Introduction
+
+- [01:30](/shows/mdc-in-the-field/defenders-for-servers-deploy-aws-gcp#time=01m30s) - Selecting the appropriate plan for AWS and GCP
+
+- [03:05](/shows/mdc-in-the-field/defenders-for-servers-deploy-aws-gcp#time=03m05s) - Is any action necessary to apply this change?
+
+- [03:23](/shows/mdc-in-the-field/defenders-for-servers-deploy-aws-gcp#time=03m23s) - Supported scenarios
+
+- [03:40](/shows/mdc-in-the-field/defenders-for-servers-deploy-aws-gcp#time=03m40s) - What changes should you expect to see in your environment?
+
+- [05:49](/shows/mdc-in-the-field/defenders-for-servers-deploy-aws-gcp#time=05m49s) - Demonstration
+
+## Recommended resources
+
+[Enhanced workload protection features in Defender for Servers](episode-twelve.md).
+
+- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqa0ZoTml2Qm9kZ2pjRzNMUXFqVUwyNl80YVNtd3xBQ3Jtc0trVm9QM2Z0NlpOeC1KSUE2UEd1cVJ5aHQ0MTN6WjJEYmNlOG9rWC1KZ1ZqaTNmcHdOOHMtWXRLSGhUTVBhQlhhYzlUc2xmTHZtaUpkd1c4LUQzLWt1YmRTbkVQVE5EcTJIM0Foc042SGdQZU5acVRJbw&q=https%3A%2F%2Faka.ms%2FSubscribeMicrosoftSecurity)
+
+- Follow us on social media:
+ [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
+ [Twitter](https://twitter.com/msftsecurity)
+
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+
+- For more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
defender-for-cloud Episode Thirteen https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-thirteen.md
Last updated 06/16/2022
## Next steps > [!div class="nextstepaction"]
-> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
+> [Defender for Servers deployment in AWS and GCP](episode-fourteen.md)
defender-for-cloud Integration Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/integration-defender-for-endpoint.md
Last updated 06/19/2022
# Protect your endpoints with Defender for Cloud's integrated EDR solution: Microsoft Defender for Endpoint
-Microsoft Defender for Endpoint is a holistic, cloud-delivered, endpoint security solution. Its main features are:
+With Microsoft Defender for Servers, you can deploy [Microsoft Defender for Endpoint Plan 2](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint) to your server resources. Microsoft Defender for Endpoint is a holistic, cloud-delivered, endpoint security solution. Its main features are:
- Risk-based vulnerability management and assessment - Attack surface reduction
Microsoft Defender for Endpoint is a holistic, cloud-delivered, endpoint securit
## Benefits of integrating Microsoft Defender for Endpoint with Defender for Cloud
-Microsoft Defender for Endpoint protects your Windows and Linux machines whether they're hosted in Azure, hybrid clouds (on-premises), or AWS. Protections include:
+[Microsoft Defender for Endpoint Plan 2](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint) protects your Windows and Linux machines whether they're hosted in Azure, hybrid clouds (on-premises), or AWS. Protections include:
- **Advanced post-breach detection sensors**. Defender for Endpoint's sensors collect a vast array of behavioral signals from your machines.
Confirm that your machine meets the necessary requirements for Defender for Endp
> [!IMPORTANT] > Defender for Cloud's integration with Microsoft Defender for Endpoint is enabled by default. So when you enable enhanced security features, you give consent for Microsoft Defender for Servers to access the Microsoft Defender for Endpoint data related to vulnerabilities, installed software, and alerts for your endpoints.
-1. For Windows servers, make sure that your servers meet the requirements for [onboarding Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/configure-server-endpoints?view=o365-worldwide#windows-server-2012-r2-and-windows-server-2016)
+1. For Windows servers, make sure that your servers meet the requirements for [onboarding Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/configure-server-endpoints#windows-server-2012-r2-and-windows-server-2016)
1. If you've moved your subscription between Azure tenants, some manual preparatory steps are also required. For full details, [contact Microsoft support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
Confirm that your machine meets the necessary requirements for Defender for Endp
### [**Windows**](#tab/windows)
-[The new MDE unified solution](/microsoft-365/security/defender-endpoint/configure-server-endpoints?view=o365-worldwide#new-windows-server-2012-r2-and-2016-functionality-in-the-modern-unified-solution) doesn't use or require installation of the Log Analytics agent. The unified solution is automatically deployed for all Windows servers connected through Azure Arc and multicloud servers connected through the multicloud connectors, except for Windows 2012 R2 and 2016 servers on Azure that are protected by Defender for Servers Plan 2. You can choose to deploy the MDE unified solution to those machines.
+[The new MDE unified solution](/microsoft-365/security/defender-endpoint/configure-server-endpoints#new-windows-server-2012-r2-and-2016-functionality-in-the-modern-unified-solution) doesn't use or require installation of the Log Analytics agent. The unified solution is automatically deployed for all Windows servers connected through Azure Arc and multicloud servers connected through the multicloud connectors, except for Windows 2012 R2 and 2016 servers on Azure that are protected by Defender for Servers Plan 2. You can choose to deploy the MDE unified solution to those machines.
You'll deploy Defender for Endpoint to your Windows machines in one of two ways - depending on whether you've already deployed it to your Windows machines:
To remove the Defender for Endpoint solution from your machines:
1. Remove the MDE.Windows/MDE.Linux extension from the machine.
-1. Follow the steps in [Offboard devices from the Microsoft Defender for Endpoint service](/microsoft-365/security/defender-endpoint/offboard-machines?view=o365-worldwide&preserve-view=true) from the Defender for Endpoint documentation.
+1. Follow the steps in [Offboard devices from the Microsoft Defender for Endpoint service](/microsoft-365/security/defender-endpoint/offboard-machines) from the Defender for Endpoint documentation.
## FAQ - Microsoft Defender for Cloud integration with Microsoft Defender for Endpoint
The discount will be effective starting from the approval date, and won't take p
### How do I switch from a third-party EDR tool? Full instructions for switching from a non-Microsoft endpoint solution are available in the Microsoft Defender for Endpoint documentation: [Migration overview](/windows/security/threat-protection/microsoft-defender-atp/switch-to-microsoft-defender-migration).
-<!-- ### Which Microsoft Defender for Endpoint plan is supported in Defender for Servers?
+### Which Microsoft Defender for Endpoint plan is supported in Defender for Servers?
-Defender for Servers Plan 1 provides the capabilities of [Microsoft Defender for Endpoint Plan 1](/microsoft-365/security/defender-endpoint/defender-endpoint-plan-1?view=o365-worldwide). Defender for Servers Plan 2 provides the capabilities of [Microsoft Defender for Endpoint Plan 2](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint?view=o365-worldwide). -->
+Defender for Servers Plan 1 and Plan 2 provide the capabilities of [Microsoft Defender for Endpoint Plan 2](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint).
## Next steps
defender-for-cloud Workflow Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/workflow-automation.md
Title: Workflow automation in Microsoft Defender for Cloud | Microsoft Docs description: Learn how to create and automate workflows in Microsoft Defender for Cloud Previously updated : 11/09/2021 Last updated : 06/26/2022 # Automate responses to Microsoft Defender for Cloud triggers
This article describes the workflow automation feature of Microsoft Defender for
|-|:-| |Release state:|General availability (GA)| |Pricing:|Free|
-|Required roles and permissions:|**Security admin role** or **Owner** on the resource group<br>Must also have write permissions for the target resource<br><br>To work with Azure Logic Apps workflows, you must also have the following Logic Apps roles/permissions:<br> - [Logic App Operator](../role-based-access-control/built-in-roles.md#logic-app-operator) permissions are required or Logic App read/trigger access (this role can't create or edit logic apps; only *run* existing ones)<br> - [Logic App Contributor](../role-based-access-control/built-in-roles.md#logic-app-contributor) permissions are required for Logic App creation and modification<br>If you want to use Logic App connectors, you may need additional credentials to sign in to their respective services (for example, your Outlook/Teams/Slack instances)|
+|Required roles and permissions:|**Security admin role** or **Owner** on the resource group<br>Must also have write permissions for the target resource<br><br>To work with Azure Logic Apps workflows, you must also have the following Logic Apps roles/permissions:<br> - [Logic App Operator](../role-based-access-control/built-in-roles.md#logic-app-operator) permissions are required or Logic App read/trigger access (this role can't create or edit logic apps; only *run* existing ones)<br> - [Logic App Contributor](../role-based-access-control/built-in-roles.md#logic-app-contributor) permissions are required for Logic App creation and modification<br>If you want to use Logic App connectors, you may need other credentials to sign in to their respective services (for example, your Outlook/Teams/Slack instances)|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet)|
This article describes the workflow automation feature of Microsoft Defender for
:::image type="content" source="./media/workflow-automation/list-of-workflow-automations.png" alt-text="Screenshot of workflow automation page showing the list of defined automations." lightbox="./media/workflow-automation/list-of-workflow-automations.png":::
- From this page you can create new automation rules, as well as enable, disable, or delete existing ones.
+ From this page you can create new automation rules and enable, disable, or delete existing ones.
-1. To define a new workflow, click **Add workflow automation**. The options pane for your new automation opens.
+1. To define a new workflow, select **Add workflow automation**. The options pane for your new automation opens.
- :::image type="content" source="./media/workflow-automation/add-workflow.png" alt-text="Add workflow automations pane.":::
+ :::image type="content" source="./media/workflow-automation/add-workflow.png" alt-text="Add workflow automations pane." lightbox="media/workflow-automation/add-workflow.png":::
Here you can enter: 1. A name and description for the automation.
This article describes the workflow automation feature of Microsoft Defender for
1. From the Actions section, select **visit the Logic Apps page** to begin the Logic App creation process.
+ :::image type="content" source="media/workflow-automation/visit-logic.png" alt-text="Screenshot that shows where to select the visit the Logic Apps page link in the Actions section of the Add workflow automation pane." border="true":::
+ You'll be taken to Azure Logic Apps.
-1. Select **Add**.
+1. Select **(+) Add**.
- [![Creating a new Logic App.](media/workflow-automation/logic-apps-create-new.png)](media/workflow-automation/logic-apps-create-new.png#lightbox)
+ :::image type="content" source="media/workflow-automation/logic-apps-create-new.png" alt-text="Screenshot of the create a logic app screen." lightbox="media/workflow-automation/logic-apps-create-new.png":::
-1. Enter a name, resource group, and location, and select **Review and create** > **Create**.
+1. Fill out all required fields and select **Review + Create**.
The message **Deployment is in progress** appears. Wait for the deployment complete notification to appear and select **Go to resource** from the notification.
-1. In your new logic app, you can choose from built-in, predefined templates from the security category. Or you can define a custom flow of events to occur when this process is triggered.
+1. Review the information you entered and select **Create**.
+
+ In your new logic app, you can choose from built-in, predefined templates from the security category. Or you can define a custom flow of events to occur when this process is triggered.
> [!TIP] > Sometimes in a logic app, parameters are included in the connector as part of a string and not in their own field. For an example of how to extract parameters, see step #14 of [Working with logic app parameters while building Microsoft Defender for Cloud workflow automations](https://techcommunity.microsoft.com/t5/azure-security-center/working-with-logic-app-parameters-while-building-azure-security/ba-p/1342121).
- The logic app designer supports these Defender for Cloud triggers:
+ The logic app designer supports the following Defender for Cloud triggers:
- **When a Microsoft Defender for Cloud Recommendation is created or triggered** - If your logic app relies on a recommendation that gets deprecated or replaced, your automation will stop working and you'll need to update the trigger. To track changes to recommendations, use the [release notes](release-notes.md).
This article describes the workflow automation feature of Microsoft Defender for
[![Sample logic app.](media/workflow-automation/sample-logic-app.png)](media/workflow-automation/sample-logic-app.png#lightbox)
-1. After you've defined your logic app, return to the workflow automation definition pane ("Add workflow automation"). Click **Refresh** to ensure your new Logic App is available for selection.
+1. After you've defined your logic app, return to the workflow automation definition pane ("Add workflow automation"). Select **Refresh** to ensure your new Logic App is available for selection.
![Refresh.](media/workflow-automation/refresh-the-list-of-logic-apps.png)
-1. Select your logic app and save the automation. Note that the Logic App dropdown only shows Logic Apps with supporting Defender for Cloud connectors mentioned above.
+1. Select your logic app and save the automation. The Logic App dropdown only shows Logic Apps with supporting Defender for Cloud connectors mentioned above.
## Manually trigger a Logic App You can also run Logic Apps manually when viewing any security alert or recommendation.
-To manually run a Logic App, open an alert or a recommendation and click **Trigger Logic App**:
+To manually run a Logic App, open an alert or a recommendation and select **Trigger Logic App**:
[![Manually trigger a Logic App.](media/workflow-automation/manually-trigger-logic-app.png)](media/workflow-automation/manually-trigger-logic-app.png#lightbox)
To implement these policies:
1. Open each tab and set the parameters as desired: 1. In the **Basics** tab, set the scope for the policy. To use centralized management, assign the policy to the Management Group containing the subscriptions that will use the workflow automation configuration.
- 1. In the **Parameters** tab, set the resource group and data type details.
- > [!TIP]
- > Each parameter has a tooltip explaining the options available to you.
- >
- > Azure Policy's parameters tab (1) provides access to similar configuration options as Defender for Cloud's workflow automation page (2).
- > :::image type="content" source="./media/workflow-automation/azure-policy-next-to-workflow-automation.png" alt-text="Comparing the parameters in workflow automation with Azure Policy." lightbox="./media/workflow-automation/azure-policy-next-to-workflow-automation.png":::
+ 1. In the **Parameters** tab, enter the required information.
- 1. Optionally, to apply this assignment to existing subscriptions, open the **Remediation** tab and select the option to create a remediation task.
+ :::image type="content" source="media/workflow-automation/parameters-tab.png" alt-text="Screenshot of the parameters tab.":::
-1. Review the summary page and select **Create**.
+ 1. (Optional) To apply this assignment to existing subscriptions, open the **Remediation** tab and select the option to create a remediation task.
+1. Review the summary page and select **Create**.
## Data types schemas
-To view the raw event schemas of the security alerts or recommendations events passed to the Logic App instance, visit the [Workflow automation data types schemas](https://aka.ms/ASCAutomationSchemas). This can be useful in cases where you are not using Defender for Cloud's built-in Logic App connectors mentioned above, but instead are using Logic App's generic HTTP connector - you could use the event JSON schema to manually parse it as you see fit.
+To view the raw event schemas of the security alerts or recommendations events passed to the Logic App instance, visit the [Workflow automation data types schemas](https://aka.ms/ASCAutomationSchemas). This can be useful in cases where you aren't using Defender for Cloud's built-in Logic App connectors mentioned above, but instead are using Logic App's generic HTTP connector - you could use the event JSON schema to manually parse it as you see fit.
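When you use Logic App's generic HTTP connector rather than the built-in Defender for Cloud connectors, the incoming event arrives as raw JSON that you parse yourself, as the paragraph above notes. A minimal sketch of that manual parsing follows; the field names (`AlertDisplayName`, `Severity`) and severity values are assumptions for illustration only — consult the published data type schemas linked above for the authoritative shape.

```python
import json

# Hypothetical raw event body received by a generic HTTP trigger.
# Field names are illustrative, not the authoritative schema.
raw_event = '{"AlertDisplayName": "Suspicious process executed", "Severity": "High"}'

def should_escalate(event_json: str, min_severity: str = "High") -> bool:
    """Parse a raw alert event and decide whether it meets the escalation bar."""
    order = {"Informational": 0, "Low": 1, "Medium": 2, "High": 3}
    event = json.loads(event_json)
    severity = event.get("Severity", "Informational")
    return order.get(severity, 0) >= order[min_severity]

print(should_escalate(raw_event))  # → True
```

A filter step like this is the kind of logic you'd place between the HTTP trigger and the downstream action (for example, posting to Teams) in a custom flow.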
## FAQ - Workflow automation
defender-for-iot Alert Engine Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/alert-engine-messages.md
Malware engine alerts describe detected malicious network activity.
| Malware Test File Detected - EICAR AV Success | An EICAR AV test file was detected in traffic between two devices (over any transport - TCP or UDP). The file isn't malware. It's used to confirm that the antivirus software is installed correctly. Demonstrate what happens when a virus is found, and check internal procedures and reactions when a virus is found. Antivirus software should detect EICAR as if it were a real virus. | Major | | Suspicion of Conficker Malware | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | | Suspicion of Denial Of Service Attack | A source device attempted to initiate an excessive number of new connections to a destination device. This may indicate a Denial Of Service (DOS) attack against the destination device, and might interrupt device functionality, affect performance and service availability, or cause unrecoverable errors. | Critical |
-| Suspicion of Malicious Activity | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major |
+| Suspicion of Malicious Activity | Suspicious network activity was detected. This activity may be associated with an attack that triggered known 'Indicators of Compromise' (IOCs). Alert metadata should be reviewed by the security team. | Major |
| Suspicion of Malicious Activity (BlackEnergy) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | | Suspicion of Malicious Activity (DarkComet) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | | Suspicion of Malicious Activity (Duqu) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical |
Malware engine alerts describe detected malicious network activity.
| Suspicion of Remote Code Execution with PsExec | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | | Suspicion of Remote Windows Service Management | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | | Suspicious Executable File Detected on Endpoint | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major |
-| Suspicious Traffic Detected | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical |
+| Suspicious Traffic Detected | Suspicious network activity was detected. This activity may be associated with an attack that triggered known 'Indicators of Compromise' (IOCs). Alert metadata should be reviewed by the security team. | Critical |
| Backup Activity with Antivirus Signatures | Traffic detected between the source device and the destination backup server triggered this alert. The traffic includes backup of antivirus software that might contain malware signatures. This is most likely legitimate backup activity. | Warning | ## Operational engine alerts
Operational engine alerts describe detected operational incidents, or malfunctio
## Next steps You can [Manage alert events](how-to-manage-the-alert-event.md).
-Learn how to [Forward alert information](how-to-forward-alert-information-to-partners.md).
+Learn how to [Forward alert information](how-to-forward-alert-information-to-partners.md).
defender-for-iot How To Create And Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-create-and-manage-users.md
By default, each sensor and on-premises management console is installed with a *
## Role-based permissions The following user roles are available: -- **Read only**: Read-only users perform tasks such as viewing alerts and devices on the device map. These users have access to options displayed under **Navigation**.
+- **Read only**: Read-only users perform tasks such as viewing alerts and devices on the device map. These users have access to options displayed under **Discover**.
-- **Security analyst**: Security Analysts have Read-only user permissions. They can also perform actions on devices, acknowledge alerts, and use investigation tools. These users have access to options displayed under **Navigation** and **Analysis**.
+- **Security analyst**: Security Analysts have Read-only user permissions. They can also perform actions on devices, acknowledge alerts, and use investigation tools. These users have access to options displayed under **Discover** and **Analyze**.
- **Administrator**: Administrators have access to all tools, including system configurations, creating and managing users, and more. These users have access to options displayed under **Discover**, **Analyze**, and **Manage** sections of the console main screen.
digital-twins How To Move Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-move-regions.md
In this section, you'll prepare to recreate your instance by downloading your or
### Download models, twins, and graph with Azure Digital Twins Explorer
-First, open **Azure Digital Twins Explorer** for your Azure Digital Twins instance in the [Azure portal](https://portal.azure.com). To do so, navigate to the Azure Digital Twins instance in the portal by searching for its name in the portal search bar. Then, select the **Go to Explorer (Preview)** button.
+First, open **Azure Digital Twins Explorer** for your Azure Digital Twins instance in the [Azure portal](https://portal.azure.com). To do so, navigate to the Azure Digital Twins instance in the portal by searching for its name in the portal search bar. Then, select the **Open Azure Digital Twins Explorer (preview)** button.
Selecting this button will open an Azure Digital Twins Explorer window connected to this instance.
event-hubs Event Hubs For Kafka Ecosystem Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-for-kafka-ecosystem-overview.md
Title: Use event hub from Apache Kafka app - Azure Event Hubs | Microsoft Docs description: This article provides information on Apache Kafka support by Azure Event Hubs. - Previously updated : 08/30/2021+ Last updated : 06/27/2022 + # Use Azure Event Hubs from Apache Kafka applications Event Hubs provides an endpoint compatible with the Apache Kafka® producer and consumer APIs that can be used by most existing Apache Kafka client applications as an alternative to running your own Apache Kafka cluster. Event Hubs supports Apache Kafka's producer and consumer APIs clients at version 1.0 and above.
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
If you are remote and do not have fiber connectivity or you want to explore othe
| **Rogers** | Cologix, Equinix | Montreal, Toronto | | **[Spectrum Enterprise](https://enterprise.spectrum.com/services/cloud/cloud-connect.html)** | Equinix | Chicago, Dallas, Los Angeles, New York, Silicon Valley | | **[Tamares Telecom](http://www.tamarestelecom.com/our-services/#Connectivity)** | Equinix | London |
-| **[Tata Teleservices](https://www.tatateleservices.com/business-services/data-services/secure-cloud-connect)** | Tata Communications | Chennai, Mumbai |
+| **[Tata Teleservices](https://www.tatatelebusiness.com/data-services/ez-cloud-connect/)** | Tata Communications | Chennai, Mumbai |
| **[TDC Erhverv](https://tdc.dk/Produkter/cloudaccessplus)** | Equinix | Amsterdam | | **[Telecom Italia Sparkle](https://www.tisparkle.com/our-platform/corporate-platform/sparkle-cloud-connect#catalogue)**| Equinix | Amsterdam | | **[Telekom Deutschland GmbH](https://cloud.telekom.de/de/infrastruktur/managed-it-services/managed-hybrid-infrastructure-mit-microsoft-azure)** | Interxion | Amsterdam, Frankfurt |
governance Definition Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/definition-structure.md
Title: Details of the policy definition structure description: Describes how policy definitions are used to establish conventions for Azure resources in your organization. Previously updated : 09/01/2021 Last updated : 06/27/2022 ++ # Azure Policy definition structure
The following Resource Provider modes are fully supported:
definitions, see [Integrate Azure Key Vault with Azure Policy](../../../key-vault/general/azure-policy.md).
-The following Resource Provider mode is currently supported as a **preview**:
+The following Resource Provider modes are currently supported as a **[preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/)**:
-- `Microsoft.ContainerService.Data` for managing admission controller rules on
- [Azure Kubernetes Service](../../../aks/intro-kubernetes.md). Definitions using this Resource
- Provider mode **must** use the [EnforceRegoPolicy](./effects.md#enforceregopolicy) effect. This
- mode is _deprecated_.
+- `Microsoft.Network.Data` for managing [Azure Virtual Network Manager](../../../virtual-network-manager/overview.md) custom membership policies using Azure Policy.
+- `Microsoft.Kubernetes.Data` for Azure Policy components that target [Azure Kubernetes Service (AKS)](../../../aks/intro-kubernetes.md) resources such as pods, namespaces, and ingresses.
> [!NOTE] >Unless explicitly stated, Resource Provider modes only support built-in policy definitions, and exemptions are not supported at the component-level.
_common_ properties used by Azure Policy and in built-ins. Each `metadata` prope
- `preview` (boolean): True or false flag for if the policy definition is _preview_. - `deprecated` (boolean): True or false flag for if the policy definition has been marked as _deprecated_.-- `portalReview` (string): Determines whether parameters should be reviewed in the portal, regardless of the required input.
+- `portalReview` (string): Determines whether parameters should be reviewed in the portal, regardless of the required input.
> [!NOTE] > The Azure Policy service uses `version`, `preview`, and `deprecated` properties to convey level of
hdinsight Hdinsight Hadoop R Scaler Sparkr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-r-scaler-sparkr.md
rxDataStep(weatherDF, outFile = weatherDF1, rowsPerRead = 50000, overwrite = T,
## Importing the airline and weather data to Spark DataFrames
-Now we use the SparkR [read.df()](https://spark.apache.org/docs/latest/api/R/read.df.html) function to import the weather and airline data to Spark DataFrames. This function, like many other Spark methods, is executed lazily, meaning that they're queued for execution but not executed until required.
+Now we use the SparkR [read.df()](https://spark.apache.org/docs/3.3.0/api/R/reference/read.df.html) function to import the weather and airline data to Spark DataFrames. This function, like many other Spark methods, is executed lazily, meaning that they're queued for execution but not executed until required.
``` airPath <- file.path(inputDataDir, "AirOnTime08to12CSV")
weatherDF <- rename(weatherDF,
## Joining the weather and airline data
-We now use the SparkR [join()](https://spark.apache.org/docs/latest/api/R/join.html) function to do a left outer join of the airline and weather data by departure AirportID and datetime. The outer join allows us to retain all the airline data records even if there's no matching weather data. Following the join, we remove some redundant columns, and rename the kept columns to remove the incoming DataFrame prefix introduced by the join.
+We now use the SparkR [join()](https://spark.apache.org/docs/3.3.0/api/R/reference/join.html) function to do a left outer join of the airline and weather data by departure AirportID and datetime. The outer join allows us to retain all the airline data records even if there's no matching weather data. Following the join, we remove some redundant columns, and rename the kept columns to remove the incoming DataFrame prefix introduced by the join.
``` logmsg('Join airline data with weather at Origin Airport')
healthcare-apis Fhir Service Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-service-bicep.md
+
+ Title: Deploy Azure Health Data Services FHIR service using Bicep
+description: Learn how to deploy FHIR service by using Bicep
++++ Last updated : 05/27/2022++
+# Deploy a FHIR service within Azure Health Data Services using Bicep
+
+In this article, you'll learn how to deploy FHIR service within the Azure Health Data Services using Bicep.
+
+[Bicep](../../azure-resource-manager/bicep/overview.md) is a domain-specific language (DSL) that uses declarative syntax to deploy Azure resources. It provides concise syntax, reliable type safety, and support for code reuse. Bicep offers the best authoring experience for your infrastructure-as-code solutions in Azure.
+
+## Prerequisites
+
+# [PowerShell](#tab/PowerShell)
+
+* An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
+* If you want to run the code locally:
+ * [Azure PowerShell](/powershell/azure/install-az-ps).
+
+# [CLI](#tab/CLI)
+
+* An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
+* If you want to run the code locally:
+ * A Bash shell (such as Git Bash, which is included in [Git for Windows](https://gitforwindows.org)).
+ * [Azure CLI](/cli/azure/install-azure-cli).
+++
+## Review the Bicep file
+
+The Bicep file used in this article is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/azure-api-for-fhir/).
++
+The Bicep file defines three Azure resources:
+
+* [Microsoft.HealthcareApis/workspaces](/azure/templates/microsoft.healthcareapis/workspaces): create a Microsoft.HealthcareApis/workspaces resource.
+
+* [Microsoft.HealthcareApis/workspaces/fhirservices](/azure/templates/microsoft.healthcareapis/workspaces/fhirservices): create a Microsoft.HealthcareApis/workspaces/fhirservices resource.
+
+* [Microsoft.Storage/storageAccounts](/azure/templates/microsoft.storage/storageaccounts): create a Microsoft.Storage/storageAccounts resource.
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters serviceName=<service-name> location=<location>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -serviceName "<service-name>" -location "<location>"
+ ```
+
+
+
+ Replace **\<service-name\>** with the name of the service. Replace **\<location\>** with the location of the Azure API for FHIR. Location options include:
+
+ * australiaeast
+ * eastus
+ * eastus2
+ * japaneast
+ * northcentralus
+ * northeurope
+ * southcentralus
+ * southeastasia
+ * uksouth
+ * ukwest
+ * westcentralus
+ * westeurope
+ * westus2
+
+ > [!NOTE]
+ > When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Review the deployed resources
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+> [!NOTE]
+> You can also verify that the FHIR service is up and running by opening a browser and navigating to `https://<yourfhirservice>.azurehealthcareapis.com/metadata`. If the
+> capability statement is automatically displayed or downloaded, your deployment was successful. Make sure to replace **\<yourfhirservice\>** with the **\<service-name\>** you
+> used in the deployment step of this quickstart.
+
+## Clean up the resources
+
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and its resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+In this quickstart guide, you've deployed the FHIR service within Azure Health Data Services using Bicep. For more information about FHIR service supported features, proceed to the following article:
+
+>[!div class="nextstepaction"]
+>[Supported FHIR Features](fhir-features-supported.md)
iot-dps Concepts Deploy At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-deploy-at-scale.md
+
+ Title: Best practices for large-scale Microsoft Azure IoT device deployments
+description: This article describes best practices, patterns, and sample code you can use to help with large-scale deployments.
++++ Last updated : 06/27/2022+++
+# Best practices for large-scale IoT device deployments
+
+Scaling an IoT solution to millions of devices can be challenging. Large-scale solutions often need to be designed in accordance with service and subscription limits. When customers use Azure IoT Device Provisioning Service, they use it in combination with other Azure IoT platform services and components, such as IoT Hub and Azure IoT device SDKs. This article describes best practices, patterns, and sample code you can incorporate in your design to take advantage of these services and allow your deployments to scale out. By following these simple patterns and practices right from the design phase of the project, you can maximize the performance of your IoT devices.
+
+## First-time device provisioning
+
+First-time provisioning is the process of onboarding a device for the first time as a part of an IoT solution. When working with large-scale deployments, it's important to schedule the provisioning process to avoid overload situations caused by all the devices attempting to connect at the same time.
+
+### Device deployment using a staggered provisioning schedule
+
+For deployment of devices at the scale of millions, registering all the devices at once may result in the DPS instance being overwhelmed due to throttling (HTTP response code `429, Too Many Requests`) and a failure to register your devices. To prevent such throttling, you should use a staggered registration schedule for the devices. The recommended batch size should be in accordance with DPS [quotas and limits](about-iot-dps.md#quotas-and-limits). For instance, if the registration rate is 200 devices per minute, onboard the devices in batches of 200, one batch per minute.
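As an illustrative sketch (not part of any DPS SDK), the staggered schedule can be computed ahead of time; the 200-devices-per-minute rate here is an assumed limit for the example:

```python
import math

# Hypothetical helper: split a fleet into batches that respect an assumed
# DPS registration rate (devices per minute), starting one batch per minute.
def staggered_schedule(total_devices, rate_per_minute=200):
    """Return (first_index, last_index_exclusive, start_delay_seconds) per batch."""
    batches = []
    for i in range(math.ceil(total_devices / rate_per_minute)):
        start = i * rate_per_minute
        end = min(start + rate_per_minute, total_devices)
        batches.append((start, end, i * 60))  # each batch starts a minute apart
    return batches

# 1,000 devices at 200/minute -> 5 batches, the last starting at t+240s
print(staggered_schedule(1000)[-1])  # (800, 1000, 240)
```

A real deployment would drive each batch's registrations from this schedule rather than registering the whole fleet at once.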
+
+### Timing logic when retrying operations
+
+If transient faults occur due to a service being busy, retry logic enables devices to successfully connect to the IoT cloud. However, a large number of retries could further degrade a busy service that's running close to or at its capacity. As with any Azure service, you should implement an intelligent retry mechanism with exponential backoff. More information on different retry patterns can be found in [the Retry design pattern](/azure/architecture/patterns/retry) and [transient fault handling](/azure/architecture/best-practices/transient-faults).
+
+Rather than immediately retrying a deployment when throttled, you should wait until the time specified in the `retry-after` header. If there's no retry header available from the service, this algorithm can help achieve a smoother device onboarding experience:
+
+```console
+min_retry_delay_msec = 1000
+max_retry_delay_msec = (1.0 / <load>) * <T> * 1000
+max_random_jitter_msec = max_retry_delay_msec
+```
+
+Where `<load>` is a configurable factor with values > 0 (it scales the average reconnect rate relative to the service's connections-per-second limit) and `<T>` is the absolute minimum time to cold boot all the devices (calculated as `T = N / cps`, where `N` is the total number of devices and `cps` is the service limit for the number of connections per second). Each device should then delay reconnecting for a random amount of time between `min_retry_delay_msec` and `max_retry_delay_msec`.
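A minimal Python sketch of this calculation follows; the `load`, `N`, and `cps` values are illustrative, and the jitter is drawn uniformly between the two bounds:

```python
import random

MIN_RETRY_DELAY_MSEC = 1000

def max_retry_delay_msec(load, n, cps):
    # T = N / cps: absolute minimum time (seconds) to cold boot all devices
    t = n / cps
    return (1.0 / load) * t * 1000

def next_retry_delay_msec(load, n, cps):
    upper = max(MIN_RETRY_DELAY_MSEC, max_retry_delay_msec(load, n, cps))
    # Random jitter keeps devices from reconnecting in lockstep
    return random.uniform(MIN_RETRY_DELAY_MSEC, upper)

# Example: 100,000 devices, cps limit of 200, load factor 1.0 -> T = 500 s
print(max_retry_delay_msec(1.0, 100_000, 200))  # 500000.0
```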
+
+For more information on the timing of retry operations, see [Retry timing](https://github.com/Azure/azure-sdk-for-c/blob/main/sdk/docs/iot/mqtt_state_machine.md#retry-timing).
+
+## Reprovisioning devices
+
+Reprovisioning is the process of provisioning a device to an IoT Hub after it has been successfully connected previously. There are many reasons a device might need to reconnect to an IoT Hub, such as:
+
+- A device could reboot due to power outage, loss in network connectivity, geo-relocation, firmware updates, factory reset, or certificate key rotation.
+- The IoT Hub instance could be unavailable due to an unplanned IoT Hub outage.
+
+You shouldn't need to provision every time a device reboots; in most scenarios, reprovisioned devices end up connected to the same IoT hub. Instead, the device should attempt to connect directly to its IoT hub using the information that was cached from a previous successful connection.
+
+### Devices that can store a connection string
+
+If the devices have the ability to store the connection string to the previously provisioned and connected IoT Hub, use the same string to skip the entire reprovisioning process and directly connect to the IoT Hub. This reduces the latency in successfully connecting to the appropriate IoT Hub. There are two possible cases here:
+
+- The IoT Hub to connect upon device reboot is the same as the previously connected IoT Hub.
+
+ The connection string retrieved from the cache should work fine and the device must attempt to reconnect to the same endpoint. No need for a fresh start for the provisioning process.
+
+- The IoT Hub to connect upon device reboot is different from the previously connected IoT Hub.
+
+ The connection string stored in memory is inaccurate. Attempting to connect to the same endpoint won't be successful and so the retry mechanism for the IoT Hub connection is triggered. Once the threshold for the IoT Hub connection failure is reached, the retry mechanism automatically triggers a fresh start to the provisioning process.
+
+### Devices that can't store a connection string
+
+In certain scenarios, devices don't have a large enough footprint or memory to accommodate caching of the connection string from a past successful IoT Hub connection. You can use the [Device Registration Status Lookup API](/rest/api/iot-dps/device/runtime-registration/device-registration-status-lookup) to retrieve the connection string from the previous time the device was provisioned and then attempt a connection to that IoT Hub. At every device reboot, that API needs to be invoked to get the device registration status. If data related to a previously connected IoT Hub was returned by the API call, you can connect to the same IoT Hub. If the API returns a null payload, then there's no previous connection available and the reprovisioning process through DPS is automatically triggered.
+
+### Reprovisioning sample
+
+These code examples show a class for reading to and writing from the device cache, followed by code that attempts to reconnect a device to the IoT Hub if a connection string is found and reprovisioning through DPS if it isn't.
+
+```csharp
+using Newtonsoft.Json;
+using System;
+using System.Collections.Generic;
+using System.IO;
+using System.Linq;
+using System.Text;
+
+namespace ProvisioningCache
+{
+ public class ProvisioningDetailsFileStorage : IProvisioningDetailCache
+ {
+ private string dataDirectory = null;
+
+ public ProvisioningDetailsFileStorage()
+ {
+ dataDirectory = Environment.GetEnvironmentVariable("ProvisioningDetailsDataDirectory");
+ }
+
+ public ProvisioningResponse GetProvisioningDetailResponseFromCache(string registrationId)
+ {
+ try
+ {
+ var provisioningResponseFile = File.ReadAllText(Path.Combine(dataDirectory, registrationId));
+
+ ProvisioningResponse response = JsonConvert.DeserializeObject<ProvisioningResponse>(provisioningResponseFile);
+
+ return response;
+ }
+            catch (Exception)
+            {
+                // Cache miss or unreadable cache entry; the caller falls back
+                // to reprovisioning through DPS when null is returned
+                return null;
+            }
+ }
+
+ public void SetProvisioningDetailResponse(string registrationId, ProvisioningResponse provisioningDetails)
+ {
+ var provisioningDetailsJson = JsonConvert.SerializeObject(provisioningDetails);
+
+ File.WriteAllText(Path.Combine(dataDirectory, registrationId), provisioningDetailsJson);
+ }
+ }
+}
+```
+
+You could use code similar to the following to determine how to proceed with reconnecting a device after determining whether there's connection info in the cache:
+
+```csharp
+IProvisioningDetailCache provisioningDetailCache = new ProvisioningDetailsFileStorage();
+
+var provisioningDetails = provisioningDetailCache.GetProvisioningDetailResponseFromCache(registrationId);
+
+// If no info is available in cache, go through DPS for provisioning
+if(provisioningDetails == null)
+{
+ logger.LogInformation($"Initializing the device provisioning client...");
+ using var transport = new ProvisioningTransportHandlerAmqp();
+ ProvisioningDeviceClient provClient = ProvisioningDeviceClient.Create(dpsEndpoint, dpsScopeId, security, transport);
+ logger.LogInformation($"Initialized for registration Id {security.GetRegistrationID()}.");
+ logger.LogInformation("Registering with the device provisioning service... ");
+
+ // This method will attempt to retry in case of a transient fault
+ DeviceRegistrationResult result = await registerDevice(provClient);
+ provisioningDetails = new ProvisioningResponse() { iotHubHostName = result.AssignedHub, deviceId = result.DeviceId };
+ provisioningDetailCache.SetProvisioningDetailResponse(registrationId, provisioningDetails);
+}
+
+// If there was IoT Hub info from previous provisioning in the cache, try connecting to the IoT Hub directly
+// If trying to connect to the IoT Hub returns status 429, make sure to retry operation honoring
+// the retry-after header
+// If trying to connect to the IoT Hub returns a 500-series server error, have an exponential backoff with
+// at least 5 seconds of wait-time
+// For all response codes 429 and 5xx, reprovision through DPS
+// Ideally, you should also support a method to manually trigger provisioning on demand
+if (provisioningDetails != null)
+{
+ logger.LogInformation($"Device {provisioningDetails.deviceId} registered to {provisioningDetails.iotHubHostName}.");
+ logger.LogInformation("Creating TPM authentication for IoT Hub...");
+ IAuthenticationMethod auth = new DeviceAuthenticationWithTpm(provisioningDetails.deviceId, security);
+ logger.LogInformation($"Testing the provisioned device with IoT Hub...");
+ DeviceClient iotClient = DeviceClient.Create(provisioningDetails.iotHubHostName, auth, TransportType.Amqp);
+ logger.LogInformation($"Registering the Method Call back for Reprovisioning...");
+ await iotClient.SetMethodHandlerAsync("Reprovision",reprovisionDirectMethodCallback, iotClient);
+
+ // Now you should start a thread into this method and do your business while the DeviceClient is still connected
+ await startBackgroundWork(iotClient);
+ logger.LogInformation("Wait until closed...");
+
+ // Wait until the app unloads or is cancelled
+ var cts = new CancellationTokenSource();
+ AssemblyLoadContext.Default.Unloading += (ctx) => cts.Cancel();
+ Console.CancelKeyPress += (sender, cpe) => cts.Cancel();
+
+ await WhenCancelled(cts.Token);
+ await iotClient.CloseAsync();
+ Console.WriteLine("Finished.");
+}
+```
+
+## IoT Hub connectivity considerations
+
+- Any single IoT hub is limited to 1 million devices plus modules. If you plan to have more than a million devices, cap the number of devices to 1 million per hub and add hubs as needed when increasing the scale of your deployment. For more information, see [IoT Hub quotas](../iot-hub/iot-hub-devguide-quotas-throttling.md).
+- If you have plans for more than a million devices and you need to support them in a specific region (such as in an EU region for data residency requirements), you can [contact us](../iot-fundamentals/iot-support-help.md) to ensure that the region you're deploying to has the capacity to support your current and future scale.
+
+Recommended device logic when connecting to IoT Hub via DPS:
+
+- On first boot, devices should use the [DPS registration API](/rest/api/iot-dps/device/runtime-registration/register-device) to register.
+- On subsequent boots, devices should:
+ - If possible, cache their provisioning details and connect using the cached information.
+ - If they can't cache IoT hub connection information, use the [Device Registration Status Lookup API](/rest/api/iot-dps/device/runtime-registration/device-registration-status-lookup) to return connection information after registration is complete. This API call is a much lighter-weight operation for DPS than a full device registration operation.
+ - For devices in either case described above, devices should use the following logic in response to error codes when connecting:
+ - When receiving any of the 500-series of server error responses, retry the connection using either cached credentials or the results of a Device Registration Status Lookup API call.
+ - When receiving `401, Unauthorized` or `403, Forbidden` or `404, Not Found`, perform a full re-registration by calling the [DPS registration API](/rest/api/iot-dps/device/runtime-registration/register-device).
+- At any time, devices should be capable of responding to a user-initiated reprovisioning command.
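The connection-error handling in the bullets above could be sketched as a small decision helper; the action names are placeholders for this example, not SDK APIs:

```python
def next_action(status_code):
    """Map an IoT Hub connection failure to the recommended next step."""
    if status_code in (401, 403, 404):
        # Credentials or hub assignment are no longer valid:
        # re-register through the DPS registration API
        return "full-reregistration"
    if 500 <= status_code <= 599:
        # Transient server error: retry with cached credentials or the
        # results of a Device Registration Status Lookup call
        return "retry-connection"
    return "unhandled"

print(next_action(503))  # retry-connection
```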
+
+Other IoT Hub scenarios when using DPS:
+
+- IoT Hub failover: Devices should continue to work, because connection information shouldn't change on failover; logic should be in place to retry the connection once the hub is available again.
+- Change of IoT Hub: Assigning devices to a different IoT Hub should be done by using a [custom allocation policy](tutorial-custom-allocation-policies.md).
+- Retry IoT Hub connection: You shouldn't use an aggressive retry strategy, instead allowing a gap of at least a minute before a retry.
+- IoT Hub partitions: If your device strategy leans heavily on telemetry, the number of device-to-cloud partitions should be increased.
+
+## Monitoring devices
+
+An important part of the overall deployment is monitoring the solution end-to-end to make sure that the system is performing appropriately. There are several ways to monitor the health of a service for large-scale deployment of IoT devices. The following patterns have proven effective in monitoring the service:
+
+- Create an application to query each enrollment group on a DPS instance, get the total number of devices registered to that group, and then aggregate the numbers from across various enrollment groups. This number provides an exact count of the devices that are currently registered via DPS and can be used to monitor the state of the service.
+- Monitor device registrations over a specific period. For instance, monitor registration rates for a DPS instance over the prior five days. Note that this approach only provides an approximate figure and is also capped to a time period.
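Both patterns reduce to simple aggregation once the per-group counts or registration timestamps have been queried from the service; the service-side queries are omitted here and the data is illustrative:

```python
from datetime import datetime, timedelta

def total_registered(group_counts):
    """Exact count: sum the devices registered in each enrollment group."""
    return sum(group_counts.values())

def registrations_in_window(timestamps, days=5, now=None):
    """Approximate rate: registrations seen over the prior `days` days."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=days)
    return sum(1 for t in timestamps if t >= cutoff)

print(total_registered({"group-a": 12000, "group-b": 8500}))  # 20500
```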
+
+## Next steps
+
+- [Provision devices across load-balanced IoT Hubs](tutorial-provision-multiple-hubs.md)
+- [Retry timing](https://github.com/Azure/azure-sdk-for-c/blob/main/sdk/docs/iot/mqtt_state_machine.md#retry-timing) when retrying operations
load-balancer Load Balancer Floating Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-floating-ip.md
To get the list of interface names you have on your VM, type this command:
```console ip addr ```
-For each loopback interface, repeat these commands, which assigns the floating IP to the loopback alias:
+For each loopback interface, repeat these commands, which assign the floating IP to the loopback alias:
```console sudo ip addr add floatingip/floatingipnetmask dev lo:0
sudo ufw allow 80/tcp
- Learn more about [Azure Load Balancer](load-balancer-overview.md). - Learn about [Health Probes](load-balancer-custom-probe-overview.md). - Learn about [Standard Load Balancer Diagnostics](load-balancer-standard-diagnostics.md).-- Learn more about [Network Security Groups](../virtual-network/network-security-groups-overview.md).
+- Learn more about [Network Security Groups](../virtual-network/network-security-groups-overview.md).
machine-learning Azure Machine Learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/azure-machine-learning-release-notes.md
Previously updated : 04/25/2022 Last updated : 06/27/2022 # Azure Machine Learning Python SDK release notes
In this article, learn about Azure Machine Learning Python SDK releases. For th
__RSS feed__: Get notified when this page is updated by copying and pasting the following URL into your feed reader: `https://docs.microsoft.com/api/search/rss?search=%22Azure+machine+learning+release+notes%22&locale=en-us`
+## 2022-06-27
+
+ + **azureml-automl-dnn-nlp**
+ + Remove duplicate labels column from multi-label predictions
+ + **azureml-contrib-automl-pipeline-steps**
+ + Many Models now provides the capability to generate prediction output in csv format as well.
+ + Many Models prediction will now include column names in the output file in the case of **csv** file format.
+ + **azureml-core**
+ + ADAL authentication is now deprecated and all authentication classes now use MSAL authentication. Please install azure-cli>=2.30.0 to utilize MSAL based authentication when using AzureCliAuthentication class.
+ + Added fix to force environment registration when `Environment.build(workspace)`. The fix solves confusion of the latest environment built instead of the asked one when environment is cloned or inherited from another instance.
+ + SDK warning message to restart Compute Instance before May 31, 2022, if it was created before September 19, 2021
+ + **azureml-interpret**
+ + Updated azureml-interpret package to interpret-community 0.26.*
+ + In the azureml-interpret package, add ability to get raw and engineered feature names from scoring explainer. Also, add example to the scoring notebook to get feature names from the scoring explainer and add documentation about raw and engineered feature names.
+ + **azureml-mlflow**
+ + azureml-core as a dependency of azureml-mlflow has been removed.
+ + MLflow projects and local deployments will require azureml-core, which needs to be installed separately.
+ + Adding support for creating endpoints and deploying to them via the MLflow client plugin.
+ + **azureml-responsibleai**
+ + Updated azureml-responsibleai package and environment images to latest responsibleai and raiwidgets 0.19.0 release
+ + **azureml-train-automl-client**
+ + Now OutputDatasetConfig is supported as the input of the MM/HTS pipeline builder. The mappings are: 1) OutputTabularDatasetConfig -> treated as unpartitioned tabular dataset. 2) OutputFileDatasetConfig -> treated as file dataset.
+ + **azureml-train-automl-runtime**
+ + Added data validation that requires the number of minority class samples in the dataset to be at least as much as the number of CV folds requested.
+ + Automatic cross-validation parameter configuration is now available for automl forecasting tasks. Users can now specify "auto" for n_cross_validations and cv_step_size or leave them empty, and automl will provide those configurations based on your data. However, currently this feature is not supported when TCN is enabled.
+ + Forecasting Parameters in Many Models and Hierarchical Time Series can now be passed via object rather than using individual parameters in dictionary.
+ + Enabled forecasting model endpoints with quantiles support to be consumed in Power BI.
+ + Updated automl scipy dependency upper bound to 1.5.3 from 1.5.2
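The minority-class data validation noted above can be sketched in plain Python as follows — a hypothetical standalone check for illustration, not the actual azureml-train-automl-runtime implementation:

```python
from collections import Counter

def has_enough_minority_samples(labels, n_cv_folds):
    """Return True when every class has at least as many samples as CV folds."""
    class_counts = Counter(labels)
    return min(class_counts.values()) >= n_cv_folds

# A dataset whose rarest class has only 2 samples cannot support 3 CV folds.
print(has_enough_minority_samples(["a", "a", "b", "b", "b"], 3))       # False
print(has_enough_minority_samples(["a", "a", "a", "b", "b", "b"], 3))  # True
```

Without this check, a fold could end up containing no samples of the minority class at all, which breaks stratified cross-validation.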
+ ## 2022-04-25
+ ### Azure Machine Learning SDK for Python v1.41.0
machine-learning How To Deploy Fpga Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-fpga-web-service.md
client = PredictionClient(address=address,
service_name=aks_service.name)
```
-Since this classifier was trained on the [ImageNet](http://www.image-net.org/) data set, map the classes to human-readable labels.
+Since this classifier was trained on the ImageNet data set, map the classes to human-readable labels.
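The label-mapping step can be sketched with plain Python; the label list below is a truncated stand-in for the real 1,000-entry ImageNet class file, and the prediction indices are hypothetical:

```python
# Truncated stand-in for the ImageNet synset labels file (normally 1,000 entries).
labels = ["tench", "goldfish", "great white shark", "tiger shark"]

# Hypothetical model output: class indices sorted by descending score.
top_indices = [1, 3, 0]

# Map each predicted class index to its human-readable label.
readable = [labels[i] for i in top_indices]
print(readable)  # ['goldfish', 'tiger shark', 'tench']
```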
```python
import requests
managed-grafana Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/overview.md
You can create dashboards instantaneously by importing existing charts directly
> [!div class="nextstepaction"] > [Create an Azure Managed Grafana Preview instance using the Azure portal](./quickstart-managed-grafana-portal.md)+
managed-grafana Quickstart Managed Grafana Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/quickstart-managed-grafana-cli.md
Get started by creating an Azure Managed Grafana Preview workspace using the Azu
> [!NOTE] > The CLI experience for Azure Managed Grafana Preview is part of the amg extension for the Azure CLI (version 2.30.0 or higher). The extension will automatically install the first time you run an `az grafana` command.
+> [!NOTE]
+> Azure Managed Grafana doesn't currently support personal [Microsoft accounts](https://account.microsoft.com).
+ ## Prerequisite An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).
managed-grafana Quickstart Managed Grafana Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/quickstart-managed-grafana-portal.md
Get started by creating an Azure Managed Grafana Preview workspace using the Azu
An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).
+> [!NOTE]
+> Azure Managed Grafana doesn't currently support personal [Microsoft accounts](https://account.microsoft.com).
+ ## Create a Managed Grafana workspace 1. Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
migrate Common Questions Discovery Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/common-questions-discovery-assessment.md
ms. Previously updated : 06/09/2020 Last updated : 05/05/2022
This article answers common questions about discovery, assessment, and dependenc
- [General questions](resources-faq.md) about Azure Migrate - Questions about the [Azure Migrate appliance](common-questions-appliance.md) - Questions about [server migration](common-questions-server-migration.md)-- Get questions answered in the [Azure Migrate forum](https://social.msdn.microsoft.com/forums/azure/home?forum=AzureMigrate
+- Get questions answered in the [Azure Migrate forum](https://social.msdn.microsoft.com/forums/azure/home?forum=AzureMigrate)
## What geographies are supported for discovery and assessment with Azure Migrate?
You can discover up to 10,000 servers from VMware environment, up to 5,000 serve
## How do I choose the assessment type? -- Use **Azure VM assessments** when you want to assess servers from your on-premises [VMware](how-to-set-up-appliance-vmware.md) and [Hyper-V](how-to-set-up-appliance-hyper-v.md) environment, and [physical servers](how-to-set-up-appliance-physical.md) for migration to Azure VMs. [Learn More](concepts-assessment-calculation.md)-- Use assessment type **Azure SQL** when you want to assess your on-premises SQL Server from your VMware environment for migration to Azure SQL Database or Azure SQL Managed Instance. [Learn More](concepts-assessment-calculation.md)-- Use assessment type **Azure App Service** when you want to assess your on-premises ASP.NET web apps running on IIS web server from your VMware environment for migration to Azure App Service. [Learn More](concepts-assessment-calculation.md)-- Use **Azure VMware Solution (AVS)** assessments when you want to assess your on-premises [VMware VMs](how-to-set-up-appliance-vmware.md) for migration to [Azure VMware Solution (AVS)](../azure-vmware/introduction.md) using this assessment type. [Learn more](concepts-azure-vmware-solution-assessment-calculation.md)
+- Use **Azure VM assessments** when you want to assess servers from your on-premises [VMware](how-to-set-up-appliance-vmware.md) and [Hyper-V](how-to-set-up-appliance-hyper-v.md) environment, and [physical servers](how-to-set-up-appliance-physical.md) for migration to Azure VMs. [Learn More](concepts-assessment-calculation.md).
+- Use assessment type **Azure SQL** when you want to assess your on-premises SQL Server from your VMware environment for migration to SQL Server on Azure VM or Azure SQL Database or Azure SQL Managed Instance. [Learn More](concepts-azure-sql-assessment-calculation.md).
+- Use assessment type **Azure App Service** when you want to assess your on-premises ASP.NET web apps running on IIS web server from your VMware environment for migration to Azure App Service. [Learn More](concepts-assessment-calculation.md).
+- Use **Azure VMware Solution (AVS)** assessments when you want to assess your on-premises [VMware VMs](how-to-set-up-appliance-vmware.md) for migration to [Azure VMware Solution (AVS)](../azure-vmware/introduction.md) using this assessment type. [Learn more](concepts-azure-vmware-solution-assessment-calculation.md).
- You can use a common group with VMware machines only to run both types of assessments. If you are running AVS assessments in Azure Migrate for the first time, it is advisable to create a new group of VMware machines. ## Why is performance data missing for some/all servers in my Azure VM and/or AVS assessment report?
You can discover up to 10,000 servers from VMware environment, up to 5,000 serve
For "Performance-based" assessment, the assessment report export says 'PercentageOfCoresUtilizedMissing' or 'PercentageOfMemoryUtilizedMissing' when the Azure Migrate appliance cannot collect performance data for the on-premises servers. Check: - If the servers are powered on for the duration for which you are creating the assessment-- If only memory counters are missing and you are trying to assess servers in Hyper-V environment. In this scenario, please enable dynamic memory on the servers and 'Recalculate' the assessment to reflect the latest changes. The appliance can collect memory utilization values for severs in Hyper-V environment only when the server has dynamic memory enabled.
+- If only memory counters are missing and you are trying to assess servers in a Hyper-V environment. In this scenario, enable dynamic memory on the servers and 'Recalculate' the assessment to reflect the latest changes. The appliance can collect memory utilization values for servers in a Hyper-V environment only when the server has dynamic memory enabled.
- If all of the performance counters are missing, ensure that outbound connections on ports 443 (HTTPS) are allowed.
For "Performance-based" assessment, the assessment report export says 'Percentag
## Why is performance data missing for some/all SQL instances/databases in my Azure SQL assessment?
-To ensure performance data is collected, please check:
+To ensure performance data is collected, check:
-- If the SQL Servers are powered on for the duration for which you are creating the assessment-- If the connection status of the SQL agent in Azure Migrate is 'Connected' and check the last heartbeat -- If Azure Migrate connection status for all SQL instances is 'Connected' in the discovered SQL instance blade-- If all of the performance counters are missing, ensure that outbound connections on ports 443 (HTTPS) are allowed
+- If the SQL Servers are powered on for the duration for which you are creating the assessment.
+- If the connection status of the SQL agent in Azure Migrate is 'Connected', and check the last heartbeat.
+- If Azure Migrate connection status for all SQL instances is 'Connected' in the discovered SQL instance blade.
+- If all of the performance counters are missing, ensure that outbound connections on ports 443 (HTTPS) are allowed.
If any of the performance counters are missing, Azure SQL assessment recommends the smallest Azure SQL configuration for that instance/database.
Performance data is not captured for Azure App Service assessment and hence you
The confidence rating is calculated for "Performance-based" assessments based on the percentage of [available data points](./concepts-assessment-calculation.md#ratings) needed to compute the assessment. Below are the reasons why an assessment could get a low confidence rating: -- You did not profile your environment for the duration for which you are creating the assessment. For example, if you are creating an assessment with performance duration set to one week, you need to wait for at least a week after you start the discovery for all the data points to get collected. If you cannot wait for the duration, please change the performance duration to a smaller period and **Recalculate** the assessment.-- Assessment is not able to collect the performance data for some or all the servers in the assessment period. For a high confidence rating, please ensure that:
+- You did not profile your environment for the duration for which you are creating the assessment. For example, if you are creating an assessment with performance duration set to one week, you need to wait for at least a week after you start the discovery for all the data points to get collected. If you cannot wait for the duration, change the performance duration to a smaller period and **Recalculate** the assessment.
+- Assessment is not able to collect the performance data for some or all the servers in the assessment period. For a high confidence rating, ensure that:
- Servers are powered on for the duration of the assessment - Outbound connections on ports 443 are allowed
- - For Hyper-V Servers dynamic memory is enabled
+ - For Hyper-V Servers, dynamic memory is enabled
- The connection status of agents in Azure Migrate are 'Connected' and check the last heartbeat - For Azure SQL assessments, Azure Migrate connection status for all SQL instances is "Connected" in the discovered SQL instance blade
- Please **Recalculate** the assessment to reflect the latest changes in confidence rating.
+ **Recalculate** the assessment to reflect the latest changes in confidence rating.
-- For Azure VM and AVS assessments, few servers were created after discovery had started. For example, if you are creating an assessment for the performance history of last one month, but few servers were created in the environment only a week ago. In this case, the performance data for the new servers will not be available for the entire duration and the confidence rating would be low. [Learn more](./concepts-assessment-calculation.md#confidence-ratings-performance-based)-- For Azure SQL assessments, few SQL instances or databases were created after discovery had started. For example, if you are creating an assessment for the performance history of last one month, but few SQL instances or databases were created in the environment only a week ago. In this case, the performance data for the new servers will not be available for the entire duration and the confidence rating would be low. [Learn more](./concepts-azure-sql-assessment-calculation.md#confidence-ratings)
+- For Azure VM and AVS assessments, few servers were created after discovery had started. For example, if you are creating an assessment for the performance history of last one month, but few servers were created in the environment only a week ago. In this case, the performance data for the new servers will not be available for the entire duration and the confidence rating would be low. [Learn more](./concepts-assessment-calculation.md#confidence-ratings-performance-based).
+- For Azure SQL assessments, few SQL instances or databases were created after discovery had started. For example, if you are creating an assessment for the performance history of last one month, but few SQL instances or databases were created in the environment only a week ago. In this case, the performance data for the new servers will not be available for the entire duration and the confidence rating would be low. [Learn more](./concepts-azure-sql-assessment-calculation.md#confidence-ratings).
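The banding from percentage of available data points to a star rating can be sketched as below. The band boundaries (0–20% → 1 star up to 81–100% → 5 stars) are taken from the linked ratings documentation; treat them as assumptions rather than the service's exact logic:

```python
def confidence_rating(pct_data_points_available: float) -> int:
    """Map % of available performance data points to a 1-5 star rating.

    Assumed bands: 0-20% -> 1 star, 21-40% -> 2, 41-60% -> 3,
    61-80% -> 4, 81-100% -> 5.
    """
    for stars, upper_bound in enumerate((20, 40, 60, 80, 100), start=1):
        if pct_data_points_available <= upper_bound:
            return stars
    return 5

print(confidence_rating(15))  # 1 (low confidence: most data points missing)
print(confidence_rating(55))  # 3
print(confidence_rating(95))  # 5 (high confidence)
```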
## Why is my RAM utilization greater than 100%?
There could be two reasons:
## The number of Azure VM or AVS assessments on the Discovery and assessment tool are incorrect
- To remediate this, click on the total number of assessments to navigate to all the assessments and recalculate the Azure VM or AVS assessment. The discovery and assessment tool will then show the correct count for that assessment type.
+ To remediate this, click the total number of assessments to navigate to all the assessments and recalculate the Azure VM or AVS assessment. The discovery and assessment tool will then show the correct count for that assessment type.
## I want to try out the new Azure SQL assessment
-Discovery and assessment of SQL Server instances and databases running in your VMware environment is now in preview. Get started with [this tutorial](tutorial-discover-vmware.md). If you want to try out this feature in an existing project, please ensure that you have completed the [prerequisites](how-to-discover-sql-existing-project.md) in this article.
+Discovery and assessment of SQL Server instances and databases running in your VMware environment is now in preview. Get started with [this tutorial](tutorial-discover-vmware.md). If you want to try out this feature in an existing project, ensure that you have completed the [prerequisites](how-to-discover-sql-existing-project.md) in this article.
## I want to try out the new Azure App Service assessment
-Discovery and assessment of .NET web apps running in your VMware environment is now in preview. Get started with [this tutorial](tutorial-discover-vmware.md). If you want to try out this feature in an existing project, please ensure that you have completed the [prerequisites](how-to-discover-sql-existing-project.md) in this article.
+Discovery and assessment of .NET web apps running in your VMware environment is now in preview. Get started with [this tutorial](tutorial-discover-vmware.md). If you want to try out this feature in an existing project, ensure that you have completed the [prerequisites](how-to-discover-sql-existing-project.md) in this article.
## I can't see some servers when I am creating an Azure SQL assessment -- Azure SQL assessment can only be done on servers running where SQL instances were discovered. If you don't see the servers and SQL instances that you wish to assess, please wait for some time for the discovery to get completed and then create the assessment.-- If you are not able to see a previously created group while creating the assessment, please remove any non-VMware server or any server without a SQL instance from the group.
+- Azure SQL assessment can only be done on servers where SQL instances were discovered. If you don't see the servers and SQL instances that you wish to assess, wait for the discovery to complete and then create the assessment.
+- If you are not able to see a previously created group while creating the assessment, remove any non-VMware server or any server without a SQL instance from the group.
- If you are running Azure SQL assessments in Azure Migrate for the first time, it is advisable to create a new group of servers. ## I can't see some servers when I am creating an Azure App Service assessment -- Azure App Service assessment can only be done on servers running where web server role was discovered. If you don't see the servers that you wish to assess, please wait for some time for the discovery to get completed and then create the assessment.-- If you are not able to see a previously created group while creating the assessment, please remove any non-VMware server or any server without a web app from the group.
+- Azure App Service assessment can only be done on servers where the web server role was discovered. If you don't see the servers that you wish to assess, wait for the discovery to complete and then create the assessment.
+- If you are not able to see a previously created group while creating the assessment, remove any non-VMware server or any server without a web app from the group.
- If you are running Azure App Service assessments in Azure Migrate for the first time, it is advisable to create a new group of servers. ## I want to understand how the readiness for my instance was computed
-The readiness for your SQL instances has been computed after doing a feature compatibility check with the targeted Azure SQL deployment type (Azure SQL Database or Azure SQL Managed Instance). [Learn more](./concepts-azure-sql-assessment-calculation.md#calculate-readiness)
+The readiness for your SQL instances has been computed after doing a feature compatibility check with the targeted Azure SQL deployment type (SQL Server on Azure VM or Azure SQL Managed Instance or Azure SQL Database). [Learn more](./concepts-azure-sql-assessment-calculation.md#calculate-readiness).
## I want to understand how the readiness for my web apps is computed
If there are on-premises changes to servers that are in a group that's been asse
- Disk size change(GB Allocated) - Nic properties update. Example: Mac address changes, IP address addition etc.
-Please **Recalculate** the assessment to reflect the latest changes in the assessment.
+**Recalculate** the assessment to reflect the latest changes in the assessment.
### Azure SQL assessment
If there are changes to on-premises SQL instances and databases that are in a gr
- Total database size in a SQL instance changed by more than 20% - Change in number of processor cores and/or allocated memory
-Please **Recalculate** the assessment to reflect the latest changes in the assessment.
+**Recalculate** the assessment to reflect the latest changes in the assessment.
## Why was I recommended a particular target deployment type?
-Azure Migrate recommends a specific Azure SQL deployment type that is compatible with your SQL instance. Migrating to a Microsoft recommended target reduces your overall migration effort. This Azure SQL configuration (SKU) has been recommended after considering the performance characteristics of your SQL instance and the databases it manages. If multiple Azure SQL configurations are eligible, we recommend the one, which is the most cost effective. [Learn more](./concepts-azure-sql-assessment-calculation.md#calculate-sizing)
+Azure Migrate recommends a specific Azure SQL deployment type that is compatible with your SQL instance. Migrating to a Microsoft-recommended target reduces your overall migration effort. This Azure SQL configuration (SKU) is recommended after considering the performance characteristics of your SQL instance and the databases it manages. If multiple Azure SQL configurations are eligible, we recommend the one that is the most cost-effective. [Learn more](./concepts-azure-sql-assessment-calculation.md#calculate-sizing).
## What deployment target should I choose if my SQL instance is ready for Azure SQL DB and Azure SQL MI? If your instance is ready for both Azure SQL DB and Azure SQL MI, we recommend the target deployment type for which the estimated cost of Azure SQL configuration is lower.
-## Why is my instance marked as Potentially ready for Azure VM in my Azure SQL assessment?
-
-This can happen when the target deployment type chosen in the assessment properties is **Recommended** and the SQL instance is not ready for Azure SQL Database and Azure SQL Managed Instance. The user is recommended to create an assessment in Azure migrate with assessment type as **Azure VM** to determine if the Server on which the instance is running is ready to migrate to an Azure VM.
-The user is recommended to create an assessment in Azure Migrate with assessment type as **Azure VM** to determine if the server on which the instance is running is ready to migrate to an Azure VM instead:
--- Azure VM assessments in Azure Migrate are currently lift-an-shift focused and will not consider the specific performance metrics for running SQL instances and databases on the Azure virtual machine.-- When you run an Azure VM assessment on a server, the recommended size and cost estimates will be for all instances running on the server and can be migrated to an Azure VM using the Server Migration tool. Before you migrate, [review the performance guidelines](/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist) for SQL Server on Azure virtual machines.- ## I can't see some databases in my assessment even though the instance is part of the assessment
-The Azure SQL assessment only includes databases that are in online status. In case the database is in any other status, the assessment ignores the readiness, sizing, and cost calculation for such databases. In case you wish you assess such databases, please change the status of the database and recalculate the assessment in some time.
+The Azure SQL assessment only includes databases that are in online status. If a database is in any other status, the assessment ignores the readiness, sizing, and cost calculation for that database. If you wish to assess such databases, change the status of the database and recalculate the assessment after some time.
-## I want to compare costs for running my SQL instances on Azure VM Vs Azure SQL Database/Azure SQL Managed Instance
+## I want to compare costs for running my SQL instances on Azure VM vs Azure SQL Database/Azure SQL Managed Instance
-You can create an assessment with type **Azure VM** on the same group that was used in your **Azure SQL** assessment. You can then compare the two reports side by side. Though, Azure VM assessments in Azure Migrate are currently lift-and-shift focused and will not consider the specific performance metrics for running SQL instances and databases on the Azure virtual machine. When you run an Azure VM assessment on a server, the recommended size and cost estimates will be for all instances running on the server and can be migrated to an Azure VM using the Server Migration tool. Before you migrate, [review the performance guidelines](/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist) for SQL Server on Azure virtual machines.
+You can create an assessment with type **Azure VM** on the same group that was used in your **Azure SQL** assessment. You can then compare the two reports side by side. However, Azure VM assessments in Azure Migrate are currently lift-and-shift focused and will not consider the specific performance metrics for running SQL instances and databases on the Azure virtual machine. When you run an Azure VM assessment on a server, the recommended size and cost estimates will be for all instances running on the server, which can be migrated to an Azure VM using the Server Migration tool. Before you migrate, [review the performance guidelines](/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist.md) for SQL Server on Azure virtual machines.
## The storage cost in my Azure SQL assessment is zero
-For Azure SQL Managed Instance, there is no storage cost added for the first 32 GB/instance/month storage and additional storage cost is added for storage in 32 GB increments. [Learn More](https://azure.microsoft.com/pricing/details/azure-sql/sql-managed-instance/single/)
+For Azure SQL Managed Instance, there is no storage cost added for the first 32 GB/instance/month storage and additional storage cost is added for storage in 32 GB increments. [Learn More](https://azure.microsoft.com/pricing/details/azure-sql/sql-managed-instance/single/).
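The billing arithmetic described above can be sketched as follows; the 32 GB free allowance and 32 GB increment size come from the sentence above, and this is an illustration only, not a pricing calculator:

```python
import math

FREE_STORAGE_GB = 32  # first 32 GB per instance per month is included
INCREMENT_GB = 32     # additional storage is billed in 32 GB increments

def billable_storage_gb(provisioned_gb: int) -> int:
    """Return the storage (GB) that actually incurs a charge."""
    extra = max(0, provisioned_gb - FREE_STORAGE_GB)
    return math.ceil(extra / INCREMENT_GB) * INCREMENT_GB

print(billable_storage_gb(32))   # 0 -> the assessment shows zero storage cost
print(billable_storage_gb(40))   # 32
print(billable_storage_gb(100))  # 96
```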
## I can't see some groups when I am creating an Azure VMware Solution (AVS) assessment
Yes, Azure Migrate requires vCenter Server in a VMware environment to perform di
With as-on-premises sizing, Azure Migrate doesn't consider server performance data for assessment. Azure Migrate assesses VM sizes based on the on-premises configuration. With performance-based sizing, sizing is based on utilization data.
-For example, if an on-premises server has four cores and 8 GB of memory at 50% CPU utilization and 50% memory utilization:
+For example, if an on-premises server has 4 cores and 8 GB of memory at 50% CPU utilization and 50% memory utilization:
-- As-on-premises sizing will recommend an Azure VM SKU that has four cores and 8 GB of memory.-- Performance-based sizing will recommend a VM SKU that has two cores and 4 GB of memory because the utilization percentage is considered.
+- As-on-premises sizing will recommend an Azure VM SKU that has 4 cores and 8 GB of memory.
+- Performance-based sizing will recommend a VM SKU that has 2 cores and 4 GB of memory because the utilization percentage is considered.
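Using the numbers above, the core sizing arithmetic can be sketched as below; comfort factor and rounding to an actual VM SKU are omitted for clarity:

```python
def performance_based_requirement(cores, memory_gb, cpu_util, mem_util):
    """Effective requirement = allocated capacity x observed utilization."""
    return cores * cpu_util, memory_gb * mem_util

# 4 cores / 8 GB at 50% CPU and 50% memory utilization:
print(performance_based_requirement(4, 8, 0.5, 0.5))  # (2.0, 4.0)
```

As-on-premises sizing would instead pass the allocation through unchanged (4 cores, 8 GB), which is why it tends to recommend larger, costlier SKUs for underutilized servers.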
Similarly, disk sizing depends on sizing criteria and storage type:
For agent-based dependency visualization:
For agent-based visualization, you can visualize dependencies for up to one hour. You can go back as far as one month to a specific date in history, but the maximum duration for visualization is one hour. For example, you can use the time duration in the dependency map to view dependencies for yesterday, but you can view dependencies only for a one-hour window. However, you can use Azure Monitor logs to [query dependency data](./how-to-create-group-machine-dependencies.md) for a longer duration.
-For agentless visualization, you can view the dependency map of a single server from a duration of between one hour and 30 days.
+For agentless visualization, you can view the dependency map of a single server for a duration between an hour and 30 days.
## Can I visualize dependencies for groups of more than 10 servers?
migrate Concepts Assessment Calculation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-assessment-calculation.md
If you use performance-based sizing in an Azure VM assessment, the assessment ma
- The assessment considers the performance history of the server to identify the VM size and disk type in Azure. > [!NOTE]
-> If you import serves by using a CSV file, the performance values you specify (CPU utilization, Memory utilization, Disk IOPS and throughput) are used if you choose performance-based sizing. You will not be able to provide performance history and percentile information.
+> If you import servers by using a CSV file, the performance values you specify (CPU utilization, Memory utilization, Disk IOPS and throughput) are used if you choose performance-based sizing. You will not be able to provide performance history and percentile information.
- This method is especially helpful if you've overallocated the on-premises server, utilization is low, and you want to rightsize the Azure VM to save costs. - If you don't want to use the performance data, reset the sizing criteria to as-is on-premises, as described in the previous section.
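The percentile selection mentioned in the note can be sketched with a simple nearest-rank percentile; the service's exact interpolation method isn't specified here, so treat this as illustrative:

```python
import math

def percentile_utilization(samples, pct):
    """Nearest-rank percentile of collected utilization samples (pct in 0-100)."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

cpu_samples = [10, 20, 30, 40, 90]  # e.g. % CPU readings over the window
print(percentile_utilization(cpu_samples, 95))  # 90
print(percentile_utilization(cpu_samples, 50))  # 30
```

Sizing on the 95th percentile rather than the peak prevents a single transient spike from inflating the recommended SKU.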
migrate Concepts Azure Sql Assessment Calculation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-azure-sql-assessment-calculation.md
description: Learn about Azure SQL assessments in Azure Migrate Discovery and as
Previously updated : 02/07/2021 Last updated : 05/05/2022 # Assessment Overview (migrate to Azure SQL)
-This article provides an overview of assessments for migrating on-premises SQL Server instances from a VMware environment to Azure SQL databases or Managed Instances using the [Azure Migrate: Discovery and assessment tool](./migrate-services-overview.md#azure-migrate-discovery-and-assessment-tool).
+This article provides an overview of assessments for migrating on-premises SQL Server instances from a VMware environment to SQL Server on Azure VM or Azure SQL Database or Azure SQL Managed Instance using the [Azure Migrate: Discovery and assessment tool](./migrate-services-overview.md#azure-migrate-discovery-and-assessment-tool).
## What's an assessment? An assessment with the Discovery and assessment tool is a point in time snapshot of data and measures the readiness and estimates the effect of migrating on-premises servers to Azure.
There are three types of assessments you can create using the Azure Migrate: Dis
**Assessment Type** | **Details** | **Azure VM** | Assessments to migrate your on-premises servers to Azure virtual machines. <br/><br/> You can assess your on-premises servers in [VMware](how-to-set-up-appliance-vmware.md) and [Hyper-V](how-to-set-up-appliance-hyper-v.md) environment, and [physical servers](how-to-set-up-appliance-physical.md) for migration to Azure VMs using this assessment type.
-**Azure SQL** | Assessments to migrate your on-premises SQL servers from your VMware environment to Azure SQL Database or Azure SQL Managed Instance. <br/><br/> If your SQL servers are running on a non-VMware platform, you can assess readiness by using the [Data Migration Assistant](/sql/dma/dma-assess-sql-data-estate-to-sqldb).
+**Azure SQL** | Assessments to migrate your on-premises SQL servers from your VMware environment to SQL Server on Azure VM or Azure SQL Database or Azure SQL Managed Instance. <br/><br/> If your SQL servers are running on a non-VMware platform, you can assess readiness by using the [Data Migration Assistant](/sql/dma/dma-assess-sql-data-estate-to-sqldb).
**Azure App Service** | Assessments to migrate your on-premises ASP.NET web apps, running on IIS web servers, from your VMware environment to Azure App Service.
-**Azure VMware Solution (AVS)** | Assessments to migrate your on-premises servers to [Azure VMware Solution (AVS)](../azure-vmware/introduction.md). <br/><br/> You can assess your on-premises [VMware VMs](how-to-set-up-appliance-vmware.md) for migration to Azure VMware Solution (AVS) using this assessment type. [Learn more](concepts-azure-vmware-solution-assessment-calculation.md)
+**Azure VMware Solution (AVS)** | Assessments to migrate your on-premises servers to [Azure VMware Solution (AVS)](../azure-vmware/introduction.md). <br/><br/> You can assess your on-premises [VMware VMs](how-to-set-up-appliance-vmware.md) for migration to Azure VMware Solution (AVS) using this assessment type. [Learn more](concepts-azure-vmware-solution-assessment-calculation.md).
> [!NOTE] > If the number of Azure VM or AVS assessments are incorrect on the Discovery and assessment tool, click on the total number of assessments to navigate to all the assessments and recalculate the Azure VM or AVS assessments. The Discovery and assessment tool will then show the correct count for that assessment type.
An Azure SQL assessment provides one sizing criteria:
## How do I assess my on-premises SQL servers?
-You can assess your on-premises SQL Server instances by using the configuration and utilization data collected by a lightweight Azure Migrate appliance. The appliance discovers on-premises SQL server instances and databases and sends the configuration and performance data to Azure Migrate. [Learn More](how-to-set-up-appliance-vmware.md)
+You can assess your on-premises SQL Server instances by using the configuration and utilization data collected by a lightweight Azure Migrate appliance. The appliance discovers on-premises SQL server instances and databases and sends the configuration and performance data to Azure Migrate. [Learn More](how-to-set-up-appliance-vmware.md).
## How do I assess with the appliance? If you're deploying an Azure Migrate appliance to discover on-premises servers, do the following steps:
The appliance collects performance data for compute settings with these steps:
## What properties are used to create and customize an Azure SQL assessment?
-Here's what's included in Azure SQL assessment properties:
-
-**Property** | **Details**
- |
-**Target location** | The Azure region to which you want to migrate. Azure SQL configuration and cost recommendations are based on the location that you specify.
-**Target deployment type** | The target deployment type you want to run the assessment on: <br/><br/> Select **Recommended**, if you want Azure Migrate to assess the readiness of your SQL servers for migrating to Azure SQL MI and Azure SQL DB, and recommend the best suited target deployment option, target tier, Azure SQL configuration and monthly estimates.<br/><br/>Select **Azure SQL DB**, if you want to assess your SQL servers for migrating to Azure SQL Databases only and review the target tier, Azure SQL DB configuration and monthly estimates.<br/><br/>Select **Azure SQL MI**, if you want to assess your SQL servers for migrating to Azure SQL Databases only and review the target tier, Azure SQL MI configuration and monthly estimates.
-**Reserved capacity** | Specifies reserved capacity so that cost estimations in the assessment take them into account.<br/><br/> If you select a reserved capacity option, you can't specify "Discount (%)".
-**Sizing criteria** | This property is used to right-size the Azure SQL configuration. <br/><br/> It is defaulted to **Performance-based** which means the assessment will collect the SQL Server instances and databases performance metrics to recommend an optimal-sized Azure SQL Managed Instance and/or Azure SQL Database tier/configuration recommendation.
-**Performance history** | Performance history specifies the duration used when performance data is evaluated.
-**Percentile utilization** | Percentile utilization specifies the percentile value of the performance sample used for rightsizing.
-**Comfort factor** | The buffer used during assessment. It accounts for issues like seasonal usage, short performance history, and likely increases in future usage.<br/><br/> For example, a 10-core instance with 20% utilization normally results in a two-core instance. With a comfort factor of 2.0, the result is a four-core instance instead.
-**Offer/Licensing program** | The [Azure offer](https://azure.microsoft.com/support/legal/offer-details/) in which you're enrolled. Currently you can only choose from Pay-as-you-go and Pay-as-you-go Dev/Test. Note that you can avail additional discount by applying reserved capacity and Azure Hybrid Benefit on top of Pay-as-you-go offer.
-**Service tier** | The most appropriate service tier option to accommodate your business needs for migration to Azure SQL Database and/or Azure SQL Managed Instance:<br/><br/>**Recommended** if you want Azure Migrate to recommend the best suited service tier for your servers. This can be General purpose or Business critical. <br/><br/> **General Purpose** If you want an Azure SQL configuration designed for budget-oriented workloads. [Learn More](/azure/azure-sql/database/service-tier-general-purpose) <br/><br/> **Business Critical** If you want an Azure SQL configuration designed for low-latency workloads with high resiliency to failures and fast failovers. [Learn More](/azure/azure-sql/database/service-tier-business-critical)
-**Currency** | The billing currency for your account.
-**Discount (%)** | Any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%.
-**Azure Hybrid Benefit** | Specifies whether you already have a SQL Server license. <br/><br/> If you do and they're covered with active Software Assurance of SQL Server Subscriptions, you can apply for the Azure Hybrid Benefit when you bring licenses to Azure.
+The Azure SQL assessment properties include:
+
+**Section** | **Property** | **Details**
+| | |
+Target and pricing settings | **Target location** | The Azure region to which you want to migrate. Azure SQL configuration and cost recommendations are based on the location that you specify.
+Target and pricing settings | **Environment type** | The environment for the SQL deployments to apply pricing applicable to Production or Dev/Test.
+Target and pricing settings | **Offer/Licensing program** | The Azure offer in which you're enrolled. Currently the field is defaulted to Pay-as-you-go, which gives you retail Azure prices. <br/><br/>You can get an additional discount by applying reserved capacity and Azure Hybrid Benefit on top of the Pay-as-you-go offer.<br/>You can apply Azure Hybrid Benefit on top of the Pay-as-you-go offer and Dev/Test environment. The assessment does not support applying reserved capacity on top of the Pay-as-you-go offer and Dev/Test environment. <br/>If the offer is set to *Pay-as-you-go* and Reserved capacity is set to *No reserved instances*, the monthly cost estimates are calculated by multiplying the number of hours chosen in the VM uptime field with the hourly price of the recommended SKU.
+Target and pricing settings | **Reserved Capacity** | You can specify reserved capacity so that cost estimations in the assessment take them into account.<br/><br/> If you select a reserved capacity option, you can't specify "Discount (%)" or "VM uptime". <br/>If the Reserved capacity is set to *1 year reserved* or *3 years reserved*, the monthly cost estimates are calculated by multiplying 744 hours in the VM uptime field with the hourly price of the recommended SKU.
+Target and pricing settings | **Currency** | The billing currency for your account.
+Target and pricing settings | **Discount (%)** | Any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%.
+Target and pricing settings | **VM uptime** | You can specify the duration (days per month/hour per day) that servers/VMs will run. This is useful for computing cost estimates for SQL Server on Azure VM where you are aware that Azure VMs might not run continuously. <br/> Cost estimates for servers where recommended target is *SQL Server on Azure VM* are based on the duration specified. Default is 31 days per month/24 hours per day.
+Target and pricing settings | **Azure Hybrid Benefit** | You can specify whether you already have a Windows Server and/or SQL Server license. Azure Hybrid Benefit is a licensing benefit that helps you significantly reduce the costs of running your workloads in the cloud. It works by letting you use your on-premises Software Assurance-enabled Windows Server and SQL Server licenses on Azure. For example, if you have SQL Server licenses and they're covered with active Software Assurance of SQL Server Subscriptions, you can apply for the Azure Hybrid Benefit when you bring licenses to Azure.
+Assessment criteria | **Sizing criteria** | Defaulted to *Performance-based*, which means Azure Migrate will collect performance metrics pertaining to SQL instances and the databases managed by them to recommend an optimal-sized SQL Server on Azure VM and/or Azure SQL Database and/or Azure SQL Managed Instance configuration.
+Assessment criteria | **Performance history** | You can indicate the data duration on which you want to base the assessment. (Default is one day)
+Assessment criteria | **Percentile utilization** | You can indicate the percentile value you want to use for the performance sample. (Default is 95th percentile)
+Assessment criteria | **Comfort factor** | You can indicate the buffer you want to use during assessment. This accounts for issues like seasonal usage, short performance history, and likely increases in future usage.
+Azure SQL Managed Instance sizing | **Service Tier** | You can choose the most appropriate service tier option to accommodate your business needs for migration to Azure SQL Managed Instance:<br/><br/>Select *Recommended* if you want Azure Migrate to recommend the best suited service tier for your servers. This can be General purpose or Business critical.<br/><br/>Select *General Purpose* if you want an Azure SQL configuration designed for budget-oriented workloads.<br/><br/>Select *Business Critical* if you want an Azure SQL configuration designed for low-latency workloads with high resiliency to failures and fast failovers.
+Azure SQL Managed Instance sizing | **Instance type** | Defaulted to *Single instance*.
+Azure SQL Managed Instance sizing | **Pricing Tier** | Defaulted to *Standard*.
+SQL Server on Azure VM sizing | **VM series** | You can specify the Azure VM series you want to consider for *SQL Server on Azure VM* sizing. Based on the configuration and performance requirements of your SQL Server or SQL Server instance, the assessment will recommend a VM size from the selected list of VM series. <br/>You can edit settings as needed. For example, if you don't want to include D-series VM, you can exclude D-series from this list.<br/> As Azure SQL assessments are intended to give the best performance for your SQL workloads, the VM series list only has VMs that are optimized for running your SQL Server on Azure Virtual Machines (VMs). [Learn more](https://docs.microsoft.com/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist?view=azuresql#vm-size&preserve-view=true).
+SQL Server on Azure VM sizing | **Storage Type** | Defaulted to *Recommended*, which means the assessment will recommend the best suited Azure Managed Disk based on the chosen environment type, on-premises disk size, IOPS and throughput.
+Azure SQL Database sizing | **Service Tier** | You can choose the most appropriate service tier option to accommodate your business needs for migration to Azure SQL Database:<br/><br/>Select **Recommended** if you want Azure Migrate to recommend the best suited service tier for your servers. This can be General purpose or Business critical.<br/><br/>Select **General Purpose** if you want an Azure SQL configuration designed for budget-oriented workloads.<br/><br/>Select **Business Critical** if you want an Azure SQL configuration designed for low-latency workloads with high resiliency to failures and fast failovers.
+Azure SQL Database sizing | **Instance type** | Defaulted to *Single database*.
+Azure SQL Database sizing | **Purchase model** | Defaulted to *vCore*.
+Azure SQL Database sizing | **Compute tier** | Defaulted to *Provisioned*.
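The sizing-related properties above combine in a simple way: the assessment takes the chosen percentile of the collected utilization samples and multiplies it by the comfort factor before matching a target size. A minimal sketch of that arithmetic, mirroring the worked comfort-factor example elsewhere in this article (10 cores at 20% utilization with a comfort factor of 2.0 sizes to 4 cores). The function name and the nearest-rank percentile method are illustrative, not Azure Migrate's actual implementation:

```python
import math

def effective_cores(cpu_utilization_samples, allocated_cores,
                    percentile=95, comfort_factor=1.0):
    """Illustrative performance-based sizing arithmetic: take the chosen
    percentile of the utilization samples (percent), scale the allocated
    cores by it, then apply the comfort factor."""
    samples = sorted(cpu_utilization_samples)
    # Nearest-rank percentile of the collected utilization samples.
    rank = max(0, math.ceil(percentile / 100 * len(samples)) - 1)
    utilization = samples[rank] / 100
    return math.ceil(allocated_cores * utilization * comfort_factor)

# A 10-core instance at 20% utilization sizes to 2 cores; a comfort
# factor of 2.0 doubles that to 4 cores.
print(effective_cores([20] * 10, allocated_cores=10))                      # 2
print(effective_cores([20] * 10, allocated_cores=10, comfort_factor=2.0))  # 4
```

The same percentile-then-comfort-factor pattern applies to memory and IOPS metrics before the assessment matches a service tier.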
[Review the best practices](best-practices-assessment.md) for creating an assessment with Azure Migrate.

## Calculate readiness

> [!NOTE]
-The assessment only includes databases that are in online status. In case the database is in any other status, the assessment ignores the readiness, sizing and cost calculation for such databases. In case you wish you assess such databases, please change the status of the database and recalculate the assessment in some time.
+> The assessment only includes databases that are in online status. If a database is in any other status, the assessment ignores its readiness, sizing, and cost calculation. If you wish to assess such databases, change the status of the database and recalculate the assessment after some time.
### Azure SQL readiness
-Azure SQL readiness for SQL instances and databases is based on a feature compatibility check with Azure SQL Database and Azure SQL Managed Instance:
+Readiness checks for different migration strategies:
+
+#### Recommended deployment, Instances to SQL Server on Azure VM, Instances to Azure SQL MI, Database to Azure SQL DB:
+Azure SQL readiness for SQL instances and databases is based on a feature compatibility check with SQL Server on Azure VM, Azure SQL Database and Azure SQL Managed Instance:
1. The Azure SQL assessment considers the SQL Server instance features that are currently used by the source SQL Server workloads (SQL Agent jobs, linked servers, etc.) and the user databases schemas (tables, views, triggers, stored procedures etc.) to identify compatibility issues.
-1. If there are no compatibility issues found, the readiness is marked as **Ready** for the target deployment type (Azure SQL Database or Azure SQL Managed Instance)
-1. If there are non-critical compatibility issues, such as degraded or unsupported features that do not block the migration to a specific target deployment type, the readiness is marked as **Ready** (hyperlinked and blue information icon) with **warning** details and recommended remediation guidance.
-1. If there are any compatibility issues that may block the migration to a specific target deployment type, the readiness is marked as **Not ready** with **issue** details and recommended remediation guidance.
- - If there is even one database in a SQL instance which is not ready for a particular target deployment type, the instance is marked as **Not ready** for that deployment type.
+1. If there are no compatibility issues found, the readiness is marked as **Ready** for the target deployment type (SQL Server on Azure VM, Azure SQL Database, or Azure SQL Managed Instance).
+1. If there are non-critical compatibility issues, such as deprecated or unsupported features that do not block the migration to a specific target deployment type, the readiness is marked as **Ready** (hyperlinked) with **warning** details and recommended remediation guidance.
+1. If there are any compatibility issues that may block the migration to a specific target deployment type, the readiness is marked as **Ready with conditions** with **issue** details and recommended remediation guidance.
+ - In the Recommended deployment, Instances to Azure SQL MI, and Instances to SQL Server on Azure VM readiness reports, if there is even one database in an SQL instance, which is not ready for a particular target deployment type, the instance is marked as **Ready with conditions** for that deployment type.
+1. **Not ready**: The assessment could not find a SQL Server on Azure VM/Azure SQL MI/Azure SQL DB configuration meeting the desired configuration and performance characteristics. You can review the recommendation to make the instance/server ready for the desired target deployment type.
1. If the discovery is still in progress or there are any discovery issues for a SQL instance or database, the readiness is marked as **Unknown** as the assessment could not compute the readiness for that SQL instance.
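The readiness rules above amount to a small classification that rolls up from databases to the instance. A hypothetical sketch of that logic (the status strings follow the report, but the function shapes and issue lists are invented for illustration):

```python
def database_readiness(blocking_issues, warning_issues, discovery_complete=True):
    """Classify one database for a target deployment type, per the rules above."""
    if not discovery_complete:
        return "Unknown"                # discovery in progress or failed
    if blocking_issues:
        return "Ready with conditions"  # issues that may block migration
    # No issues, or only non-critical warnings: still Ready
    # (warnings are surfaced alongside the Ready status).
    return "Ready"

def instance_readiness(db_states):
    """Even one database that isn't fully ready marks the whole instance
    Ready with conditions for that deployment type."""
    if any(s == "Unknown" for s in db_states):
        return "Unknown"
    if any(s == "Ready with conditions" for s in db_states):
        return "Ready with conditions"
    return "Ready"

print(instance_readiness(["Ready", "Ready with conditions", "Ready"]))
```

Note that the separate **Not ready** status is a sizing outcome (no suitable configuration found), not a compatibility outcome, so it is not modeled here.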
-### Recommended deployment type
-
-If you select the target deployment type as **Recommended** in the Azure SQL assessment properties, Azure Migrate recommends an Azure SQL deployment type that is compatible with your SQL instance. Migrating to a Microsoft-recommended target reduces your overall migration effort.
-
-#### Recommended deployment type based on Azure SQL readiness
+> [!NOTE]
+> In the recommended deployment strategy, migrating instances to SQL Server on Azure VM is the recommended strategy for migrating SQL Server instances. However, when SQL Server credentials are not available, the Azure SQL assessment provides right-sized lift-and-shift, that is, "Server to SQL Server on Azure VM" recommendations.
- **Azure SQL DB readiness** | **Azure SQL MI readiness** | **Recommended deployment type** | **Azure SQL configuration and cost estimates calculated?**
- | | | |
- Ready | Ready | Azure SQL DB or <br/>Azure SQL MI | Yes
- Ready | Not ready or<br/> Unknown | Azure SQL DB | Yes
- Not ready or<br/>Unknown | Ready | Azure SQL MI | Yes
- Not ready | Not ready | Potentially ready for Azure VM | No
- Not ready or<br/>Unknown | Not ready or<br/>Unknown | Unknown | No
+#### All servers to SQL Server on Azure VM:
+Refer to readiness [here](concepts-assessment-calculation.md#calculate-readiness).
-> [!NOTE]
-> If the recommended deployment type is selected as **Recommended** in assessment properties and if the source SQL Server is good fit for both Azure SQL DB single database and Azure SQL Managed Instance, the assessment recommends a specific option that optimizes your cost and fits within the size and performance boundaries.
-#### Potentially ready for Azure VM
+### Recommended deployment type
-If the SQL instance is not ready for Azure SQL Database and Azure SQL Managed Instance, the Recommended deployment type is marked as *Potentially ready for Azure VM*.
-- The user is recommended to create an assessment in Azure Migrate with assessment type as "Azure VM" to determine if the server on which the instance is running is ready to migrate to an Azure VM instead. Note that:
- - Azure VM assessments in Azure Migrate are currently lift and shift focused and will not consider the specific performance metrics for running SQL instances and databases on the Azure virtual machine.
- - When you run an Azure VM assessment on a server, the recommended size and cost estimates will be for all instances running on the server and can be migrated to an Azure VM using the Server Migration tool. Before you migrate, [review the performance guidelines](/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist) for SQL Server on Azure virtual machines.
+For the recommended deployment migration strategy, the assessment recommends the Azure SQL deployment type that is most compatible with your SQL instance and most cost-effective. Migrating to a Microsoft-recommended target reduces your overall migration effort. If your instance is ready for SQL Server on Azure VM, Azure SQL Managed Instance, and Azure SQL Database, the target deployment type with the fewest migration readiness issues and the lowest cost is recommended.
+If you select the target deployment type as **Recommended** in the Azure SQL assessment properties, Azure Migrate recommends an Azure SQL deployment type that is compatible with your SQL instance. Migrating to a Microsoft-recommended target reduces your overall migration effort.
+> [!NOTE]
+> In the recommended deployment strategy, if the source SQL Server is good fit for all three deployment targets- SQL Server on Azure VM, Azure SQL Managed Instance and Azure SQL Database, the assessment recommends a specific option that optimizes your cost and fits within the size and performance boundaries.
## Calculate sizing
-### Azure SQL configuration
+### Instances to Azure SQL MI and Databases to Azure SQL DB configuration
After the assessment determines the readiness and the recommended Azure SQL deployment type, it computes a specific service tier and Azure SQL configuration (SKU size) that can meet or exceed the on-premises SQL instance performance:

1. During the discovery process, Azure Migrate collects SQL instance configuration and performance data that includes:
    - Database size is calculated by adding all the data and log files.
1. The assessment aggregates all the configuration and performance data and tries to find the best match across various Azure SQL service tiers and configurations, and picks a configuration that can match or exceed the SQL instance performance requirements, optimizing the cost.
+### Instances to SQL Server on Azure VM configuration
+
+The *Instances to SQL Server on Azure VM* assessment report covers the ideal approach for migrating SQL Server instances and databases to SQL Server on Azure VM, adhering to best practices. [Learn more](https://docs.microsoft.com/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist?view=azuresql#vm-size&preserve-view=true).
+
+#### Storage sizing
+For storage sizing, the assessment maps each of the instance disk to an Azure disk. Sizing works as follows:
+
+- Assessment adds the read and write IOPS of a disk to get the total IOPS required. Similarly, it adds the read and write throughput values to get the total throughput of each disk. The disk size needed for each of the disks is the size of SQL Data and SQL Log drives.
+
+- The assessment recommends creating a storage disk pool for all SQL Log and SQL Data drives. For temp drives, the assessment recommends storing the files in the local drive.
+
+
+- If the assessment can't find a disk for the required size, IOPS, and throughput, it marks the instance as unsuitable for migrating to SQL Server on Azure VM.
+- If the assessment finds a set of suitable disks, it selects the disks that support the location specified in the assessment settings.
+- If the environment type is *Production*, the assessment tries to find Premium disks to map each of the disks, else it tries to find a suitable disk, which could either be Premium or Standard SSD disk.
+ - If there are multiple eligible disks, assessment selects the disk with the lowest cost.
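The disk-mapping steps above can be sketched as a filter-then-cheapest selection. In this hypothetical sketch, the candidate-disk table (tiers, sizes, IOPS, throughput, and monthly prices) is a placeholder, not current Azure managed-disk specs or pricing:

```python
# Hypothetical candidates: (name, tier, size_gb, iops, throughput_mbps, monthly_cost)
AZURE_DISKS = [
    ("P10", "Premium", 128, 500, 100, 17.92),
    ("P15", "Premium", 256, 1100, 125, 34.56),
    ("E10", "StandardSSD", 128, 500, 60, 9.60),
    ("E15", "StandardSSD", 256, 500, 60, 19.20),
]

def pick_disk(size_gb, read_iops, write_iops, read_mbps, write_mbps,
              environment="Production"):
    """Map one on-premises disk to an Azure managed disk, per the rules above:
    total IOPS = read + write, total throughput = read + write, Premium only
    for Production, and the cheapest eligible disk wins."""
    iops = read_iops + write_iops
    mbps = read_mbps + write_mbps
    tiers = {"Premium"} if environment == "Production" else {"Premium", "StandardSSD"}
    eligible = [d for d in AZURE_DISKS
                if d[1] in tiers and d[2] >= size_gb and d[3] >= iops and d[4] >= mbps]
    if not eligible:
        return None  # instance marked unsuitable for SQL Server on Azure VM
    return min(eligible, key=lambda d: d[5])

print(pick_disk(100, 300, 150, 30, 20))                          # Premium P10
print(pick_disk(100, 300, 150, 30, 20, environment="Dev/Test"))  # Standard SSD E10
```

The same requirements with a *Dev/Test* environment admit Standard SSD candidates, so a cheaper disk can be selected.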
+
+#### Compute sizing
+After it calculates storage requirements, the assessment considers CPU and RAM requirements of the instance to find a suitable VM size in Azure.
+- The assessment looks at the effective utilized cores and RAM to find a suitable Azure VM size. *Effective utilized RAM or memory* for an instance is calculated by aggregating the buffer cache (buffer pool size in MB) for all the databases running in an instance.
+- If no suitable size is found, the server is marked as unsuitable for Azure.
+- If a suitable size is found, Azure Migrate applies the storage calculations. It then applies location and pricing-tier settings for the final VM size recommendation.
+- If there are multiple eligible Azure VM sizes, the one with the lowest cost is recommended.
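The compute-sizing steps above follow the same pattern: aggregate the effective requirements, filter the candidate list, and pick the cheapest match. A sketch, where the VM candidate list and monthly prices are placeholders rather than actual Azure sizes or pricing:

```python
# Hypothetical SQL-optimized VM candidates: (name, vcores, ram_gb, monthly_cost)
VM_SIZES = [
    ("E4ds_v5", 4, 32, 300.0),
    ("E8ds_v5", 8, 64, 600.0),
    ("E16ds_v5", 16, 128, 1200.0),
]

def pick_vm(effective_cores, buffer_pool_mb_per_db):
    """Sketch of the compute-sizing step described above: effective memory is
    the aggregated buffer cache of all databases on the instance, and the
    cheapest VM meeting both cores and RAM is recommended."""
    effective_ram_gb = sum(buffer_pool_mb_per_db) / 1024
    eligible = [vm for vm in VM_SIZES
                if vm[1] >= effective_cores and vm[2] >= effective_ram_gb]
    if not eligible:
        return None  # server marked unsuitable for Azure
    return min(eligible, key=lambda vm: vm[3])

# 6 effective cores, with two databases using 20 GB + 12 GB of buffer cache:
print(pick_vm(6, [20480, 12288]))  # smallest eligible size, E8ds_v5
```

Location and pricing-tier settings are then applied to the selected size for the final recommendation, as described above.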
+> [!NOTE]
+>As Azure SQL assessments are intended to give the best performance for your SQL workloads, the VM series list only has VMs that are optimized for running your SQL Server on Azure Virtual Machines (VMs). [Learn more](https://docs.microsoft.com/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist?view=azuresql#vm-size&preserve-view=true).
+### Servers to SQL Server on Azure VM configuration
+For the *All servers to SQL Server on Azure VM* migration strategy, refer to compute and storage sizing [here](concepts-assessment-calculation.md#calculate-sizing-performance-based).
+ ### Confidence ratings

Each Azure SQL assessment is associated with a confidence rating. The rating ranges from one (lowest) to five (highest) stars. The confidence rating helps you estimate the reliability of the size recommendations Azure Migrate provides.

- The confidence rating is assigned to an assessment. The rating is based on the availability of data points that are needed to compute the assessment.
This table shows the assessment confidence ratings, which depend on the percenta
#### Low confidence ratings

Here are a few reasons why an assessment could get a low confidence rating:

- You didn't profile your environment for the duration for which you're creating the assessment. For example, if you create the assessment with performance duration set to one day, you must wait at least a day after you start discovery for all the data points to get collected.
-- Assessment is not able to collect the performance data for some or all the servers in the assessment period. For a high confidence rating, please ensure that:
- - Servers are powered on for the duration of the assessment
- - Outbound connections on ports 443 are allowed
- - If Azure Migrate connection status of the SQL agent in Azure Migrate is 'Connected' and check the last heartbeat
- - If Azure Migrate connection status for all SQL instances is "Connected" in the discovered SQL instance blade
+- The assessment is not able to collect the performance data for some or all the servers in the assessment period. For a high confidence rating, ensure that:
+ - Servers are powered on for the duration of the assessment.
+ - Outbound connections on ports 443 are allowed.
+ - The Azure Migrate connection status of the SQL agent is "Connected"; check the last heartbeat.
+ - The Azure Migrate connection status for all SQL instances is "Connected" in the discovered SQL instance blade.
- Please 'Recalculate' the assessment to reflect the latest changes in confidence rating.
-- Some databases or instances were created during the time for which the assessment was calculated. For example, assume you created an assessment for the performance history of the last month, but some databases or instances were created only a week ago. In this case, the performance data for the new servers will not be available for the entire duration and the confidence rating would be low.
+ **Recalculate** the assessment to reflect the latest changes in confidence rating.
+- Some databases or instances were created during the time for which the assessment was calculated. For example, you created an assessment for the performance history of the last month, but some databases or instances were created only a week ago. In this case, the performance data for the new servers will not be available for the entire duration and the confidence rating would be low.
> [!NOTE]
> As Azure SQL assessments are performance-based assessments, if the confidence rating of any assessment is less than five stars, we recommend that you wait at least a day for the appliance to profile the environment and then recalculate the assessment. Otherwise, performance-based sizing might be unreliable.
After sizing recommendations are complete, the Azure SQL assessment calculates the compute and storage costs for the recommended Azure SQL configurations using an internal pricing API. It aggregates the compute and storage cost across all instances to calculate the total monthly compute cost.

### Compute cost

For calculating the compute cost for an Azure SQL configuration, the assessment considers the following properties:
- - Azure Hybrid Benefit for SQL licenses
+ - Azure Hybrid Benefit for SQL and Windows licenses
+ - Environment type
 - Reserved capacity
 - Azure target location
 - Currency
 - Discount (%)
- Backup storage cost is not included in the assessment.
- **Azure SQL Database**
- - A minimum of 5GB storage cost is added in the cost estimate and additional storage cost is added for storage in 1GB increments. [Learn More](https://azure.microsoft.com/pricing/details/sql-database/single/)
+ - A minimum of 5GB storage cost is added in the cost estimate and additional storage cost is added for storage in 1GB increments. [Learn More](https://azure.microsoft.com/pricing/details/sql-database/single/).
- **Azure SQL Managed Instance**
- - There is no storage cost added for the first 32 GB/instance/month storage and additional storage cost is added for storage in 32GB increments. [Learn More](https://azure.microsoft.com/pricing/details/azure-sql/sql-managed-instance/single/)
+ - There is no storage cost added for the first 32 GB/instance/month storage and additional storage cost is added for storage in 32GB increments. [Learn More](https://azure.microsoft.com/pricing/details/azure-sql/sql-managed-instance/single/).
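Putting the uptime and pricing rules together, the monthly compute estimate described in the properties table is the uptime hours multiplied by the hourly price of the recommended SKU, with a reserved-capacity selection fixing uptime at 744 hours. A sketch of that arithmetic, with placeholder hourly prices:

```python
def monthly_compute_cost(hourly_price, days_per_month=31, hours_per_day=24,
                         reserved=False):
    """Monthly cost estimate per the rules above: VM-uptime hours multiplied
    by the hourly price of the recommended SKU; selecting reserved capacity
    fixes uptime at 744 hours."""
    hours = 744 if reserved else days_per_month * hours_per_day
    return hours * hourly_price

# Pay-as-you-go at a hypothetical $0.50/hour, default uptime (31 days x 24 h = 744 h):
print(monthly_compute_cost(0.50))  # 372.0
# Reduced uptime (20 days x 12 h) lowers the SQL Server on Azure VM estimate:
print(monthly_compute_cost(0.50, days_per_month=20, hours_per_day=12))  # 120.0
```

Note that the default uptime of 31 days at 24 hours per day equals the 744 reserved hours, which is why the two estimation paths agree for always-on servers.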
## Next steps

- [Review](best-practices-assessment.md) best practices for creating assessments.
migrate How To Create Azure Sql Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-create-azure-sql-assessment.md
## Run an assessment

Run an assessment as follows:

1. On the **Overview** page > **Windows, Linux and SQL Server**, click **Assess and migrate servers**.
- :::image type="content" source="./media/tutorial-assess-sql/assess-migrate.png" alt-text="Overview page for Azure Migrate":::
+
+ :::image type="content" source="./media/tutorial-assess-sql/assess-migrate-inline.png" alt-text="Screenshot of Overview page for Azure Migrate." lightbox="./media/tutorial-assess-sql/assess-migrate-expanded.png":::
+ 2. On **Azure Migrate: Discovery and assessment**, click **Assess** and choose the assessment type as **Azure SQL**.
- :::image type="content" source="./media/tutorial-assess-sql/assess.png" alt-text="Dropdown to choose assessment type as Azure SQL":::
-3. In **Assess servers** > you will be able to see the assessment type pre-selected as **Azure SQL** and the discovery source defaulted to **Servers discovered from Azure Migrate appliance**.
+
+ :::image type="content" source="./media/tutorial-assess-sql/assess-inline.png" alt-text="Screenshot of Dropdown to choose assessment type as Azure SQL." lightbox="./media/tutorial-assess-sql/assess-expanded.png":::
+
+3. In **Assess servers**, you will be able to see the assessment type pre-selected as **Azure SQL** and the discovery source defaulted to **Servers discovered from Azure Migrate appliance**.
4. Click **Edit** to review the assessment properties.
- :::image type="content" source="./media/tutorial-assess-sql/assess-servers-sql.png" alt-text="Edit button from where assessment properties can be customized":::
+
+ :::image type="content" source="./media/tutorial-assess-sql/assess-servers-sql-inline.png" alt-text="Screenshot of Edit button from where assessment settings can be customized." lightbox="./media/tutorial-assess-sql/assess-servers-sql-expanded.png":::
+ 5. In Assessment properties > **Target Properties**:
    - In **Target location**, specify the Azure region to which you want to migrate.
    - Azure SQL configuration and cost recommendations are based on the location that you specify.
    - If you select a reserved capacity option, you can't specify "Discount (%)".
6. In Assessment properties > **Assessment criteria**:
- - The Sizing criteria is defaulted to **Performance-based** which means Azure migrate will collect performance metrics pertaining to SQL instances and the databases managed by it to recommend an optimal-sized Azure SQL Database and/or Azure SQL Managed Instance configuration. You can specify:
+ - The Sizing criteria is defaulted to **Performance-based**, which means Azure Migrate will collect performance metrics pertaining to SQL instances and the databases managed by them to recommend an optimal-sized Azure SQL Database and/or SQL Managed Instance configuration. You can specify:
    - **Performance history** to indicate the data duration on which you want to base the assessment. (Default is one day.)
    - **Percentile utilization** to indicate the percentile value you want to use for the performance sample. (Default is 95th percentile.)
    - In **Comfort factor**, indicate the buffer you want to use during assessment. This accounts for issues like seasonal usage, short performance history, and likely increases in future usage. For example, if you use a comfort factor of two:
    - In **Offer/Licensing program**, specify the Azure offer in which you're enrolled. Currently you can only choose from Pay-as-you-go and Pay-as-you-go Dev/Test.
    - You can get an additional discount by applying reserved capacity and Azure Hybrid Benefit on top of the Pay-as-you-go offer.
    - You can apply Azure Hybrid Benefit on top of Pay-as-you-go Dev/Test. The assessment currently does not support applying Reserved Capacity on top of the Pay-as-you-go Dev/Test offer.
- - In **Service Tier**, choose the most appropriate service tier option to accommodate your business needs for migration to Azure SQL Database and/or Azure SQL Managed Instance:
+ - In **Service Tier**, choose the most appropriate service tier option to accommodate your business needs for migration to Azure SQL Database and/or SQL Managed Instance:
- Select **Recommended** if you want Azure Migrate to recommend the best suited service tier for your servers. This can be General purpose or Business critical. Learn More - Select **General Purpose** if you want an Azure SQL configuration designed for budget-oriented workloads. - Select **Business Critical** if you want an Azure SQL configuration designed for low-latency workloads with high resiliency to failures and fast failovers.
    - In **Currency**, select the billing currency for your account.
    - In **Azure Hybrid Benefit**, specify whether you already have a SQL Server license. If you do and your licenses are covered with active Software Assurance of SQL Server Subscriptions, you can apply for the Azure Hybrid Benefit when you bring licenses to Azure.
    - Click **Save** if you make changes.
- :::image type="content" source="./media/tutorial-assess-sql/view-all.png" alt-text="Save button on assessment properties":::
+
+ :::image type="content" source="./media/tutorial-assess-sql/view-all-inline.png" alt-text="Screenshot to save the assessment properties." lightbox="./media/tutorial-assess-sql/view-all-expanded.png":::
+
8. In **Assess Servers** > click Next.
9. In **Select servers to assess** > **Assessment name** > specify a name for the assessment.
10. In **Select or create a group** > select **Create New** and specify a group name.
- :::image type="content" source="./media/tutorial-assess-sql/assessment-add-servers.png" alt-text="Location of New group button":::
+
+ :::image type="content" source="./media/tutorial-assess-sql/assessment-add-servers-inline.png" alt-text="Screenshot of Location of New group button." lightbox="./media/tutorial-assess-sql/assessment-add-servers-expanded.png":::
+
11. Select the appliance, and select the servers you want to add to the group. Then click Next.
12. In **Review + create assessment**, review the assessment details, and click Create Assessment to create the group and run the assessment.

    :::image type="content" source="./media/tutorial-assess-sql/assessment-create.png" alt-text="Location of Review and create assessment button.":::
Run an assessment as follows:
1. **Windows, Linux and SQL Server** > **Azure Migrate: Discovery and assessment** > Click on the number next to Azure SQL assessment.
2. Click on the assessment name which you wish to view. As an example (estimations and costs for example only):
- :::image type="content" source="./media/tutorial-assess-sql/assessment-sql-summary.png" alt-text="SQL assessment overview":::
+
+ :::image type="content" source="./media/tutorial-assess-sql/assessment-sql-summary-inline.png" alt-text="Screenshot of Overview of SQL assessment." lightbox="./media/tutorial-assess-sql/assessment-sql-summary-expanded.png":::
+ 3. Review the assessment summary. You can also edit the assessment properties or recalculate the assessment.

#### Discovered items
This indicates the distribution of assessed SQL instances:
**Target deployment type (in assessment properties)** | **Readiness**
--- | ---
-**Recommended** | Ready for Azure SQL Database, Ready for Azure SQL Managed Instance, Potentially ready for Azure VM, Readiness unknown (In case the discovery is in progress or there are some discovery issues to be fixed)
-**Azure SQL DB** or **Azure SQL MI** | Ready for Azure SQL Database or Azure SQL Managed Instance, Not ready for Azure SQL Database or Azure SQL Managed Instance, Readiness unknown (In case the discovery is in progress or there are some discovery issues to be fixed)
+**Recommended** | Ready for Azure SQL Database, Ready for SQL Managed Instance, Potentially ready for Azure VM, Readiness unknown (In case the discovery is in progress or there are some discovery issues to be fixed)
+**Azure SQL DB** or **Azure SQL MI** | Ready for Azure SQL Database or SQL Managed Instance, Not ready for Azure SQL Database or SQL Managed Instance, Readiness unknown (In case the discovery is in progress or there are some discovery issues to be fixed)
-You can drill-down to understand details around migration issues/warnings that you can remediate before migration to Azure SQL. [Learn More](concepts-azure-sql-assessment-calculation.md)
+You can drill down to understand details around migration issues/warnings that you can remediate before migration to Azure SQL. [Learn More](concepts-azure-sql-assessment-calculation.md)
You can also review the recommended Azure SQL configurations for migrating to Azure SQL databases and/or Managed Instances.

#### Azure SQL Database and Managed Instance cost details
-The monthly cost estimate includes compute and storage costs for Azure SQL configurations corresponding to the recommended Azure SQL Database and/or Azure SQL Managed Instance deployment type. [Learn More](concepts-azure-sql-assessment-calculation.md#calculate-monthly-costs)
+The monthly cost estimate includes compute and storage costs for Azure SQL configurations corresponding to the recommended Azure SQL Database and/or SQL Managed Instance deployment type. [Learn More](concepts-azure-sql-assessment-calculation.md#calculate-monthly-costs)
### Review readiness

1. Click **Azure SQL readiness**.
- :::image type="content" source="./media/tutorial-assess-sql/assessment-sql-readiness.png" alt-text="Azure SQL readiness details":::
+
+ :::image type="content" source="./media/tutorial-assess-sql/assessment-sql-readiness-inline.png" alt-text="Screenshot with Details of Azure SQL readiness" lightbox="./media/tutorial-assess-sql/assessment-sql-readiness-expanded.png":::
+ 1. In Azure SQL readiness, review the **Azure SQL DB readiness** and **Azure SQL MI readiness** for the assessed SQL instances:
    - **Ready**: The instance is ready to be migrated to Azure SQL DB/MI without any migration issues or warnings.
    - **Ready** (hyperlinked and blue information icon): The instance is ready to be migrated to Azure SQL DB/MI without any migration issues but has some migration warnings that you need to review. You can click on the hyperlink to review the migration warnings and the recommended remediation guidance:
The monthly cost estimate includes compute and storage costs for Azure SQL confi
Ready | Ready | Azure SQL DB or Azure SQL MI [Learn more](concepts-azure-sql-assessment-calculation.md#recommended-deployment-type) | Yes
Ready | Not ready or Unknown | Azure SQL DB | Yes
Not ready or Unknown | Ready | Azure SQL MI | Yes
- Not ready | Not ready | Potentially ready for Azure VM [Learn more](concepts-azure-sql-assessment-calculation.md#potentially-ready-for-azure-vm) | No
+ Not ready | Not ready | Potentially ready for Azure VM [Learn more](concepts-azure-sql-assessment-calculation.md#calculate-readiness) | No
Not ready or Unknown | Not ready or Unknown | Unknown | No

- **Target deployment type** (as selected in assessment properties): **Azure SQL DB**
The monthly cost estimate includes compute and storage costs for Azure SQL confi
Not ready | No
Unknown | No
-4. Click on the instance name drill down to see the number of user databases, instance details including instance properties, compute (scoped to instance) and source database storage details.
+4. Click on the instance name and drill down to see the number of user databases, instance details including instance properties, compute (scoped to instance) and source database storage details.
5. Click on the number of user databases to review the list of databases and their details. As an example (estimations and costs for example only):

   :::image type="content" source="./media/tutorial-assess-sql/assessment-db.png" alt-text="SQL instance detail":::
5. Click on review details in the Migration issues column to review the migration issues and warnings for a particular target deployment type.
The assessment summary shows the estimated monthly compute and storage costs for
- Cost estimates are based on the recommended Azure SQL configuration for an instance.
- Estimated monthly costs for compute and storage are shown. As an example (estimations and costs for example only):
- :::image type="content" source="./media/tutorial-assess-sql/assessment-sql-cost.png" alt-text="Cost details":::
+ :::image type="content" source="./media/tutorial-assess-sql/assessment-sql-cost-inline.png" alt-text="Screenshot of cost details." lightbox="./media/tutorial-assess-sql/assessment-sql-cost-expanded.png":::
1. You can drill down at an instance level to see Azure SQL configuration and cost estimates at an instance level.
1. You can also drill down to the database list to review the Azure SQL configuration and cost estimates per database when an Azure SQL Database configuration is recommended.
migrate Migrate Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-services-overview.md
Azure Migrate provides a centralized hub to assess and migrate on-premises serve
- **Unified migration platform**: A single portal to start, run, and track your migration to Azure.
- **Range of tools**: A range of tools for assessment and migration. Azure Migrate tools include Azure Migrate: Discovery and assessment and Azure Migrate: Server Migration. Azure Migrate also integrates with other Azure services and tools, and with independent software vendor (ISV) offerings.
- **Assessment and migration**: In the Azure Migrate hub, you can assess and migrate:
- - **Servers, databases, and web apps**: Assess on-premises servers including web apps and SQL Server instances and migrate them to Azure virtual machines or Azure VMware Solution (AVS) (Preview).
- - **Databases**: Assess on-premises databases and migrate them to Azure SQL Database or to SQL Managed Instance.
+ - **Servers, databases and web apps**: Assess on-premises servers including web apps and SQL Server instances and migrate them to Azure virtual machines or Azure VMware Solution (AVS) (Preview).
+ - **Databases**: Assess on-premises SQL Server instances and databases to migrate them to an SQL Server on an Azure VM or an Azure SQL Managed Instance or to an Azure SQL Database.
- **Web applications**: Assess on-premises web applications and migrate them to Azure App Service.
- **Virtual desktops**: Assess your on-premises virtual desktop infrastructure (VDI) and migrate it to Azure Virtual Desktop.
- **Data**: Migrate large amounts of data to Azure quickly and cost-effectively using Azure Data Box products.
In the Azure Migrate hub, you select the tool you want to use for assessment or
## Movere
-Movere is a software as a service (SaaS) platform. It increases business intelligence by accurately presenting entire IT environments within a single day. Organizations and enterprises grow, change, and digitally optimize. As they do so, Movere provides them with the needed confidence to see and control their environments, whatever the platform, application, or geography.
+Movere is a Software as a Service (SaaS) platform. It increases business intelligence by accurately presenting entire IT environments within a single day. Organizations and enterprises grow, change, and digitally optimize. As they do so, Movere provides them with the needed confidence to see and control their environments, whatever the platform, application, or geography.
Microsoft [acquired](https://azure.microsoft.com/blog/microsoft-acquires-movere-to-help-customers-unlock-cloud-innovation-with-seamless-migration-tools/) Movere, and it's no longer sold as a standalone offer. Movere is available through Microsoft Solution Assessment and Microsoft Cloud Economics Program. [Learn more](https://www.movere.io) about Movere.
migrate Troubleshoot Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-assessment.md
Your assessment was created with an Azure region that has been deprecated and he
When you select **Reserved Instances**, the **Discount (%)** and **VM uptime** properties aren't applicable. As your assessment was created with an invalid combination of these properties, the **Edit** and **Recalculate** buttons are disabled. Create a new assessment. [Learn more](./concepts-assessment-calculation.md#whats-an-assessment).
+## Why are some of my assessments marked as "to be upgraded to latest assessment version"?
+
+Recalculate your assessment to view the upgraded Azure SQL assessment experience to identify the ideal migration target for your SQL deployments across Azure SQL Managed Instances, SQL Server on Azure VM, and Azure SQL DB:
+ - We recommend migrating instances to *SQL Server on Azure VM* as per the Azure best practices.
+ - *Right sized Lift and Shift* - Server to *SQL Server on Azure VM*. We recommend this when SQL Server credentials are not available.
+ - Enhanced user-experience that covers readiness and cost estimates for multiple migration targets for SQL deployments in one assessment.
+
+We recommend that you export your existing assessment before recalculating.
+
## I don't see performance data for some network adapters on my physical servers

This issue can happen if the physical server has Hyper-V virtualization enabled. On these servers, because of a product gap, Azure Migrate currently discovers both the physical and virtual network adapters. The network throughput is captured only on the virtual network adapters discovered.
migrate Troubleshoot Network Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-network-connectivity.md
To validate the private link connection, perform a DNS resolution of the Azure M
An illustrative example for DNS resolution of the storage account private link FQDN.

-- Enter ```nslookup_<storage-account-name>_.blob.core.windows.net.``` Replace ```<storage-account-name>``` with the name of the storage account used for Azure Migrate.
+- Enter ```nslookup <storage-account-name>.blob.core.windows.net```. Replace ```<storage-account-name>``` with the name of the storage account used for Azure Migrate.
You'll receive a message like this:
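If private link DNS resolution is working, the FQDN resolves to a private endpoint IP rather than a public address. A minimal way to sanity-check the address returned by `nslookup` (a sketch, not part of the official guidance; the sample addresses are hypothetical) uses Python's `ipaddress` module:

```python
import ipaddress

def is_private_endpoint(resolved_ip: str) -> bool:
    """True if the address returned by nslookup falls in a private range,
    which suggests the private link DNS resolution is working."""
    return ipaddress.ip_address(resolved_ip).is_private

# Hypothetical results: a private endpoint address vs. a public Azure address.
print(is_private_endpoint("10.1.0.5"))
print(is_private_endpoint("20.60.40.10"))
```

A public address here usually means the query bypassed the private DNS zone, so the private endpoint configuration should be revisited.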
migrate Tutorial Assess Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-sql.md
Title: Tutorial to assess SQL instances for migration to Azure SQL Managed Instance and Azure SQL Database
+ Title: Tutorial to assess SQL instances for migration to SQL Server on Azure VM, Azure SQL Managed Instance and Azure SQL Database
description: Learn how to create assessment for Azure SQL in Azure Migrate
Previously updated : 02/07/2021
Last updated : 05/05/2022
Last updated 02/07/2021
# Tutorial: Assess SQL instances for migration to Azure SQL

As part of your migration journey to Azure, you assess your on-premises workloads to measure cloud readiness, identify risks, and estimate costs and complexity.
-This article shows you how to assess discovered SQL Server instances databases in preparation for migration to Azure SQL, using the Azure Migrate: Discovery and assessment tool.
+This article shows you how to assess discovered SQL Server instances and databases in preparation for migration to Azure SQL, using the Azure Migrate: Discovery and assessment tool.
In this tutorial, you learn how to:

> [!div class="checklist"]
-> * Run an assessment based on server configuration and performance data.
-> * Review an Azure SQL assessment
+> * Run an assessment based on configuration and performance data.
+> * Review an Azure SQL assessment.
> [!NOTE]
> Tutorials show the quickest path for trying out a scenario, and use default options where possible.

> [!NOTE]
-> If SQL Servers are running on non-VMware
-platforms. [Assess the readiness of a SQL
-Server data estate migrating to Azure SQL
-Database using the Data Migration Assistant](/sql/dma/dma-assess-sql-data-estate-to-sqldb).
+> If SQL Servers are running on non-VMware platforms, [assess the readiness of a SQL Server data estate migrating to Azure SQL Database using the Data Migration Assistant](/sql/dma/dma-assess-sql-data-estate-to-sqldb).
## Prerequisites

- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/pricing/free-trial/) before you begin.
-- Before you follow this tutorial to assess your SQL Server instances for migration to Azure SQL, make sure you've discovered the SQL instances you want to assess using the Azure Migrate appliance, [follow this tutorial](tutorial-discover-vmware.md)
-- If you want to try out this feature in an existing project, please ensure that you have completed the [prerequisites](how-to-discover-sql-existing-project.md) in this article.
+- Before you follow this tutorial to assess your SQL Server instances for migration to Azure SQL, make sure you've discovered the SQL instances you want to assess using the Azure Migrate appliance, [follow this tutorial](tutorial-discover-vmware.md).
+- If you want to try out this feature in an existing project, ensure that you have completed the [prerequisites](how-to-discover-sql-existing-project.md) in this article.
## Run an assessment

Run an assessment as follows:
-1. On the **Overview** page > **Windows, Linux and SQL Server**, click **Assess and migrate servers**.
- :::image type="content" source="./media/tutorial-assess-sql/assess-migrate.png" alt-text="Overview page for Azure Migrate":::
-2. On **Azure Migrate: Discovery and assessment**, click **Assess** and choose the assessment type as **Azure SQL**.
- :::image type="content" source="./media/tutorial-assess-sql/assess.png" alt-text="Dropdown to choose assessment type as Azure SQL":::
-3. In **Assess servers** > you will be able to see the assessment type pre-selected as **Azure SQL** and the discovery source defaulted to **Servers discovered from Azure Migrate appliance**.
-
-4. Click **Edit** to review the assessment properties.
- :::image type="content" source="./media/tutorial-assess-sql/assess-servers-sql.png" alt-text="Edit button from where assessment properties can be customized":::
-5. In Assessment properties > **Target Properties**:
+1. On the **Overview** page > **Servers, databases and web apps**, select **Assess and migrate servers**.
+
+ :::image type="content" source="./media/tutorial-assess-sql/assess-migrate-inline.png" alt-text="Screenshot of Overview page for Azure Migrate." lightbox="./media/tutorial-assess-sql/assess-migrate-expanded.png":::
+
+1. In **Azure Migrate: Discovery and assessment**, select **Assess** and choose the assessment type as **Azure SQL**.
+
+ :::image type="content" source="./media/tutorial-assess-sql/assess-inline.png" alt-text="Screenshot of Dropdown to choose assessment type as Azure SQL." lightbox="./media/tutorial-assess-sql/assess-expanded.png":::
+
+1. In **Assess servers**, the assessment type is pre-selected as **Azure SQL** and the discovery source is defaulted to **Servers discovered from Azure Migrate appliance**.
+
+1. Select **Edit** to review the assessment settings.
+ :::image type="content" source="./media/tutorial-assess-sql/assess-servers-sql-inline.png" alt-text="Screenshot of Edit button from where assessment settings can be customized." lightbox="./media/tutorial-assess-sql/assess-servers-sql-expanded.png":::
+1. In **Assessment settings** > **Target and pricing settings**, do the following:
- In **Target location**, specify the Azure region to which you want to migrate.
  - Azure SQL configuration and cost recommendations are based on the location that you specify.
- - In **Target deployment type**,
- - Select **Recommended**, if you want Azure Migrate to assess the readiness of your SQL instances for migrating to Azure SQL MI and Azure SQL DB, and recommend the best suited target deployment option, target tier, Azure SQL configuration and monthly estimates. [Learn More](concepts-azure-sql-assessment-calculation.md)
- - Select **Azure SQL DB**, if you want to assess the readiness of your SQL instances for migrating to Azure SQL Databases only and review the target tier, Azure SQL configuration and monthly estimates.
- - Select **Azure SQL MI**, if you want to assess the readiness of your SQL instances for migrating to Azure SQL Managed Instance only and review the target tier, Azure SQL configuration and monthly estimates.
+ - In **Environment type**, specify the environment for the SQL deployments to apply pricing applicable to Production or Dev/Test.
+ - In **Offer/Licensing program**, specify the Azure offer if you're enrolled. Currently the field is defaulted to Pay-as-you-go, which will give you retail Azure prices.
+ - You can get an additional discount by applying reserved capacity and Azure Hybrid Benefit on top of the Pay-as-you-go offer.
+ - You can apply Azure Hybrid Benefit on top of the Pay-as-you-go offer and Dev/Test environment. The assessment does not support applying Reserved Capacity on top of the Pay-as-you-go offer and Dev/Test environment.
+ - If the offer is set to *Pay-as-you-go* and Reserved capacity is set to *No reserved instances*, the monthly cost estimates are calculated by multiplying the number of hours chosen in the VM uptime field with the hourly price of the recommended SKU.
- In **Reserved Capacity**, specify whether you want to use reserved capacity for the SQL server after migration.
- - If you select a reserved capacity option, you can't specify "Discount (%)".
-
-6. In Assessment properties > **Assessment criteria**:
- - The Sizing criteria is defaulted to **Performance-based** which means Azure migrate will collect performance metrics pertaining to SQL instances and the databases managed by it to recommend an optimal-sized Azure SQL Database and/or Azure SQL Managed Instance configuration. You can specify:
- - **Performance history** to indicate the data duration on which you want to base the assessment. (Default is one day)
- - **Percentile utilization**, to indicate the percentile value you want to use for the performance sample. (Default is 95th percentile)
- - In **Comfort factor**, indicate the buffer you want to use during assessment. This accounts for issues like seasonal usage, short performance history, and likely increases in future usage. For example, if you use a comfort factor of two:
+ - If you select a reserved capacity option, you can't specify "Discount (%)" or "VM uptime".
+ - If the Reserved capacity is set to *1 year reserved* or *3 years reserved*, the monthly cost estimates are calculated by multiplying 744 hours in the VM uptime field with the hourly price of the recommended SKU.
+ - In **Currency**, select the billing currency for your account.
+ - In **Discount (%)**, add any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%.
+ - In **VM uptime**, specify the duration (days per month/hour per day) that servers/VMs will run. This is useful for computing cost estimates for SQL Server on Azure VM where you are aware that Azure VMs might not run continuously.
+ - Cost estimates for servers where recommended target is *SQL Server on Azure VM* are based on the duration specified.
+ - Default is 31 days per month/24 hours per day.
+ - In **Azure Hybrid Benefit**, specify whether you already have a Windows Server and/or SQL Server license. Azure Hybrid Benefit is a licensing benefit that helps you to significantly reduce the costs of running your workloads in the cloud. It works by letting you use your on-premises Software Assurance-enabled Windows Server and SQL Server licenses on Azure. For example, if you have SQL Server licenses and they're covered with active Software Assurance of SQL Server Subscriptions, you can apply for the Azure Hybrid Benefit when you bring licenses to Azure.
+
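The pricing rules described above can be sketched numerically. The following is a rough illustration only (the hourly price is hypothetical, not an actual Azure rate): pay-as-you-go with no reserved instances multiplies the chosen VM uptime hours by the recommended SKU's hourly price, while a reservation is always billed for the full 744 hours per month.

```python
def monthly_compute_cost(hourly_price: float, uptime_hours: float, reserved: bool = False) -> float:
    """Sketch of the cost rule above: pay-as-you-go multiplies the chosen VM
    uptime hours by the hourly price of the recommended SKU; a 1-year or
    3-year reservation is billed for the full 744 hours per month."""
    hours = 744 if reserved else uptime_hours  # 744 = 31 days x 24 hours
    return hourly_price * hours

# Hypothetical SKU at $0.50/hour, VM uptime of 12 hours/day for 31 days:
print(monthly_compute_cost(0.50, uptime_hours=31 * 12))
print(monthly_compute_cost(0.50, uptime_hours=31 * 12, reserved=True))
```

Note how the reserved estimate ignores the uptime field, matching the 744-hour rule stated above.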
+1. In **Assessment settings** > **Assessment criteria**,
+ - The **Sizing criteria** is defaulted to *Performance-based*, which means Azure Migrate will collect performance metrics pertaining to SQL instances and the databases managed by it to recommend an optimal-sized SQL Server on Azure VM and/or Azure SQL Database and/or Azure SQL Managed Instance configuration. You can specify:
+ - **Performance history** to indicate the data duration on which you want to base the assessment. (Default is one day.)
+ - **Percentile utilization**, to indicate the percentile value you want to use for the performance sample. (Default is 95th percentile.)
+ - In **Comfort factor**, indicate the buffer you want to use during assessment. This accounts for issues such as seasonal usage, short performance history, and likely increases in future usage. For example, the following table displays values if you use a comfort factor of two:
**Component** | **Effective utilization** | **Add comfort factor (2.0)**
--- | --- | ---
Cores | 2 | 4
Memory | 8 GB | 16 GB
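The sizing rule described above (take the chosen percentile of the collected performance samples, then apply the comfort-factor multiplier) can be sketched as follows; the sample values are hypothetical:

```python
def recommended_capacity(samples, percentile=0.95, comfort_factor=2.0):
    """Sketch of the sizing rule above: take the chosen percentile of the
    collected performance samples, then multiply by the comfort factor."""
    ordered = sorted(samples)
    index = min(int(percentile * len(ordered)), len(ordered) - 1)
    return ordered[index] * comfort_factor

# Ten hypothetical core-utilization samples peaking at 2 cores: the 95th
# percentile is 2, and a comfort factor of 2.0 yields 4 cores, as in the table.
print(recommended_capacity([1, 1, 2, 2, 2, 2, 2, 2, 2, 2]))
```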
-
-7. In **Pricing**:
- - In **Offer/Licensing program**, specify the Azure offer if you're enrolled. Currently you can only choose from Pay-as-you-go and Pay-as-you-go Dev/Test.
- - You can avail additional discount by applying reserved capacity and Azure Hybrid Benefit on top of Pay-as-you-go offer.
- - You can apply Azure Hybrid Benefit on top of Pay-as-you-go Dev/Test. The assessment currently does not support applying Reserved Capacity on top of Pay-as-you-go Dev/Test offer.
- - In **Service Tier**, choose the most appropriate service tier option to accommodate your business needs for migration to Azure SQL Database and/or Azure SQL Managed Instance:
- - Select **Recommended** if you want Azure Migrate to recommend the best suited service tier for your servers. This can be General purpose or Business critical. Learn More
+
+1. In **Assessment settings** > **Azure SQL Managed Instance sizing**,
+ - In **Service Tier**, choose the most appropriate service tier option to accommodate your business needs for migration to Azure SQL Managed Instance:
+ - Select *Recommended* if you want Azure Migrate to recommend the best suited service tier for your servers. This can be General purpose or Business critical.
+ - Select *General Purpose* if you want an Azure SQL configuration designed for budget-oriented workloads.
+ - Select *Business Critical* if you want an Azure SQL configuration designed for low-latency workloads with high resiliency to failures and fast failovers.
+ - **Instance type** - Default value is *Single instance*.
+1. In **Assessment settings** > **SQL Server on Azure VM sizing**:
+ - **Pricing Tier** - Default value is *Standard*.
+ - In **VM series**, specify the Azure VM series you want to consider for *SQL Server on Azure VM* sizing. Based on the configuration and performance requirements of your SQL Server or SQL Server instance, the assessment will recommend a VM size from the selected list of VM series.
+ - You can edit settings as needed. For example, if you don't want to include D-series VM, you can exclude D-series from this list.
+ > [!NOTE]
+ > As Azure SQL assessments are intended to give the best performance for your SQL workloads, the VM series list only has VMs that are optimized for running your SQL Server on Azure Virtual Machines (VMs). [Learn more](https://docs.microsoft.com/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist?view=azuresql&preserve-view=true#vm-size).
+ - **Storage Type** is defaulted to *Recommended*, which means the assessment will recommend the best suited Azure Managed Disk based on the chosen environment type, on-premises disk size, IOPS, and throughput.
+
+1. In **Assessment settings** > **Azure SQL Database sizing**,
+ - In **Service Tier**, choose the most appropriate service tier option to accommodate your business needs for migration to Azure SQL Database.
+ - Select **Recommended** if you want Azure Migrate to recommend the best suited service tier for your servers. This can be General purpose or Business critical.
- Select **General Purpose** if you want an Azure SQL configuration designed for budget-oriented workloads. - Select **Business Critical** if you want an Azure SQL configuration designed for low-latency workloads with high resiliency to failures and fast failovers.
- - In **Discount (%)**, add any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%.
- - In **Currency**, select the billing currency for your account.
- - In **Azure Hybrid Benefit**, specify whether you already have a SQL Server license. If you do and they're covered with active Software Assurance of SQL Server Subscriptions, you can apply for the Azure Hybrid Benefit when you bring licenses to Azure.
- - Click Save if you make changes.
- :::image type="content" source="./media/tutorial-assess-sql/view-all.png" alt-text="Save button on assessment properties":::
-8. In **Assess Servers** > click Next.
+ - **Instance type** - Default value is *Single database*.
+ - **Purchase model** - Default value is *vCore*.
+ - **Compute tier** - Default value is *Provisioned*.
+
+ - Select **Save** if you made changes.
+
+ :::image type="content" source="./media/tutorial-assess-sql/view-all-inline.png" alt-text="Screenshot to save the assessment properties." lightbox="./media/tutorial-assess-sql/view-all-expanded.png":::
+
+8. In **Assess Servers**, select **Next**.
9. In **Select servers to assess** > **Assessment name** > specify a name for the assessment.
10. In **Select or create a group** > select **Create New** and specify a group name.
- :::image type="content" source="./media/tutorial-assess-sql/assessment-add-servers.png" alt-text="Location of New group button":::
-11. Select the appliance, and select the servers you want to add to the group. Then click Next.
-12. In **Review + create assessment**, review the assessment details, and click Create Assessment to create the group and run the assessment.
- :::image type="content" source="./media/tutorial-assess-sql/assessment-create.png" alt-text="Location of Review and create assessment button.":::
-13. After the assessment is created, go to **Windows, Linux and SQL Server** > **Azure Migrate: Discovery and assessment** tile > Click on the number next to Azure SQL assessment.
- :::image type="content" source="./media/tutorial-assess-sql/assessment-sql-navigation.png" alt-text="Navigation to created assessment":::
-15. Click on the assessment name which you wish to view.
+
+ :::image type="content" source="./media/tutorial-assess-sql/assessment-add-servers-inline.png" alt-text="Screenshot of Location of New group button." lightbox="./media/tutorial-assess-sql/assessment-add-servers-expanded.png":::
+
+11. Select the appliance, select the servers you want to add to the group, and then select **Next**.
+12. In **Review + create assessment**, review the assessment details, and select **Create Assessment** to create the group and run the assessment.
+13. After the assessment is created, go to **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, select the number next to Azure SQL assessment. If you do not see the number populated, select **Refresh** to get the latest updates.
+
+ :::image type="content" source="./media/tutorial-assess-sql/assessment-sql-navigation.png" alt-text="Screenshot of Navigation to created assessment.":::
+
+15. Select the assessment name that you wish to view.
> [!NOTE]
> As Azure SQL assessments are performance-based assessments, we recommend that you wait at least a day after starting discovery before you create an assessment. This provides time to collect performance data with higher confidence. If your discovery is still in progress, the readiness of your SQL instances will be marked as **Unknown**. Ideally, after you start discovery, **wait for the performance duration you specify (day/week/month)** to create or recalculate the assessment for a high-confidence rating.
Run an assessment as follows:
**To view an assessment**:
-1. **Windows, Linux and SQL Server** > **Azure Migrate: Discovery and assessment** > Click on the number next to Azure SQL assessment.
-2. Click on the assessment name which you wish to view. As an example(estimations and costs for example only):
- :::image type="content" source="./media/tutorial-assess-sql/assessment-sql-summary.png" alt-text="SQL assessment overview":::
-3. Review the assessment summary. You can also edit the assessment properties or recalculate the assessment.
+1. In **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, select the number next to Azure SQL assessment.
+2. Select the assessment name that you wish to view. The following is an example (estimations and costs are for illustration only):
+
+ :::image type="content" source="./media/tutorial-assess-sql/assessment-sql-summary-inline.png" alt-text="Screenshot of Overview of SQL assessment." lightbox="./media/tutorial-assess-sql/assessment-sql-summary-expanded.png":::
-#### Discovered items
+3. Review the assessment summary. You can also edit the assessment settings or recalculate the assessment.
-This indicates the number of SQL servers, instances and databases that were assessed in this assessment.
-
-#### Azure readiness
+### Discovered entities
-This indicates the distribution of assessed SQL instances:
-
-**Target deployment type (in assessment properties)** | **Readiness**
- | |
-**Recommended** | Ready for Azure SQL Database, Ready for Azure SQL Managed Instance, Potentially ready for Azure VM, Readiness unknown (In case the discovery is in progress or there are some discovery issues to be fixed)
-**Azure SQL DB** or **Azure SQL MI** | Ready for Azure SQL Database or Azure SQL Managed Instance, Not ready for Azure SQL Database or Azure SQL Managed Instance, Readiness unknown (In case the discovery is in progress or there are some discovery issues to be fixed)
-
-You can drill-down to understand details around migration issues/warnings that you can remediate before migration to Azure SQL. [Learn More](concepts-azure-sql-assessment-calculation.md)
-You can also review the recommended Azure SQL configurations for migrating to Azure SQL databases and/or Managed Instances.
+This indicates the number of SQL servers, instances, and databases that were assessed in this assessment.
-#### Azure SQL Database and Managed Instance cost details
+### SQL Server migration scenarios
+
+This indicates the different migration strategies that you can consider for your SQL deployments. You can review the readiness for target deployment types and the cost estimates for SQL Servers/Instances/Databases that are marked ready or ready with conditions:
+
+1. **Recommended deployment**:
+In this strategy, the Azure SQL deployment type that is the most compatible with your SQL instance and is the most cost-effective is recommended. Migrating to a Microsoft-recommended target reduces your overall migration effort. If your instance is ready for SQL Server on Azure VM, Azure SQL Managed Instance, and Azure SQL Database, the target deployment type that has the fewest migration readiness issues and is the most cost-effective is recommended.
+You can see the SQL Server instance readiness for different recommended deployment targets and monthly cost estimates for SQL instances marked *Ready* and *Ready with conditions*.
+
+ - You can go to the Readiness report to:
+ - Review the recommended Azure SQL configurations for migrating to SQL Server on Azure VM and/or Azure SQL databases and/or Azure SQL Managed Instances.
+ - Understand details around migration issues/warnings that you can remediate before migration to the different Azure SQL targets. [Learn More](concepts-azure-sql-assessment-calculation.md).
+  - You can go to the cost estimates report to review the cost of each SQL instance after migrating to the recommended deployment target.
+
+ > [!NOTE]
+ > In the recommended deployment strategy, migrating instances to SQL Server on Azure VM is the recommended strategy for migrating SQL Server instances. When the SQL Server credentials are not available, the Azure SQL assessment provides right-sized lift-and-shift, that is, *Server to SQL Server on Azure VM* recommendations.
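
The selection rule described above (fewest readiness issues, then lowest cost) can be sketched roughly as follows. This is an illustrative sketch only: the per-target issue counts and monthly costs below are hypothetical placeholders, not Azure Migrate output, and the actual recommendation logic weighs more factors.

```python
# Illustrative sketch: pick the target deployment type with the fewest
# readiness issues, breaking ties on the lower estimated monthly cost.
# All numbers are hypothetical, not produced by Azure Migrate.

def recommend_target(candidates):
    """candidates: dict of target name -> (issue_count, monthly_cost_usd)."""
    return min(candidates, key=lambda t: (candidates[t][0], candidates[t][1]))

candidates = {
    "Azure SQL Database": (2, 350.0),
    "Azure SQL Managed Instance": (0, 1200.0),
    "SQL Server on Azure VM": (0, 900.0),
}

print(recommend_target(candidates))  # SQL Server on Azure VM
```

Here both Azure SQL Managed Instance and SQL Server on Azure VM have no readiness issues, so the cheaper of the two wins the tie-break.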
-The monthly cost estimate includes compute and storage costs for Azure SQL configurations corresponding to the recommended Azure SQL Database and/or Azure SQL Managed Instance deployment type. [Learn More](concepts-azure-sql-assessment-calculation.md#calculate-monthly-costs)
+1. **Migrate all instances to Azure SQL MI**:
+In this strategy, you can see the readiness and cost estimates for migrating all SQL Server instances to Azure SQL Managed Instance.
+
+1. **Migrate all instances to SQL Server on Azure VM**:
+In this strategy, you can see the readiness and cost estimates for migrating all SQL Server instances to SQL Server on Azure VM.
+
+1. **Migrate all servers to SQL Server on Azure VM**:
+In this strategy, you can see how you can rehost the servers running SQL Server to SQL Server on Azure VM and review the readiness and cost estimates.
+Even when SQL Server credentials are not available, this report will provide right-sized lift-and-shift, that is, *Server to SQL Server on Azure VM* recommendations. The readiness and sizing logic is similar to the Azure VM assessment type.
+
+1. **Migrate all SQL databases to Azure SQL Database**:
+In this strategy, you can see how you can migrate individual databases to Azure SQL Database and review the readiness and cost estimates.
### Review readiness
+You can review readiness reports for different migration strategies:
-1. Click **Azure SQL readiness**.
- :::image type="content" source="./media/tutorial-assess-sql/assessment-sql-readiness.png" alt-text="Azure SQL readiness details":::
-1. In Azure SQL readiness, review the **Azure SQL DB readiness** and **Azure SQL MI readiness** for the assessed SQL instances:
- - **Ready**: The instance is ready to be migrated to Azure SQL DB/MI without any migration issues or warnings.
- - Ready(hyperlinked and blue information icon): The instance is ready to be migrated to Azure SQL DB/MI without any migration issues but has some migration warnings that you need to review. You can click on the hyperlink to review the migration warnings and the recommended remediation guidance:
- :::image type="content" source="./media/tutorial-assess-sql/assess-ready.png" alt-text="Assessment with ready status":::
- - **Not ready**: The instance has one or more migration issues for migrating to Azure SQL DB/MI. You can click on the hyperlink and review the migration issues and the recommended remediation guidance.
- - **Unknown**: Azure Migrate can't assess readiness, because the discovery is in progress or there are issues during discovery that need to be fixed from the notifications blade. If the issue persists, please contact Microsoft support.
-1. Review the recommended deployment type for the SQL instance which is determined as per the matrix below:
-
- - **Target deployment type** (as selected in assessment properties): **Recommended**
-
- **Azure SQL DB readiness** | **Azure SQL MI readiness** | **Recommended deployment type** | **Azure SQL configuration and cost estimates calculated?**
- | | | |
- Ready | Ready | Azure SQL DB or Azure SQL MI [Learn more](concepts-azure-sql-assessment-calculation.md#recommended-deployment-type) | Yes
- Ready | Not ready or Unknown | Azure SQL DB | Yes
- Not ready or Unknown | Ready | Azure SQL MI | Yes
- Not ready | Not ready | Potentially ready for Azure VM [Learn more](concepts-azure-sql-assessment-calculation.md#potentially-ready-for-azure-vm) | No
- Not ready or Unknown | Not ready or Unknown | Unknown | No
-
- - **Target deployment type** (as selected in assessment properties): **Azure SQL DB**
-
- **Azure SQL DB readiness** | **Azure SQL configuration and cost estimates calculated?**
- | |
- Ready | Yes
- Not ready | No
- Unknown | No
-
- - **Target deployment type** (as selected in assessment properties): **Azure SQL MI**
+1. Select the **Readiness** report for any of the migration strategies.
+
+ :::image type="content" source="./media/tutorial-assess-sql/assessment-sql-readiness-inline.png" alt-text="Screenshot with Details of Azure SQL readiness" lightbox="./media/tutorial-assess-sql/assessment-sql-readiness-expanded.png":::
+
+1. Review the readiness columns in the respective reports:
- **Azure SQL MI readiness** | **Azure SQL configuration and cost estimates calculated?**
- | |
- Ready | Yes
- Not ready | No
- Unknown | No
-
-4. Click on the instance name drill down to see the number of user databases, instance details including instance properties, compute (scoped to instance) and source database storage details.
-5. Click on the number of user databases to review the list of databases and their details. As an example(estimations and costs for example only):
- :::image type="content" source="./media/tutorial-assess-sql/assessment-db.png" alt-text="SQL instance detail":::
-5. Click on review details in the Migration issues column to review the migration issues and warnings for a particular target deployment type.
- :::image type="content" source="./media/tutorial-assess-sql/assessment-db-issues.png" alt-text="DB migration issues and warnings":::
+ **Migration strategy** | **Readiness Columns (Respective deployment target)**
+ |
   Recommended | MI readiness (Azure SQL MI), VM readiness (SQL Server on Azure VM), DB readiness (Azure SQL DB)
   Instances to Azure SQL MI | MI readiness (Azure SQL Managed Instance)
   Instances to SQL Server on Azure VM | VM readiness (SQL Server on Azure VM)
   Servers to SQL Server on Azure VM | Azure VM readiness (SQL Server on Azure VM)
   Databases to Azure SQL DB | DB readiness (Azure SQL Database)
+
+1. Review the readiness for the assessed SQL instances/SQL Servers/Databases:
+ - **Ready**: The instance/server is ready to be migrated to SQL Server on Azure VM/Azure SQL MI/Azure SQL DB without any migration issues or warnings.
+    - **Ready** (hyperlinked, with a blue information icon): The instance is ready to be migrated to SQL Server on Azure VM/Azure SQL MI/Azure SQL DB without any migration issues but has some migration warnings that you need to review. You can select the hyperlink to review the migration warnings and the recommended remediation guidance.
+    - **Ready with conditions**: The instance/server has one or more migration issues for migrating to Azure VM/Azure SQL MI/Azure SQL DB. You can select the hyperlink and review the migration issues and the recommended remediation guidance.
+ - **Not ready**: The assessment could not find a SQL Server on Azure VM/Azure SQL MI/Azure SQL DB configuration meeting the desired configuration and performance characteristics. Select the hyperlink to review the recommendation to make the instance/server ready for the desired target deployment type.
+ - **Unknown**: Azure Migrate can't assess readiness, because the discovery is in progress or there are issues during discovery that need to be fixed from the notifications blade. If the issue persists, contact [Microsoft support](https://support.microsoft.com).
+
+1. Select the instance name and drill down to see the number of user databases, instance details including instance properties, compute (scoped to instance), and source database storage details.
+1. Select the number of user databases to review the list of databases and their details.
+1. Select **Review details** in the **Migration issues** column to review the migration issues and warnings for a particular target deployment type.
### Review cost estimates
-The assessment summary shows the estimated monthly compute and storage costs for Azure SQL configurations corresponding to the recommended Azure SQL databases and/or Managed Instances deployment type.
+The assessment summary shows the estimated monthly compute and storage costs for Azure SQL configurations corresponding to the recommended SQL Server on Azure VM and/or Azure SQL Managed Instances and/or Azure SQL Database deployment type.
1. Review the monthly total costs. Costs are aggregated for all SQL instances in the assessed group.
- - Cost estimates are based on the recommended Azure SQL configuration for an instance.
- - Estimated monthly costs for compute and storage are shown. As an example(estimations and costs for example only):
+ - Cost estimates are based on the recommended Azure SQL configuration for an instance/server/database.
+    - Estimated total (compute and storage) monthly costs are displayed. As an example:
- :::image type="content" source="./media/tutorial-assess-sql/assessment-sql-cost.png" alt-text="Cost details":::
+ :::image type="content" source="./media/tutorial-assess-sql/assessment-sql-cost-inline.png" alt-text="Screenshot of cost details." lightbox="./media/tutorial-assess-sql/assessment-sql-cost-expanded.png":::
+    - The compute and storage costs are split out in the individual cost estimate reports and at the instance/server/database level.
1. You can drill down at an instance level to see the Azure SQL configuration and cost estimates for each instance.
1. You can also drill down to the database list to review the Azure SQL configuration and cost estimates per database when an Azure SQL Database configuration is recommended.
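
As a rough sketch of how the aggregated monthly estimate in the summary is composed, the per-instance compute and storage figures are summed across the assessed group. The instance names and dollar amounts below are hypothetical placeholders, not Azure Migrate output.

```python
# Hypothetical per-instance estimates (USD/month). The assessment summary
# aggregates compute and storage costs across all SQL instances in the group.
instances = [
    {"name": "sql-prod-01", "compute": 730.0, "storage": 120.0},
    {"name": "sql-prod-02", "compute": 365.0, "storage": 60.0},
]

total = sum(i["compute"] + i["storage"] for i in instances)
print(f"Estimated monthly total: ${total:.2f}")  # Estimated monthly total: $1275.00
```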
migrate Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/whats-new.md
[Azure Migrate](migrate-services-overview.md) helps you to discover, assess, and migrate on-premises servers, apps, and data to the Microsoft Azure cloud. This article summarizes new releases and features in Azure Migrate.
+## Update (May 2022)
+- Upgraded the Azure SQL assessment experience to identify the ideal migration target for your SQL deployments across Azure SQL MI, SQL Server on Azure VM, and Azure SQL DB:
+    - We recommend migrating instances to *SQL Server on Azure VM* as per the Azure best practices.
+ - *Right sized Lift and Shift* - Server to *SQL Server on Azure VM*. We recommend this when SQL Server credentials are not available.
+ - Enhanced user-experience that covers readiness and cost estimates for multiple migration targets for SQL deployments in one assessment.
+
+
## Update (March 2022)

- Perform agentless VMware VM discovery, assessments, and migrations over a private network using Azure Private Link. [Learn more.](how-to-use-azure-migrate-with-private-endpoints.md)
- General Availability: Support to select subnets for each Network Interface Card of a replicating virtual machine in the VMware agentless migration scenario.
mysql Concepts Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-supported-versions.md
Azure Database for MySQL currently supports the following versions:
## MySQL Version 5.7
-Bug fix release: 5.7.29
+Bug fix release: 5.7.37
+
+Refer to the MySQL [release notes](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-37.html) to learn more about improvements and fixes in this version.
+
+## MySQL Version 8
+
+Bug fix release: 8.0.28
+
+Refer to the MySQL [release notes](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-28.html) to learn more about improvements and fixes in this version.
+ The service performs automated patching of the underlying hardware, OS, and database engine. The patching includes security and software updates. For MySQL engine, minor version upgrades are also included as part of the planned maintenance release. Users can configure the patching schedule to be system managed or define their custom schedule. During the maintenance schedule, the patch is applied and server may require a restart as part of the patching process to complete the update. With the custom schedule, users can make their patching cycle predictable and choose a maintenance window with minimum impact to the business. In general, the service follows monthly release schedule as part of the continuous integration and release.
-Refer to the MySQL [release notes](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-29.html) to learn more about improvements and fixes in this version.
## Managing updates and upgrades

The service automatically manages patching for bug fix version updates. For example, 5.7.29 to 5.7.30.
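
The rule above (automatic patching applies to bug-fix updates within the same major.minor series, such as 5.7.29 to 5.7.30) can be sketched as a simple version comparison. This is an illustrative sketch only, not the service's actual upgrade logic.

```python
# Sketch of the rule described above: bug-fix (patch) updates within the
# same major.minor series are applied automatically by the service;
# major-version upgrades are not.
def is_automatic_patch(current, target):
    cur = tuple(int(p) for p in current.split("."))
    tgt = tuple(int(p) for p in target.split("."))
    return cur[:2] == tgt[:2] and tgt[2] > cur[2]

print(is_automatic_patch("5.7.29", "5.7.30"))  # True
print(is_automatic_patch("5.7.37", "8.0.28"))  # False (major version upgrade)
```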
mysql How To Configure High Availability Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-configure-high-availability-cli.md
You can only create server using General purpose or Business Critical pricing t
**Example:** ```azurecli
- az mysql flexible-server create --name myservername --sku-name Standard_D2ds_v4 --tier Genaralpurpose --resource-group myresourcegroup --high-availability ZoneRedundant --location eastus
+ az mysql flexible-server create --name myservername --sku-name Standard_D2ds_v4 --tier GeneralPurpose --resource-group myresourcegroup --high-availability ZoneRedundant --location eastus
``` ## Disable high availability
network-watcher Network Watcher Nsg Flow Logging Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-overview.md
Also, when a NSG is deleted, by default the associated flow log resource is dele
- [Logic Apps](https://azure.microsoft.com/services/logic-apps/) > [!NOTE]
-> App services deployed under App Service Plan do not support NSG Flow Logs. Please refer [this documentaion](/articles/app-service/overview-vnet-integration.md#how-regional-virtual-network-integration-works.md) for additional details.
+> App services deployed under an App Service plan do not support NSG Flow Logs. For more information, see [this documentation](/azure/app-service/overview-vnet-integration#how-regional-virtual-network-integration-works).
## Best practices
network-watcher Traffic Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/traffic-analytics.md
Title: Azure traffic analytics | Microsoft Docs
-description: Learn how to analyze Azure network security group flow logs with traffic analytics.
+description: Learn about traffic analytics. Gain an overview of this solution for viewing network activity, securing networks, and optimizing performance.
documentationcenter: na--+ -+ na Previously updated : 01/04/2021- Last updated : 06/01/2022+ -+
+ - references_regions
+ - devx-track-azurepowershell
+ - kr2b-contr-experiment
-# Traffic Analytics
+# Traffic analytics
+
+Traffic analytics is a cloud-based solution that provides visibility into user and application activity in your cloud networks. Specifically, traffic analytics analyzes Azure Network Watcher network security group (NSG) flow logs to provide insights into traffic flow in your Azure cloud. With traffic analytics, you can:
-Traffic Analytics is a cloud-based solution that provides visibility into user and application activity in cloud networks. Traffic analytics analyzes Network Watcher network security group (NSG) flow logs to provide insights into traffic flow in your Azure cloud. With traffic analytics, you can:
+- Visualize network activity across your Azure subscriptions.
+- Identify hot spots.
+- Secure your network by using information about the following components to identify threats:
-- Visualize network activity across your Azure subscriptions and identify hot spots.-- Identify security threats to, and secure your network, with information such as open-ports, applications attempting internet access, and virtual machines (VM) connecting to rogue networks.-- Understand traffic flow patterns across Azure regions and the internet to optimize your network deployment for performance and capacity.-- Pinpoint network misconfigurations leading to failed connections in your network.
+ - Open ports
+ - Applications that attempt to access the internet
+ - Virtual machines (VMs) that connect to rogue networks
+
+- Optimize your network deployment for performance and capacity by understanding traffic flow patterns across Azure regions and the internet.
+- Pinpoint network misconfigurations that can lead to failed connections in your network.
> [!NOTE]
-> Traffic Analytics now supports collecting NSG Flow Logs data at a higher frequency of 10 mins
+> Traffic analytics now supports collecting NSG flow logs data at a frequency of every 10 minutes.
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]

## Why traffic analytics?
-It is vital to monitor, manage, and know your own network for uncompromised security, compliance, and performance. Knowing your own environment is of paramount importance to protect and optimize it. You often need to know the current state of the network, who is connecting, where they're connecting from, which ports are open to the internet, expected network behavior, irregular network behavior, and sudden rises in traffic.
+It's vital to monitor, manage, and know your own network for uncompromised security, compliance, and performance. Knowing your own environment is of paramount importance to protect and optimize it. You often need to know the current state of the network, including the following information:
+
+- Who is connecting to the network?
+- Where are they connecting from?
+- Which ports are open to the internet?
+- What's the expected network behavior?
+- Is there any irregular network behavior?
+- Are there any sudden rises in traffic?
-Cloud networks are different than on-premises enterprise networks, where you have netflow or equivalent protocol capable routers and switches, which provide the capability to collect IP network traffic as it enters or exits a network interface. By analyzing traffic flow data, you can build an analysis of network traffic flow and volume.
+Cloud networks are different from on-premises enterprise networks. In on-premises networks, routers and switches support NetFlow and other, equivalent protocols. You can use these devices to collect data about IP network traffic as it enters or exits a network interface. By analyzing traffic flow data, you can build an analysis of network traffic flow and volume.
-Azure virtual networks have NSG flow logs, which provide you information about ingress and egress IP traffic through a Network Security Group associated to individual network interfaces, VMs, or subnets. By analyzing raw NSG flow logs, and inserting intelligence of security, topology, and geography, traffic analytics can provide you with insights into traffic flow in your environment. Traffic Analytics provides information such as most communicating hosts, most communicating application protocols, most conversing host pairs, allowed/blocked traffic, inbound/outbound traffic, open internet ports, most blocking rules, traffic distribution per Azure datacenter, virtual network, subnets, or, rogue networks.
+With Azure virtual networks, NSG flow logs collect data about the network. These logs provide information about ingress and egress IP traffic through an NSG that's associated with individual network interfaces, VMs, or subnets. After analyzing raw NSG flow logs, traffic analytics combines the log data with intelligence about security, topology, and geography. Traffic analytics then provides you with insights into traffic flow in your environment.
+
+Traffic analytics provides the following information:
+
+- Most-communicating hosts
+- Most-communicating application protocols
+- Most-conversing host pairs
+- Allowed and blocked traffic
+- Inbound and outbound traffic
+- Open internet ports
+- Most-blocking rules
+- Traffic distribution per Azure datacenter, virtual network, subnets, or rogue network
## Key components -- **Network security group (NSG)**: Contains a list of security rules that allow or deny network traffic to resources connected to an Azure Virtual Network. NSGs can be associated to subnets, individual VMs (classic), or individual network interfaces (NIC) attached to VMs (Resource Manager). For more information, see [Network security group overview](../virtual-network/network-security-groups-overview.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json).-- **Network security group (NSG) flow logs**: Allow you to view information about ingress and egress IP traffic through a network security group. NSG flow logs are written in json format and show outbound and inbound flows on a per rule basis, the NIC the flow applies to, five-tuple information about the flow (source/destination IP address, source/destination port, and protocol), and if the traffic was allowed or denied. For more information about NSG flow logs, see [NSG flow logs](network-watcher-nsg-flow-logging-overview.md).-- **Log Analytics**: An Azure service that collects monitoring data and stores the data in a central repository. This data can include events, performance data, or custom data provided through the Azure API. Once collected, the data is available for alerting, analysis, and export. Monitoring applications such as network performance monitor and traffic analytics are built using Azure Monitor logs as a foundation. For more information, see [Azure Monitor logs](../azure-monitor/logs/log-query-overview.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json).-- **Log Analytics workspace**: An instance of Azure Monitor logs, where the data pertaining to an Azure account, is stored. For more information about Log Analytics workspaces, see [Create a Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json).-- **Network Watcher**: A regional service that enables you to monitor and diagnose conditions at a network scenario level in Azure. 
You can turn NSG flow logs on and off with Network Watcher. For more information, see [Network Watcher](network-watcher-monitoring-overview.md).
+- **Network security group (NSG)**: A resource that contains a list of security rules that allow or deny network traffic to resources that are connected to an Azure virtual network. NSGs can be associated with subnets, individual VMs (classic), or individual network interfaces (NICs) that are attached to VMs (Resource Manager). For more information, see [Network security group overview](../virtual-network/network-security-groups-overview.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json).
+
+- **NSG flow logs**: Recorded information about ingress and egress IP traffic through an NSG. NSG flow logs are written in JSON format and include:
+
+ - Outbound and inbound flows on a per rule basis.
+ - The NIC that the flow applies to.
+ - Information about the flow, such as the source and destination IP address, the source and destination port, and the protocol.
+ - The status of the traffic, such as allowed or denied.
+
+ For more information about NSG flow logs, see [NSG flow logs](network-watcher-nsg-flow-logging-overview.md).
+
+- **Log Analytics**: A tool in the Azure portal that you use to work with Azure Monitor Logs data. Azure Monitor Logs is an Azure service that collects monitoring data and stores the data in a central repository. This data can include events, performance data, or custom data that's provided through the Azure API. After this data is collected, it's available for alerting, analysis, and export. Monitoring applications such as network performance monitor and traffic analytics use Azure Monitor Logs as a foundation. For more information, see [Azure Monitor Logs](../azure-monitor/logs/log-query-overview.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json). Log Analytics provides a way to edit and run queries on logs. You can also use this tool to analyze query results. For more information, see [Overview of Log Analytics in Azure Monitor](../azure-monitor/logs/log-analytics-overview.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json).
+
+- **Log Analytics workspace**: The environment that stores Azure Monitor log data that pertains to an Azure account. For more information about Log Analytics workspaces, see [Create a Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json).
+
+- **Network Watcher**: A regional service that you can use to monitor and diagnose conditions at a network-scenario level in Azure. You can use Network Watcher to turn NSG flow logs on and off. For more information, see [Network Watcher](network-watcher-monitoring-overview.md).
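
To make the NSG flow log fields above concrete, here is a minimal sketch that parses one flow tuple. It assumes the simplified version 1 tuple layout (unix timestamp, source IP, destination IP, source port, destination port, protocol `T`/`U`, direction `I`/`O`, decision `A`/`D`); later log versions add more fields, and the sample tuple is illustrative.

```python
# Parses a single NSG flow tuple (simplified version 1 format). Field
# meanings: unix time, source IP, destination IP, source port, destination
# port, protocol (T=TCP, U=UDP), direction (I=inbound, O=outbound),
# decision (A=allowed, D=denied). Sample data only.
def parse_flow_tuple(t):
    ts, src, dst, sport, dport, proto, direction, decision = t.split(",")
    return {
        "time": int(ts),
        "src": src, "dst": dst,
        "src_port": int(sport), "dst_port": int(dport),
        "protocol": {"T": "TCP", "U": "UDP"}[proto],
        "direction": {"I": "inbound", "O": "outbound"}[direction],
        "decision": {"A": "allowed", "D": "denied"}[decision],
    }

flow = parse_flow_tuple("1542110377,10.0.0.4,13.67.143.118,44931,443,T,O,A")
print(flow["protocol"], flow["direction"], flow["decision"])  # TCP outbound allowed
```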
## How traffic analytics works
-Traffic analytics examines the raw NSG flow logs and captures reduced logs by aggregating common flows among the same source IP address, destination IP address, destination port, and protocol. For example, Host 1 (IP address: 10.10.10.10) communicating to Host 2 (IP address: 10.10.20.10), 100 times over a period of 1 hour using port (for example, 80) and protocol (for example, http). The reduced log has one entry, that Host 1 & Host 2 communicated 100 times over a period of 1 hour using port *80* and protocol *HTTP*, instead of having 100 entries. Reduced logs are enhanced with geography, security, and topology information, and then stored in a Log Analytics workspace. The following picture shows the data flow:
+Traffic analytics examines raw NSG flow logs. It then reduces the log volume by aggregating flows that have a common source IP address, destination IP address, destination port, and protocol.
-![Data flow for NSG flow logs processing](./media/traffic-analytics/data-flow-for-nsg-flow-log-processing.png)
+An example might involve Host 1 at IP address 10.10.10.10 and Host 2 at IP address 10.10.20.10. Suppose these two hosts communicate 100 times over a period of one hour. The raw flow log has 100 entries in this case. If these hosts use the HTTP protocol on port 80 for each of those 100 interactions, the reduced log has one entry. That entry states that Host 1 and Host 2 communicated 100 times over a period of one hour by using the HTTP protocol on port 80.
+
+Reduced logs are enhanced with geography, security, and topology information and then stored in a Log Analytics workspace. The following diagram shows the data flow:
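
The reduction described above (100 raw entries for the same source IP, destination IP, destination port, and protocol collapsing to one aggregated entry) can be sketched as a simple grouping. The addresses and counts are the example's own; this is not the service's actual pipeline.

```python
from collections import Counter

# Reduce raw flows by aggregating on the key described above:
# (source IP, destination IP, destination port, protocol).
raw_flows = [("10.10.10.10", "10.10.20.10", 80, "HTTP")] * 100

reduced = Counter(raw_flows)
print(reduced)  # Counter({('10.10.10.10', '10.10.20.10', 80, 'HTTP'): 100})
```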
+ ## Prerequisites
+Before you use traffic analytics, ensure your environment meets the following requirements.
+ ### User access requirements
-Your account must be a member of one of the following [Azure built-in roles](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json):
+One of the following [Azure built-in roles](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json) needs to be assigned to your account:
|Deployment model | Role | | | |
Your account must be a member of one of the following [Azure built-in roles](../
| | Reader | | | Network Contributor |
-If your account is not assigned to one of the built-in roles, it must be assigned to a [custom role](../role-based-access-control/custom-roles.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json) that is assigned the following actions, at the subscription level:
+If none of the preceding built-in roles are assigned to your account, assign a [custom role](../role-based-access-control/custom-roles.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json) to your account. The custom role should support the following actions at the subscription level:
-- "Microsoft.Network/applicationGateways/read"-- "Microsoft.Network/connections/read"-- "Microsoft.Network/loadBalancers/read"-- "Microsoft.Network/localNetworkGateways/read"-- "Microsoft.Network/networkInterfaces/read"-- "Microsoft.Network/networkSecurityGroups/read"-- "Microsoft.Network/publicIPAddresses/read"-- "Microsoft.Network/routeTables/read"-- "Microsoft.Network/virtualNetworkGateways/read"-- "Microsoft.Network/virtualNetworks/read"-- "Microsoft.Network/expressRouteCircuits/read"
+- `Microsoft.Network/applicationGateways/read`
+- `Microsoft.Network/connections/read`
+- `Microsoft.Network/loadBalancers/read`
+- `Microsoft.Network/localNetworkGateways/read`
+- `Microsoft.Network/networkInterfaces/read`
+- `Microsoft.Network/networkSecurityGroups/read`
+- `Microsoft.Network/publicIPAddresses/read`
+- `Microsoft.Network/routeTables/read`
+- `Microsoft.Network/virtualNetworkGateways/read`
+- `Microsoft.Network/virtualNetworks/read`
+- `Microsoft.Network/expressRouteCircuits/read`
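
A custom role granting the actions listed above is typically defined as a JSON role definition. The following sketch builds one in Python; the role name, description, and `<subscription-id>` placeholder are assumptions for illustration, not values from this article.

```python
import json

# Build a custom role definition containing the read actions listed above.
# The role name and subscription scope are illustrative placeholders.
actions = [
    "Microsoft.Network/applicationGateways/read",
    "Microsoft.Network/connections/read",
    "Microsoft.Network/loadBalancers/read",
    "Microsoft.Network/localNetworkGateways/read",
    "Microsoft.Network/networkInterfaces/read",
    "Microsoft.Network/networkSecurityGroups/read",
    "Microsoft.Network/publicIPAddresses/read",
    "Microsoft.Network/routeTables/read",
    "Microsoft.Network/virtualNetworkGateways/read",
    "Microsoft.Network/virtualNetworks/read",
    "Microsoft.Network/expressRouteCircuits/read",
]

role = {
    "Name": "Traffic Analytics Reader",
    "IsCustom": True,
    "Description": "Read network resources required by traffic analytics.",
    "Actions": actions,
    "NotActions": [],
    "AssignableScopes": ["/subscriptions/<subscription-id>"],
}

print(json.dumps(role, indent=2))
```

A JSON file with this shape could then be supplied to `az role definition create --role-definition @role.json` to create the role at the subscription level.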
-For information on how to check user access permissions, see [Traffic analytics FAQ](traffic-analytics-faq.yml).
+For information about how to check user access permissions, see [Traffic analytics FAQ](traffic-analytics-faq.yml).
## Frequently asked questions
-For frequent asked questions about Traffic Analytics, see [Traffic analytics FAQ](traffic-analytics-faq.yml).
+To get answers to frequently asked questions about traffic analytics, see [Traffic analytics FAQ](traffic-analytics-faq.yml).
## Next steps -- To learn how to enable flow logs, see [Enabling NSG flow logging](network-watcher-nsg-flow-logging-portal.md).-- To understand the schema and processing details of Traffic Analytics, see [Traffic analytics schema](traffic-analytics-schema.md).
+- To learn how to turn on flow logs, see [Enable NSG flow log](network-watcher-nsg-flow-logging-portal.md#enable-nsg-flow-log).
+- To understand the schema and processing details of traffic analytics, see [Traffic analytics schema](traffic-analytics-schema.md).
openshift Howto Configure Ovn Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-configure-ovn-kubernetes.md
keywords: azure, openshift, aro, red hat, azure CLI, azure portal, ovn, ovn-kube
Customer intent: I need to configure OVN-Kubernetes network provider for Azure Red Hat OpenShift clusters.
-# Configure OVN-Kubernetes network provider for Azure Red Hat OpenShift clusters
+# Configure OVN-Kubernetes network provider for Azure Red Hat OpenShift clusters (preview)
This article explains how to configure the OVN-Kubernetes network provider for Azure Red Hat OpenShift clusters.
-## About the OVN-Kubernetes default Container Network Interface (CNI) network provider (preview)
+## About the OVN-Kubernetes default Container Network Interface (CNI) network provider
-OVN-Kubernetes Container Network Interface (CNI) for Azure Red Hat OpenShift (ARO) cluster is now available for preview.
+The OVN-Kubernetes Container Network Interface (CNI) for Azure Red Hat OpenShift clusters is now available in preview.
The OpenShift Container Platform cluster uses a virtualized network for pod and service networks. The OVN-Kubernetes Container Network Interface (CNI) plug-in is a network provider for the default cluster network. OVN-Kubernetes, which is based on the Open Virtual Network (OVN), provides an overlay-based networking implementation.
The process to create an Azure Red Hat OpenShift cluster with OVN is exactly the
The following high-level procedure outlines the steps to create an Azure Red Hat OpenShift cluster with OVN as the network provider:
-1. Install the preview Azure CLI extension.
-2. Verify your permissions.
-3. Register the resource providers.
-4. Create a virtual network containing two empty subnets.
-5. Create an Azure Red Hat OpenShift cluster by using OVN CNI network provider.
-6. Verify the Azure Red Hat OpenShift cluster is using OVN CNI network provider.
+1. Verify your permissions.
+2. Register the resource providers.
+3. Create a virtual network containing two empty subnets.
+4. Create an Azure Red Hat OpenShift cluster by using OVN CNI network provider.
+5. Verify the Azure Red Hat OpenShift cluster is using OVN CNI network provider.
## Verify your permissions
az aro create --resource-group $RESOURCEGROUP \
  --master-subnet master-subnet \
  --worker-subnet worker-subnet \
  --sdn-type OVNKubernetes \
- --pull-secret @pull-secret.txt \
+ --pull-secret @pull-secret.txt
```

## Verify an Azure Red Hat OpenShift cluster is using the OVN CNI network provider
oc get network.config/cluster -o jsonpath='{.status.networkType}{"\n"}'
```

The value of `status.networkType` must be `OVNKubernetes`.
+## Recommended content
+
+[Tutorial: Create an Azure Red Hat OpenShift 4 cluster](tutorial-create-cluster.md)
orbital Space Partner Program Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/space-partner-program-overview.md
Title: What is the Space Partner Program?
-description: Overview of the Azure Space Partner Program
+ Title: What is the Space Partner Community?
+description: Overview of the Azure Space Partner Community
Last updated 3/21/2022
-# Customer intent: Educate potential partners on how to engage with the Azure Space partner programs.
+# Customer intent: Educate potential partners on how to engage with the Azure Space Partner Community.
-# What is the Space Partner Program?
+# What is the Azure Space Partner Community?
At Microsoft, our mission is to empower every person and every organization on the planet to achieve more. When it comes to space, we're investing in building the tools that will allow every person and organization on Earth to realize the incredible potential of space.
Our differentiated ecosystem of partners spans space operators, manufacturers, s
:::image type="content" source="media/azure-space-partners.png" alt-text="List of all Azure Space partners":::
-## Why join the Space Partner Program?
+## Why join the Azure Space Partner Community?
-We believe in a better together story for Space and Spectrum partners running on Azure. By joining the program, you can gain access to various benefits such as:
+We believe in a better together story for Space and Spectrum partners running on Azure. By joining the community, you can gain access to various benefits such as:
- Azure Engineering Training & Adoption Resources - Quarterly NDA roadmap reviews and newsletters
We believe in a better together story for Space and Spectrum partners running on
- Co-sell and joint GTM coordination
- Opportunities to be showcased in Microsoft customer presentations and sales trainings

## Partner Requirements
-To join the program, we ask partners to commit to:
+To join the community, we ask partners to commit to:
- Sign a non-disclosure agreement with Microsoft
- Run solution(s) on Azure including Azure monetary commitment
To join the program, we ask partners to commit to:
- [5G core for Gov with Lockheed Martin](https://azure.microsoft.com/blog/new-azure-for-operators-solutions-and-services-built-for-the-future-of-telecommunications/) - [Private network based on SATCOM with Intelsat](https://www.intelsat.com/newsroom/intelsat-collaborates-with-microsoft-to-demonstrate-private-cellular-network-using-intelsats-global-satellite-and-ground-network/) - [Read this public deck on Microsoft Space offerings](https://azurespace.blob.core.windows.net/docs/Azure_Space_Public_Deck.pdf)-- Reach out to [SpacePartnerProgram@microsoft.com](mailto:SpacePartnerProgram@microsoft.com) to learn more and sign a non-disclosure agreement
+- Reach out to [Azure Space Partner Community](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR5Mbl7o3PghInEJV6ey1cpVUMVIzNU5XR0JWQ05RQjU3VDNaT1hDUE1BQS4u) to learn more and sign a non-disclosure agreement
## Next steps
postgresql Howto Ingest Azure Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/howto-ingest-azure-data-factory.md
+
+ Title: Azure Data Factory
+description: Step-by-step guide for using Azure Data Factory for ingestion on Hyperscale (Citus)
+++++ Last updated : 06/27/2022++
+# How to ingest data using Azure Data Factory
+
+[Azure Data Factory](../../data-factory/introduction.md) (ADF) is a cloud-based
+ETL and data integration service. It allows you to create data-driven workflows
+to move and transform data at scale.
+
+Using Azure Data Factory, you can create and schedule data-driven workflows
+(called pipelines) that ingest data from disparate data stores. Pipelines can
+run on-premises, in Azure, or on other cloud providers for analytics and
+reporting.
+
+ADF has a data sink for Hyperscale (Citus). The data sink allows you to bring
+your data (relational, NoSQL, data lake files) into Hyperscale (Citus) tables
+for storage, processing, and reporting.
+
+![Dataflow diagram for Azure Data Factory.](../media/howto-hyperscale-ingestion/azure-data-factory-architecture.png)
+
+## ADF for real-time ingestion to Hyperscale (Citus)
+
+Here are key reasons to choose Azure Data Factory for ingesting data into
+Hyperscale (Citus):
+
+* **Easy-to-use** - Offers a code-free visual environment for orchestrating and automating data movement.
+* **Powerful** - Uses the full capacity of underlying network bandwidth, up to 5 GiB/s throughput.
+* **Built-in Connectors** - Integrates all your data sources, with more than 90 built-in connectors.
+* **Cost Effective** - Supports a pay-as-you-go, fully managed serverless cloud service that scales on demand.
+
+## Steps to use ADF with Hyperscale (Citus)
+
+In this article, we'll create a data pipeline by using the Azure Data Factory
+user interface (UI). The pipeline in this data factory copies data from Azure
+Blob storage to a database in Hyperscale (Citus). For a list of data stores
+supported as sources and sinks, see the [supported data
+stores](../../data-factory/copy-activity-overview.md#supported-data-stores-and-formats)
+table.
+
+In Azure Data Factory, you can use the **Copy** activity to copy data among
+data stores located on-premises and in the cloud to Hyperscale (Citus). If you're
+new to Azure Data Factory, here's a quick guide on how to get started:
+
+1. Once ADF is provisioned, go to your data factory. You'll see the Data
+ Factory home page as shown in the following image:
+
+ :::image type="content" source="../media/howto-hyperscale-ingestion/azure-data-factory-home.png" alt-text="Landing page of Azure Data Factory." border="true":::
+
+2. On the home page, select **Orchestrate**.
+
+ :::image type="content" source="../media/howto-hyperscale-ingestion/azure-data-factory-orchestrate.png" alt-text="Orchestrate page of Azure Data Factory." border="true":::
+
+3. In the General panel under **Properties**, specify the desired pipeline name.
+
+4. In the **Activities** toolbox, expand the **Move and Transform** category,
+ and drag and drop the **Copy Data** activity to the pipeline designer
+ surface. Specify the activity name.
+
+ :::image type="content" source="../media/howto-hyperscale-ingestion/azure-data-factory-pipeline-copy.png" alt-text="Pipeline in Azure Data Factory." border="true":::
+
+5. Configure **Source**
+
+    :::image type="content" source="../media/howto-hyperscale-ingestion/azure-data-factory-configure-source.png" alt-text="Configuring the source in Azure Data Factory." border="true":::
+
+    1. Go to the Source tab. Select **+ New** to create a source dataset.
+ 2. In the **New Dataset** dialog box, select **Azure Blob Storage**, and then select **Continue**.
+ 3. Choose the format type of your data, and then select **Continue**.
+ 4. Under the **Linked service** text box, select **+ New**.
+    5. Specify the linked service name, select your storage account from the **Storage account name** list, and then test the connection.
+    6. Next to **File path**, select **Browse** and choose the desired file from Blob storage.
+    7. Select **OK** to save the configuration.
+
+6. Configure **Sink**
+
+    :::image type="content" source="../media/howto-hyperscale-ingestion/azure-data-factory-configure-sink.png" alt-text="Configuring the sink in Azure Data Factory." border="true":::
+
+    1. Go to the Sink tab. Select **+ New** to create a sink dataset.
+ 2. In the **New Dataset** dialog box, select **Azure Database for PostgreSQL**, and then select **Continue**.
+ 3. Under the **Linked service** text box, select **+ New**.
+    4. Specify the linked service name and select your server group from the list of Hyperscale (Citus) server groups. Add connection details and test the connection.
+
+ > [!NOTE]
+ >
+    > If your server group isn't present in the dropdown list, use the **Enter
+    > manually** option to add server details.
+
+ 5. Select the table name where you want to ingest the data.
+    6. Specify the **Write method** as COPY command.
+    7. Select **OK** to save the configuration.
+
+7. From the toolbar above the canvas, select **Validate** to validate the
+   pipeline settings. Fix any errors, and then revalidate until the pipeline
+   validates successfully.
+
+8. Select **Debug** from the toolbar to execute the pipeline.
+
+    :::image type="content" source="../media/howto-hyperscale-ingestion/azure-data-factory-execute.png" alt-text="Debug and execute in Azure Data Factory." border="true":::
+
+9. Once the pipeline runs successfully, select **Publish all** in the top
+   toolbar. This action publishes the entities (datasets and pipelines) you
+   created to Data Factory.
+
+## Calling a Stored Procedure in ADF
+
+In some scenarios, you might want to call a stored procedure or function to
+push aggregated data from a staging table to a summary table. Today, ADF
+doesn't offer a Stored Procedure activity for Azure Database for PostgreSQL,
+but as a workaround you can use the Lookup activity with a query that calls
+the stored procedure, as shown below:
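As a minimal sketch (the function name `summarize_daily_events` is a hypothetical placeholder for your own routine), the query supplied to the Lookup activity could look like this:

```sql
-- Hypothetical example: call a PostgreSQL function from an ADF Lookup activity.
-- The Lookup activity expects a result set, so wrap the call in SELECT.
-- summarize_daily_events() stands in for a function that moves aggregated
-- rows from the staging table into the summary table.
SELECT summarize_daily_events();
```

For a stored procedure (rather than a function), PostgreSQL uses `CALL my_procedure();`, but the Lookup activity needs a query that returns at least one row, which is one reason the function-plus-`SELECT` pattern is commonly used.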
++
+## Next steps
+
+Learn how to create a [real-time
+dashboard](tutorial-design-database-realtime.md) with Hyperscale (Citus).
private-link Private Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-overview.md
A private-link resource is the destination target of a specified private endpoin
| Azure SignalR Service | Microsoft.SignalRService/SignalR | signalr | | Azure SignalR Service | Microsoft.SignalRService/webPubSub | webpubsub | | Azure SQL Database | Microsoft.Sql/servers | SQL Server (sqlServer) |
-| Azure Storage | Microsoft.Storage/storageAccounts | Blob (blob, blob_secondary)<BR> Table (table, table_secondary)<BR> Queue (queue, queue_secondary)<BR> File (file, file_secondary)<BR> Web (web, web_secondary) |
+| Azure Storage | Microsoft.Storage/storageAccounts | Blob (blob, blob_secondary)<BR> Table (table, table_secondary)<BR> Queue (queue, queue_secondary)<BR> File (file, file_secondary)<BR> Web (web, web_secondary)<BR> Dfs (dfs, dfs_secondary) |
| Azure File Sync | Microsoft.StorageSync/storageSyncServices | File Sync Service | | Azure Synapse | Microsoft.Synapse/privateLinkHubs | synapse | | Azure Synapse Analytics | Microsoft.Synapse/workspaces | SQL, SqlOnDemand, Dev |
role-based-access-control Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/overview.md
Access management for cloud resources is a critical function for any organization that is using the cloud. Azure role-based access control (Azure RBAC) helps you manage who has access to Azure resources, what they can do with those resources, and what areas they have access to.
-Azure RBAC is an authorization system built on [Azure Resource Manager](../azure-resource-manager/management/overview.md) that provides fine-grained access management of Azure resources.
+Azure RBAC is an authorization system built on [Azure Resource Manager](../azure-resource-manager/management/overview.md) that provides fine-grained access management to Azure resources.
This video provides a quick overview of Azure RBAC.
search Cognitive Search Concept Image Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-concept-image-scenarios.md
Previously updated : 06/10/2022 Last updated : 06/24/2022
Optionally, you can define projections to accept image-analyzed output into a [k
Image processing is indexer-driven, which means that the raw inputs must be a supported file type (as determined by the skills you choose) from a [supported data source](search-indexer-overview.md#supported-data-sources). + Image analysis supports JPEG, PNG, GIF, and BMP
-+ OCR supports JPEG, PNG, GIF, BMP, and TIF
++ OCR supports JPEG, PNG, BMP, and TIF

Images are either standalone binary files or embedded in documents (PDF, RTF, and Microsoft application files). A maximum of 1000 images will be extracted from a given document. If there are more than 1000 images in a document, the first 1000 will be extracted and a warning will be generated.
search Cognitive Search Concept Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-concept-intro.md
Previously updated : 02/01/2022 Last updated : 06/27/2022 # AI enrichment in Azure Cognitive Search
+ Machine translation and language detection support multi-lingual search + Entity recognition finds people, places, and other entities in large chunks of text
-+ Key phrase extraction identifies and then aggregates important terms
-+ Optical Character Recognition (OCR) extracts text from binary files
-+ Image analysis tags and describes images in searchable text fields
++ Key phrase extraction identifies and then outputs important terms
++ Optical Character Recognition (OCR) recognizes text in binary files
++ Image analysis describes image content and outputs the descriptions as searchable text fields
-AI enrichment is an extension of an [**indexer**](search-indexer-overview.md) pipeline.
+AI enrichment is an extension of an [**indexer pipeline**](search-indexer-overview.md). It has all of the base components (indexer, data source, index), plus a [**skillset**](cognitive-search-working-with-skillsets.md) that specifies atomic enrichment steps.
-[**Blobs in Azure Storage**](../storage/blobs/storage-blobs-overview.md) are the most common data input, but any supported data source can provide the initial content. A [**skillset**](cognitive-search-working-with-skillsets.md), attached to an indexer, adds the AI processing. The indexer extracts content and sets up the pipeline. The skillset performs the enrichment steps. Output is always a [**search index**](search-what-is-an-index.md), and optionally a [**knowledge store**](knowledge-store-concept-intro.md).
+The following diagram shows the progression of AI enrichment:
-![Enrichment pipeline diagram](./media/cognitive-search-intro/cogsearch-architecture.png "enrichment pipeline overview")
+ :::image type="content" source="media/cognitive-search-intro/cognitive-search-enrichment-architecture.png" alt-text="Diagram of an enrichment pipeline." border="true":::
-Skillsets are composed of [*built-in skills*](cognitive-search-predefined-skills.md) from Cognitive Search or [*custom skills*](cognitive-search-create-custom-skill-example.md) for external processing that you provide. Custom skills arenΓÇÖt always complex. For example, if you have an existing package that provides pattern matching or a document classification model, you can wrap it in a custom skill.
+**Import** is the first step. Here, the indexer connects to a data source and pulls content (documents) into the search service. [Azure Blob Storage](../storage/blobs/storage-blobs-overview.md) is the most common resource used in AI enrichment scenarios, but any supported data source can provide content.
-Built-in skills fall into these categories:
+**Enrich & Index** covers most of the AI enrichment pipeline:
-+ **Machine translation** is provided by the [Text Translation](cognitive-search-skill-text-translation.md) skill, often paired with [language detection](cognitive-search-skill-language-detection.md) for multi-language solutions.
++ Enrichment starts when the indexer ["cracks documents"](search-indexer-overview.md#document-cracking) and extracts images and text. The kind of processing that occurs next will depend on your data and which skills you've added to a skillset. If you have images, they can be forwarded to skills that perform image processing. Text content is queued for text and natural language processing. Internally, skills create an "enriched document" that collects the transformations as they occur.
-+ **Image processing** skills include [Optical Character Recognition (OCR)](cognitive-search-skill-ocr.md) and identification of [visual features](cognitive-search-skill-image-analysis.md), such as facial detection, image interpretation, image recognition (famous people and landmarks), or attributes like image orientation. These skills create text representations of image content for full text search in Azure Cognitive Search.
+ Enriched content is generated during skillset execution, and is temporary unless you save it. In order for enriched content to appear in a search index, the indexer must have mapping information so that it can send enriched content to a field in a search index. Output field mappings set up these associations.
-+ **Natural language processing** skills include [Entity Recognition](cognitive-search-skill-entity-recognition-v3.md), [Language Detection](cognitive-search-skill-language-detection.md), [Key Phrase Extraction](cognitive-search-skill-keyphrases.md), text manipulation, [Sentiment Detection (including opinion mining)](cognitive-search-skill-sentiment-v3.md), and [Personal Identifiable Information Detection](cognitive-search-skill-pii-detection.md). With these skills, unstructured text is mapped as searchable and filterable fields in an index.
++ Indexing is the process wherein raw and enriched content is ingested into a [search index](search-what-is-an-index.md) (its files and folders).
-Built-in skills are based on the Cognitive Services APIs: [Computer Vision](../cognitive-services/computer-vision/index.yml) and [Language Service](../cognitive-services/language-service/overview.md). Unless your content input is small, expect to [attach a billable Cognitive Services resource](cognitive-search-attach-cognitive-services.md) to run larger workloads.
+**Exploration** is the last step. Output is always a [search index](search-what-is-an-index.md) that you can query from a client app. Output can optionally be a [knowledge store](knowledge-store-concept-intro.md) consisting of blobs and tables in Azure Storage that are accessed through data exploration tools or downstream processes. [Field mappings](search-indexer-field-mappings.md), [output field mappings](cognitive-search-output-field-mapping.md), and [projections](knowledge-store-projection-overview.md) determine the data paths that direct content out of the pipeline and into a search index or knowledge store. The same enriched content can appear in both, using implicit or explicit field mappings to send the content to the correct fields.
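As a sketch of how output field mappings are expressed (every name here is a hypothetical placeholder, not taken from the article), an indexer definition associates enriched nodes with index fields like this:

```json
{
  "name": "my-indexer",
  "dataSourceName": "my-datasource",
  "targetIndexName": "my-index",
  "skillsetName": "my-skillset",
  "outputFieldMappings": [
    {
      "sourceFieldName": "/document/content/keyphrases",
      "targetFieldName": "keyphrases"
    },
    {
      "sourceFieldName": "/document/content/organizations",
      "targetFieldName": "organizations"
    }
  ]
}
```

The `sourceFieldName` paths address nodes in the enriched document tree, while each `targetFieldName` must be a field defined in the search index.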
-## Availability and pricing
-
-AI enrichment is available in regions that have Azure Cognitive Services. You can check the availability of AI enrichment on the [Azure products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=search) page. AI enrichment is available in all regions except:
-
-+ Australia Southeast
-+ China North 2
-+ Germany West Central
-
-Billing follows a pay-as-you-go pricing model. The costs of using built-in skills are passed on when a multi-region Cognitive Services key is specified in the skillset. There are also costs associated with image extraction, as metered by Cognitive Search. Text extraction and utility skills, however, aren't billable. For more information, see [How you're charged for Azure Cognitive Search](search-sku-manage-costs.md#how-youre-charged-for-azure-cognitive-search).
+<!-- ![Enrichment pipeline diagram](./media/cognitive-search-intro/cogsearch-architecture.png "enrichment pipeline") -->
## When to use AI enrichment
-Enrichment is useful if raw content is unstructured text, image content, or content that needs language detection and translation. Applying AI through the built-in cognitive skills can unlock this content for full text search and data science applications.
+Enrichment is useful if raw content is unstructured text, image content, or content that needs language detection and translation. Applying AI through the [*built-in skills*](cognitive-search-predefined-skills.md) can unlock this content for full text search and data science applications.
-Enrichment also unlocks external processing. Open-source, third-party, or first-party code can be integrated into the pipeline as a custom skill. Classification models that identify salient characteristics of various document types fall into this category, but any external package that adds value to your content could be used.
+Enrichment also unlocks external processing that you provide. Open-source, third-party, or first-party code can be integrated into the pipeline as a custom skill. Classification models that identify salient characteristics of various document types fall into this category, but any external package that adds value to your content could be used.
### Use-cases for built-in skills
-A [skillset](cognitive-search-defining-skillset.md) that's assembled using built-in skills is well suited for the following application scenarios:
-
-+ [Optical Character Recognition (OCR)](cognitive-search-skill-ocr.md) that recognizes typeface and handwritten text in scanned documents (JPEG) is perhaps the most commonly used skill.
-
-+ [Text translation](cognitive-search-skill-text-translation.md) of multilingual content is another commonly used skill. Language detection is built into Text Translation, but you can also run [Language Detection](cognitive-search-skill-language-detection.md) as a separate skill to output a language code for each chunk of content.
-
-+ PDFs with combined image and text. Embedded text can be extracted without AI enrichment, but adding image and language skills can unlock more information than what could be obtained through standard text-based indexing.
+Built-in skills are based on the Cognitive Services APIs: [Computer Vision](../cognitive-services/computer-vision/index.yml) and [Language Service](../cognitive-services/language-service/overview.md). Unless your content input is small, expect to [attach a billable Cognitive Services resource](cognitive-search-attach-cognitive-services.md) to run larger workloads.
-+ Unstructured or semi-structured documents containing content that has inherent meaning or organization that is hidden in the larger document.
+A [skillset](cognitive-search-defining-skillset.md) that's assembled using built-in skills is well suited for the following application scenarios:
- Blobs in particular often contain a large body of content that is packed into a single "field". By attaching image and natural language processing skills to an indexer, you can create information that is extant in the raw content, but not otherwise surfaced as distinct fields.
++ **Image processing** skills include [Optical Character Recognition (OCR)](cognitive-search-skill-ocr.md) and identification of [visual features](cognitive-search-skill-image-analysis.md), such as facial detection, image interpretation, image recognition (famous people and landmarks), or attributes like image orientation. These skills create text representations of image content for full text search in Azure Cognitive Search.
- Some ready-to-use built-in cognitive skills that can help: [Key Phrase Extraction](cognitive-search-skill-keyphrases.md) and [Entity Recognition](cognitive-search-skill-entity-recognition-v3.md) (people, organizations, and locations to name a few).
++ **Machine translation** is provided by the [Text Translation](cognitive-search-skill-text-translation.md) skill, often paired with [language detection](cognitive-search-skill-language-detection.md) for multi-language solutions.
- Additionally, built-in skills can also be used restructure content through text split, merge, and shape operations.
++ **Natural language processing** analyzes chunks of text. Skills in this category include [Entity Recognition](cognitive-search-skill-entity-recognition-v3.md), [Sentiment Detection (including opinion mining)](cognitive-search-skill-sentiment-v3.md), and [Personal Identifiable Information Detection](cognitive-search-skill-pii-detection.md). With these skills, unstructured text is mapped as searchable and filterable fields in an index.

### Use-cases for custom skills
-Custom skills can support more complex scenarios, such as recognizing forms, or custom entity detection using a model that you provide and wrap in the [custom skill web interface](cognitive-search-custom-skill-interface.md). Several examples of custom skills include:
+[**Custom skills**](cognitive-search-create-custom-skill-example.md) execute external code that you provide. Custom skills can support more complex scenarios, such as recognizing forms, or custom entity detection using a model that you provide and wrap in the [custom skill web interface](cognitive-search-custom-skill-interface.md). Several examples of custom skills include:
+ [Forms Recognizer](../applied-ai-services/form-recognizer/overview.md)
+ [Bing Entity Search API](./cognitive-search-create-custom-skill-example.md)
+ [Custom entity recognition](https://github.com/Microsoft/SkillsExtractorCognitiveSearch)
-## Enrichment steps <a name="enrichment-steps"></a>
-
-An enrichment pipeline consists of [*indexers*](search-indexer-overview.md) that have [*skillsets*](cognitive-search-working-with-skillsets.md). A skillset defines the enrichment steps, and the indexer drives the skillset. When configuring an indexer, you can include properties like output field mappings that send enriched content to a [search index](search-what-is-an-index.md) or projections that define data structures in a [knowledge store](knowledge-store-concept-intro.md).
-
-Post-indexing, you can access content via search requests through all [query types supported by Azure Cognitive Search](search-query-overview.md).
-
-### Step 1: Connection and document cracking phase
-
-Indexers connect to external sources using information provided in an indexer data source. When the indexer connects to the resource, it will ["crack documents"](search-indexer-overview.md#document-cracking) to extract text and images.Image content can be routed to skills that perform image processing, while text content is queued for text processing.
-
-![Document cracking phase](./media/cognitive-search-intro/document-cracking-phase-blowup.png "document cracking")
+Custom skills aren't always complex. For example, if you have an existing package that provides pattern matching or a document classification model, you can wrap it in a custom skill.
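As a sketch, a custom skill that wraps an external classification endpoint might be declared in a skillset like this (the URI, description, and input/output names are hypothetical assumptions for illustration):

```json
{
  "@odata.type": "#Microsoft.Skills.Custom.WebApiSkill",
  "description": "Hypothetical custom skill that calls a document classification model",
  "uri": "https://example.azurewebsites.net/api/classify",
  "context": "/document",
  "inputs": [
    { "name": "text", "source": "/document/content" }
  ],
  "outputs": [
    { "name": "category", "targetName": "category" }
  ]
}
```

The skill posts each document's `text` input to the web endpoint, and the returned `category` value becomes a new node in the enriched document that output field mappings can then route to an index field.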
-This step assembles all of the initial or raw content that will undergo AI enrichment. For each document, an enrichment tree is created. Initially, the tree is just a root node representation, but it will grow and gain structure during skillset execution.
+## Storing output
-### Step 2: Skillset enrichment phase
+In Azure Cognitive Search, an indexer saves the output it creates. A single indexer run can create up to three data structures that contain enriched and indexed output.
-A skillset defines the atomic operations that are performed on each document. For example, for text and images extracted from a PDF, a skillset might apply entity recognition, language detection, or key phrase extraction to produce new fields in your index that arenΓÇÖt available natively in the source.
-
-![Enrichment phase](./media/cognitive-search-intro/enrichment-phase-blowup.png "enrichment phase")
-
- skillset can be minimal or highly complex, and determines not only the type of processing, but also the order of operations. Most skillsets contain about three to five skills.
-
-A skillset, plus the [output field mappings](cognitive-search-output-field-mapping.md) defined as part of an indexer, fully specifies the enrichment pipeline. For more information about pulling all of these pieces together, see [Define a skillset](cognitive-search-defining-skillset.md).
-
-Internally, the pipeline generates a collection of enriched documents. You can decide which parts of the enriched documents should be mapped to indexable fields in your search index. For example, if you applied the key phrase extraction and the entity recognition skills, those new fields would become part of the enriched document, and can be mapped to fields on your index. See [Annotations](cognitive-search-concept-annotations-syntax.md) to learn more about input/output formations.
-
-### Step 3: Indexing
-
-Indexing is the process wherein raw and enriched content is ingested as fields in a search index, and as [projections](knowledge-store-projection-overview.md) if you're also creating a knowledge store. The same enriched content can appear in both, using implicit or explicit field mappings to send the content to the correct fields.
-
-Enriched content is generated during skillset execution, and is temporary unless you save it. In order for enriched content to appear in a search index, the indexer must have mapping information so that it can send enriched content to a field in a search index. [Output field mappings](cognitive-search-output-field-mapping.md) set up these associations.
-
-## Storing enriched output
-
-In Azure Cognitive Search, an indexer saves the output it creates.
-
-A [**searchable index**](search-what-is-an-index.md) is one of the outputs that is always created by an indexer. Specification of an index is an indexer requirement, and when you attach a skillset, the output of the skillset, plus any fields that are mapped directly from the source, are used to populate the index. Usually, the outputs of specific skills, such as key phrases or sentiment scores, are ingested into the index in fields created for that purpose.
-
-A [**knowledge store**](knowledge-store-concept-intro.md) is an optional output, used for downstream apps like knowledge mining. A knowledge store is defined within a skillset. Its definition determines whether your enriched documents are projected as tables or objects (files or blobs). Tabular projections are recommended for interactive analysis in tools like Power BI. Files and blobs are typically used in data science or similar workloads.
-
-Finally, an indexer can [**cache enriched documents**](cognitive-search-incremental-indexing-conceptual.md) in Azure Blob Storage for potential reuse in subsequent skillset executions. The cache is for internal use. Cached enrichments are consumable by the same skillset that you rerun at a later date. Caching is helpful if your skillset include image analysis or OCR, and you want to avoid the time and expense of reprocessing image files.
+| Data store | Required | Location | Description |
+||-|-|-|
+| [**searchable index**](search-what-is-an-index.md) | Required | Search service | Used for full text search and other query forms. Specifying an index is an indexer requirement. Index content is populated from skill outputs, plus any source fields that are mapped directly to fields in the index. |
+| [**knowledge store**](knowledge-store-concept-intro.md) | Optional | Azure Storage | Used for downstream apps like knowledge mining or data science. A knowledge store is defined within a skillset. Its definition determines whether your enriched documents are projected as tables or objects (files or blobs) in Azure Storage. |
+| [**enrichment cache**](cognitive-search-incremental-indexing-conceptual.md) | Optional | Azure Storage | Used for caching enrichments for reuse in subsequent skillset executions. The cache stores imported, unprocessed content (cracked documents). It also stores the enriched documents created during skillset execution. Caching is particularly helpful if you're using image analysis or OCR, and you want to avoid the time and expense of reprocessing image files. |
Indexes and knowledge stores are fully independent of each other. While you must attach an index to satisfy indexer requirements, if your sole objective is a knowledge store, you can ignore the index after it's populated. Avoid deleting it, though. If you want to rerun the indexer and skillset, you'll need the index in order for the indexer to run.
-## Consuming enriched content
+## Exploring content
-The output of AI enrichment is either a [fully text-searchable index](search-what-is-an-index.md) on Azure Cognitive Search, or a [knowledge store](knowledge-store-concept-intro.md) in Azure Storage.
+After you've defined and loaded a [search index](search-what-is-an-index.md) or a [knowledge store](knowledge-store-concept-intro.md), you can explore its data.
-### Check content in a search index
+### Query a search index
[Run queries](search-query-overview.md) to access the enriched content generated by the pipeline. The index is like any other you might create for Azure Cognitive Search: you can supplement text analysis with custom analyzers, invoke fuzzy search queries, add filters, or experiment with scoring profiles to tune search relevance.
-### Check content in a knowledge store
+### Use data exploration tools on a knowledge store
In Azure Storage, a [knowledge store](knowledge-store-concept-intro.md) can assume the following forms: a blob container of JSON documents, a blob container of image objects, or tables in Table Storage. You can use [Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md), [Power BI](knowledge-store-connect-power-bi.md), or any app that connects to Azure Storage to access your content.
In Azure Storage, a [knowledge store](knowledge-store-concept-intro.md) can assu
+ A table is useful if you need slices of enriched documents, or if you want to include or exclude specific parts of the output. Tables are the recommended data source for data exploration and visualization in Power BI.
+## Availability and pricing
+
+Enrichment is available in regions that have Azure Cognitive Services. You can check the availability of enrichment on the [Azure products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=search) page. Enrichment is available in all regions except:
+
++ Australia Southeast
++ China North 2
++ Germany West Central
+
+Billing follows a pay-as-you-go pricing model. The costs of using built-in skills are passed on when a multi-region Cognitive Services key is specified in the skillset. There are also costs associated with image extraction, as metered by Cognitive Search. Text extraction and utility skills, however, aren't billable. For more information, see [How you're charged for Azure Cognitive Search](search-sku-manage-costs.md#how-youre-charged-for-azure-cognitive-search).
+ ## Checklist: A typical workflow
-1. When beginning a project, it's helpful to work with a subset of data. Indexer and skillset design is an iterative process, and the work goes faster with a small representative data set.
+An enrichment pipeline consists of [*indexers*](search-indexer-overview.md) that have [*skillsets*](cognitive-search-working-with-skillsets.md). A skillset defines the enrichment steps, and the indexer drives the skillset. When configuring an indexer, you can include properties like output field mappings that send enriched content to a [search index](search-what-is-an-index.md) or projections that define data structures in a [knowledge store](knowledge-store-concept-intro.md).
+
+Post-indexing, you can access content via search requests through all [query types supported by Azure Cognitive Search](search-query-overview.md).
+
+1. Start with a subset of data. Indexer and skillset design is an iterative process, and the work goes faster with a small representative data set.
1. Create a [data source](/rest/api/searchservice/create-data-source) that specifies a connection to your data.
-1. Create a [skillset](/rest/api/searchservice/create-skillset) to add enrichment.
+1. Create a [skillset](cognitive-search-defining-skillset.md) to add enrichment steps. If you're using a knowledge store, you'll specify it in this step. Unless you're doing a small proof-of-concept exercise, you'll want to [attach a multi-region Cognitive Services resource](cognitive-search-attach-cognitive-services.md) to the skillset.
+
+1. Create an [index schema](search-how-to-create-search-index.md) that defines a search index.
-1. Create an [index schema](/rest/api/searchservice/create-index) that defines a search index.
+1. Create and run the [indexer](search-howto-create-indexers.md) to bring all of the above components together. This step retrieves the data, runs the skillset, and loads the index. An indexer is also where you specify field mappings and output field mappings that set up the data path to a search index.
-1. Create an [indexer](/rest/api/searchservice/create-indexer) to bring all of the above components together. This step retrieves the data, runs the skillset, and loads the index.
+ If possible, [enable enrichment caching](cognitive-search-incremental-indexing-conceptual.md) in the indexer configuration. This step allows you to reuse existing enrichments later on.
-1. Run queries to evaluate results and modify code to update skillsets, schema, or indexer configuration.
+1. Run [queries](search-query-create.md) to evaluate results and modify code to update skillsets, schema, or indexer configuration.
-To repeat any of the above steps, [reset the indexer](search-howto-reindex.md) before you run it. Or, delete and recreate the objects on each run (recommended if you're using the free tier). You should also [enable enrichment caching](cognitive-search-incremental-indexing-conceptual.md) to reuse existing enrichments wherever possible.
+1. To repeat any of the above steps, [reset the indexer](search-howto-reindex.md) before you run it. Or, delete and recreate the objects on each run (recommended if you're using the free tier). If you enabled caching, the indexer will pull from the cache if data is unchanged at the source, and if your edits to the pipeline don't invalidate the cache.
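As a sketch of how the checklist's components fit together, the following indexer definition shows output field mappings that route enriched content to index fields, plus an enrichment cache. All object and field names here (such as `hotel-indexer` and `keyPhrases`) are illustrative assumptions, not values from this article, and the placeholder connection string must be replaced with your own:

```json
{
  "name": "hotel-indexer",
  "dataSourceName": "hotel-datasource",
  "targetIndexName": "hotel-index",
  "skillsetName": "hotel-skillset",
  "outputFieldMappings": [
    {
      "sourceFieldName": "/document/content/keyphrases",
      "targetFieldName": "keyPhrases"
    }
  ],
  "cache": {
    "storageConnectionString": "<your-storage-connection-string>",
    "enableReprocessing": true
  }
}
```

The "outputFieldMappings" array is what connects nodes in the enriched document tree to fields in the search index, and the "cache" section corresponds to the enrichment caching step above.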
## Next steps
search Cognitive Search Skill Image Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-image-analysis.md
Previously updated : 05/06/2022
Last updated : 06/24/2022

# Image Analysis cognitive skill
The **Image Analysis** skill extracts a rich set of visual features based on the
This skill uses the machine learning models provided by [Computer Vision](../cognitive-services/computer-vision/overview.md) in Cognitive Services. **Image Analysis** works on images that meet the following requirements:
-+ The image must be presented in JPEG, PNG, GIF, or BMP format
++ The image must be presented in JPEG, PNG, GIF or BMP format
+ The file size of the image must be less than 4 megabytes (MB)
+ The dimensions of the image must be greater than 50 x 50 pixels
Parameters are case-sensitive.
| Parameter name | Description |
|--|-|
-| `defaultLanguageCode` | A string indicating the language to return. The service returns recognition results in a specified language. If this parameter isn't specified, the default value is "en". <br/><br/>Supported languages are: <br/>`en` - English (default) <br/>`es` - Spanish <br/>`ja` - Japanese <br/>`pt` - Portuguese <br/>`zh` - Simplified Chinese|
-| `visualFeatures` | An array of strings indicating the visual feature types to return. Valid visual feature types include: <ul><li>*adult* - detects if the image is pornographic (depicts nudity or a sex act), gory (depicts extreme violence or blood) or suggestive (also known as racy content). </li><li>*brands* - detects various brands within an image, including the approximate location. The *brands* visual feature is only available in English.</li><li> *categories* - categorizes image content according to a [taxonomy](../cognitive-services/Computer-vision/Category-Taxonomy.md) defined by Cognitive Services. </li><li>*description* - describes the image content with a complete sentence in supported languages.</li><li>*faces* - detects if faces are present. If present, generates coordinates, gender and age. </li><li>*objects* - detects various objects within an image, including the approximate location. The *objects* visual feature is only available in English.</li><li> *tags* - tags the image with a detailed list of words related to the image content.</li></ul> Names of visual features are case-sensitive. Both *color* and *imageType* visual features have been deprecated, but you can access this functionality through a [custom skill](./cognitive-search-custom-skill-interface.md).|
+| `defaultLanguageCode` | A string indicating the language to return. The service returns recognition results in a specified language. If this parameter isn't specified, the default value is "en". <br/><br/>Supported languages include all generally available languages documented under the [Cognitive Services Computer Vision language support documentation](../cognitive-services/computer-vision/language-support.md#image-analysis).|
+| `visualFeatures` | An array of strings indicating the visual feature types to return. Valid visual feature types include: <ul><li>*adult* - detects if the image is pornographic (depicts nudity or a sex act), gory (depicts extreme violence or blood) or suggestive (also known as racy content). </li><li>*brands* - detects various brands within an image, including the approximate location. </li><li> *categories* - categorizes image content according to a [taxonomy](../cognitive-services/Computer-vision/Category-Taxonomy.md) defined by Cognitive Services. </li><li>*description* - describes the image content with a complete sentence in supported languages.</li><li>*faces* - detects if faces are present. If present, generates coordinates, gender and age. </li><li>*objects* - detects various objects within an image, including the approximate location. </li><li> *tags* - tags the image with a detailed list of words related to the image content.</li></ul> Names of visual features are case-sensitive. Both *color* and *imageType* visual features have been deprecated, but you can access this functionality through a [custom skill](./cognitive-search-custom-skill-interface.md). Refer to the [Computer Vision Image Analysis documentation](../cognitive-services/computer-vision/language-support.md#image-analysis) on which visual features are supported with each `defaultLanguageCode`.|
| `details` | An array of strings indicating which domain-specific details to return. Valid visual feature types include: <ul><li>*celebrities* - identifies celebrities if detected in the image.</li><li>*landmarks* - identifies landmarks if detected in the image. </li></ul> |

## Skill inputs
Parameters are case-sensitive.
| `categories` | Output is an array of [category](../cognitive-services/computer-vision/concept-categorizing-images.md) objects, where each category object is a complex type consisting of a `name` (string), `score` (double), and optional `detail` that contains celebrity or landmark details. See the [category taxonomy](../cognitive-services/Computer-vision/Category-Taxonomy.md) for the full list of category names. A detail is a nested complex type. A celebrity detail consists of a name, confidence score, and face bounding box. A landmark detail consists of a name and confidence score.|
| `description` | Output is a single [description](../cognitive-services/computer-vision/concept-describing-images.md) object of a complex type, consisting of lists of `tags` and `caption` (an array consisting of `Text` (string) and `confidence` (double)). |
| `faces` | Complex type consisting of `age`, `gender`, and `faceBoundingBox` having four bounding box coordinates (in pixels) indicating placement inside the image. Coordinates are `top`, `left`, `width`, `height`.|
-| `objects` | Output is an array of [visual feature objects](../cognitive-services/computer-vision/concept-object-detection.md) Each object is a complex type, consisting of `object` (string), `confidence` (double), `rectangle` (with four bounding box coordinates indicating placement inside the image), and a `parent` that contains an object name and confidence . |
+| `objects` | Output is an array of [visual feature objects](../cognitive-services/computer-vision/concept-object-detection.md). Each object is a complex type, consisting of `object` (string), `confidence` (double), `rectangle` (with four bounding box coordinates indicating placement inside the image), and a `parent` that contains an object name and confidence. |
| `tags` | Output is an array of [imageTag](../cognitive-services/computer-vision/concept-detecting-image-types.md) objects, where a tag object is a complex type consisting of `name` (string), `hint` (string), and `confidence` (double). The addition of a hint is rare. It's only generated if a tag is ambiguous. For example, an image tagged as "curling" might have a hint of "sports" to better indicate its content. |
+
## Sample skill definition

```json
If you get the error similar to `"One or more skills are invalid. Details: Error
+ [Built-in skills](cognitive-search-predefined-skills.md)
+ [How to define a skillset](cognitive-search-defining-skillset.md)
+ [Extract text and information from images](cognitive-search-concept-image-scenarios.md)
-+ [Create Indexer (REST)](/rest/api/searchservice/create-indexer)
++ [Create Indexer (REST)](/rest/api/searchservice/create-indexer)
search Cognitive Search Skill Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-ocr.md
Previously updated : 06/09/2022
Last updated : 06/24/2022

# OCR cognitive skill

The **Optical character recognition (OCR)** skill recognizes printed and handwritten text in image files. This article is the reference documentation for the OCR skill. See [Extract text from images](cognitive-search-concept-image-scenarios.md) for usage instructions.
-An OCR skill uses the machine learning models provided by [Computer Vision](../cognitive-services/computer-vision/overview.md) API [v3.0](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-ga/operations/5d986960601faab4bf452005) in Cognitive Services. The **OCR** skill maps to the following functionality:
+An OCR skill uses the machine learning models provided by [Computer Vision](../cognitive-services/computer-vision/overview.md) API [v3.2](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) in Cognitive Services. The **OCR** skill maps to the following functionality:
-+ For English, Spanish, German, French, Italian, Portuguese, and Dutch, the new ["Read"](../cognitive-services/computer-vision/overview-ocr.md#read-api) API is used.
-+ For all other languages, the [legacy OCR](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-ga/operations/56f91f2e778daf14a499f20d) API is used.
++ For the languages listed under the [Cognitive Services Computer Vision language support documentation](../cognitive-services/computer-vision/language-support.md#optical-character-recognition-ocr), the ["Read"](../cognitive-services/computer-vision/overview-ocr.md#read-api) API is used.
++ For Greek and Serbian Cyrillic, the [legacy OCR](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f20d) API is used.

The **OCR** skill extracts text from image files. Supported file formats include:
Parameters are case-sensitive.
| Parameter name | Description |
|--|-|
-| `detectOrientation` | Detects image orientation. Valid values are `true` or `false`.|
-| `defaultLanguageCode` | Language code of the input text. Supported languages include: <br/> `zh-Hans` (ChineseSimplified) <br/> `zh-Hant` (ChineseTraditional) <br/>`cs` (Czech) <br/>`da` (Danish) <br/>`nl` (Dutch) <br/>`en` (English) <br/>`fi` (Finnish) <br/>`fr` (French) <br/>`de` (German) <br/>`el` (Greek) <br/>`hu` (Hungarian) <br/>`it` (Italian) <br/>`ja` (Japanese) <br/>`ko` (Korean) <br/>`nb` (Norwegian) <br/>`pl` (Polish) <br/>`pt` (Portuguese) <br/>`ru` (Russian) <br/>`es` (Spanish) <br/>`sv` (Swedish) <br/>`tr` (Turkish) <br/>`ar` (Arabic) <br/>`ro` (Romanian) <br/>`sr-Cyrl` (SerbianCyrillic) <br/>`sr-Latn` (SerbianLatin) <br/>`sk` (Slovak) <br/>`unk` (Unknown) <br/><br/> If the language code is unspecified or null, the language is set to English. If the language is explicitly set to `unk`, the language is auto-detected. </p> |
+| `detectOrientation` | Detects image orientation. Valid values are `true` or `false`. <br/><br/> This parameter only applies if the [legacy OCR](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f20d) API is used. |
+| `defaultLanguageCode` | Language code of the input text. Supported languages include all generally available languages documented under the [Cognitive Services Computer Vision language support documentation](../cognitive-services/computer-vision/language-support.md#optical-character-recognition-ocr) and `unk` (Unknown). <br/><br/> If the language code is unspecified or null, the language is set to English. If the language is explicitly set to `unk`, all languages found are auto-detected and returned. |
| `lineEnding` | The value to use as a line separator. Possible values: "Space", "CarriageReturn", "LineFeed". The default is "Space". |

In previous versions, there was a parameter called "textExtractionAlgorithm" to specify extraction of "printed" or "handwritten" text. This parameter is deprecated because the current Read API algorithm extracts both types of text at once. If your skill includes this parameter, you don't need to remove it, but it won't be used during skill execution.
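As an illustrative sketch (the context path and output target name are assumptions, not values from this article), an OCR skill definition that combines these parameters might look like:

```json
{
  "@odata.type": "#Microsoft.Skills.Vision.OcrSkill",
  "context": "/document/normalized_images/*",
  "defaultLanguageCode": "en",
  "detectOrientation": true,
  "lineEnding": "Space",
  "inputs": [
    { "name": "image", "source": "/document/normalized_images/*" }
  ],
  "outputs": [
    { "name": "text", "targetName": "myText" }
  ]
}
```

Here "detectOrientation" would only take effect when the legacy OCR API is invoked, per the parameter table above.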
search Index Add Scoring Profiles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/index-add-scoring-profiles.md
Previously updated : 06/16/2021
Last updated : 06/24/2022
+
# Add scoring profiles to a search index

For full text search queries, the search engine computes a search score for each matching document, which allows results to be ranked from high to low. Azure Cognitive Search uses a default scoring algorithm to compute an initial score, but you can customize the calculation through a *scoring profile*. Scoring profiles are embedded in index definitions and include properties for boosting the score of matches, where additional criteria found in the profile provide the boosting logic. For example, you might want to boost matches based on their revenue potential, promote newer items, or perhaps boost items that have been in inventory too long.
-Unfamiliar with relevance concepts? The following video segment fast-forwards to how scoring profiles work in Azure Cognitive Search, but the video also covers basic concepts. You might also want to review [Similarity ranking and scoring](index-similarity-and-scoring.md) for more background.
+Unfamiliar with relevance concepts? The following video segment fast-forwards to how scoring profiles work in Azure Cognitive Search, but the video also covers basic concepts. You might also want to review [Relevance and scoring in Azure Cognitive Search](index-similarity-and-scoring.md) for more background.
> [!VIDEO https://www.youtube.com/embed/Y_X6USgvB1g?version=3&start=463&end=970]

## What is a scoring profile?
-A scoring profile is part of the index definition and is composed of weighted fields, functions and parameters. The purpose of a scoring profile is to boost or amplify matching documents based on criteria you provide.
+A scoring profile is part of the index definition and is composed of weighted fields, functions, and parameters. The purpose of a scoring profile is to boost or amplify matching documents based on criteria you provide.
-The following definition shows a simple profile named 'geo'. This one boosts results that have the search term in the hotelName field. It also uses the `distance` function to favor results that are within ten kilometers of the current location. If someone searches on the term 'inn', and 'inn' happens to be part of the hotel name, documents that include hotels with 'inn' within a 10 KM radius of the current location will appear higher in the search results.
+The following definition shows a simple profile named 'geo'. This example boosts results that have the search term in the hotelName field. It also uses the `distance` function to favor results that are within 10 kilometers of the current location. If someone searches on the term 'inn', and 'inn' happens to be part of the hotel name, documents that include hotels with 'inn' within a 10-kilometer radius of the current location will appear higher in the search results.
```json "scoringProfiles": [
POST /indexes/hotels/docs&api-version=2020-06-30
}
```
-This query searches on the term "inn" and passes in the current location. Notice that this query includes other parameters, such as scoringParameter. Query parameters are described in [Search Documents (REST API)](/rest/api/searchservice/Search-Documents).
+This query searches on the term "inn" and passes in the current location. Notice that this query includes other parameters, such as scoringParameter. Query parameters, including "scoringParameter", are described in [Search Documents (REST API)](/rest/api/searchservice/Search-Documents).
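For illustration (the values below are assumptions patterned on the 'geo' profile discussed in this article), the body of such a POST query could be sketched as follows, where the scoring parameter supplies the reference location as a lon,lat pair:

```json
{
  "search": "inn",
  "scoringProfile": "geo",
  "scoringParameters": ["currentLocation--122.123,44.77233"]
}
```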
See the [Extended example](#bkmk_ex) to review a more detailed example of a scoring profile.
See the [Extended example](#bkmk_ex) to review a more detailed example of a scor
Scores are computed for full text search queries for the purpose of ranking the most relevant matches and returning them at the top of the response. The overall score for each document is an aggregation of the individual scores for each field, where the individual score of each field is computed based on the term frequency and document frequency of the searched terms within that field (known as [TF-IDF](https://en.wikipedia.org/wiki/Tf%E2%80%93idf) or term frequency-inverse document frequency).
-> [!Tip]
+> [!TIP]
> You can use the [featuresMode](index-similarity-and-scoring.md#featuresmode-parameter-preview) parameter to request additional scoring details with the search results (including the field level scores).

## When to add scoring logic

You should create one or more scoring profiles when the default ranking behavior doesn't go far enough in meeting your business objectives. For example, you might decide that search relevance should favor newly added items. Likewise, you might have a field that contains profit margin, or some other field indicating revenue potential. Boosting results that are more meaningful to your users or the business is often the deciding factor in adoption of scoring profiles.
-Relevancy-based ordering in a search page is also implemented through scoring profiles. Consider search results pages you've used in the past that let you sort by price, date, rating, or relevance. In Azure Cognitive Search, scoring profiles can be used to drive the 'relevance' option. The definition of relevance is controlled by you, predicated on business objectives and the type of search experience you want to deliver.
+Relevancy-based ordering in a search page is also implemented through scoring profiles. Consider search results pages you've used in the past that let you sort by price, date, rating, or relevance. In Azure Cognitive Search, scoring profiles can be used to drive the 'relevance' option. The definition of relevance is user-defined, predicated on business objectives and the type of search experience you want to deliver.
<a name="bkmk_ex"></a>
Scoring profiles can be defined in Azure portal as shown in the following screen
### Using weighted fields
-Use weighted fields when field context is important and queries are full text search (also known as free form text search). For example, if a query includes the term "airport", you might want "airport" in the Description field to have more weight than in the HotelName.
+Use weighted fields when field context is important and queries are full text search. For example, if a query includes the term "airport", you might want "airport" in the Description field to have more weight than in the HotelName.
Weighted fields are composed of a searchable field and a positive number that is used as a multiplier. If the original field score of HotelName is 3, the boosted score for that field becomes 6, contributing to a higher overall score for the parent document itself.
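To illustrate the weighted-field mechanism described above (the profile name and weight values here are assumptions, not from this article), a weights-only profile that favors matches in Description over HotelName might be sketched as:

```json
"scoringProfiles": [
  {
    "name": "boostText",
    "text": {
      "weights": {
        "Description": 2,
        "HotelName": 1.5
      }
    }
  }
]
```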
Weighted fields are composed of a searchable field and a positive number that is
### Using functions
-Use functions when simple relative weights are insufficient or don't apply, as in the case of distance and freshness, which are calculations over numeric data. You can specify multiple functions per scoring profile.
+Use functions when simple relative weights are insufficient or don't apply, as is the case with distance and freshness, which are calculations over numeric data. You can specify multiple functions per scoring profile.
| Function | Description |
|-|-|
-| "freshness" | Boosts by values in a datetime field (Edm.DateTimeOffset). This function has a `boostingDuration` attribute so that you can specify a value representing a timespan over which boosting occurs. |
-| "magnitude" | Boosts based on how high or low a numeric value is. Scenarios that call for this function include boosting by profit margin, highest price, lowest price, or a count of downloads. This function can only be used with Edm.Double and Edm.Int fields. For the magnitude function, you can reverse the range, high to low, if you want the inverse pattern (for example, to boost lower-priced items more than higher-priced items). Given a range of prices from $100 to $1, you would set "boostingRangeStart" at 100 and "boostingRangeEnd" at 1 to boost the lower-priced items. |
-| "distance" | Boosts by proximity or geographic location. This function can only be used with Edm.GeographyPoint fields. |
-| "tag" | Boosts by tags that are common to both search documents and query strings. Tags are provided in a `tagsParameter`. This function can only be used with Edm.String and Collection(Edm.String) fields. |
+| "freshness" | Boosts by values in a datetime field (`Edm.DateTimeOffset`). This function has a "boostingDuration" attribute so that you can specify a value representing a timespan over which boosting occurs. |
+| "magnitude" | Boosts based on how high or low a numeric value is. Scenarios that call for this function include boosting by profit margin, highest price, lowest price, or a count of downloads. This function can only be used with `Edm.Double` and `Edm.Int` fields. For the magnitude function, you can reverse the range, high to low, if you want the inverse pattern (for example, to boost lower-priced items more than higher-priced items). Given a range of prices from $100 to $1, you would set "boostingRangeStart" at 100 and "boostingRangeEnd" at 1 to boost the lower-priced items. |
+| "distance" | Boosts by proximity or geographic location. This function can only be used with `Edm.GeographyPoint` fields. |
+| "tag" | Boosts by tags that are common to both search documents and query strings. Tags are provided in a "tagsParameter". This function can only be used with search fields of type `Edm.String` and `Collection(Edm.String)`. |
### Rules for using functions

+ Functions can only be applied to fields that are attributed as filterable.
+ Function type ("freshness", "magnitude", "distance", "tag") must be lower case.
-+ Functions cannot include null or empty values.
++ Functions can't include null or empty values.

<a name="bkmk_template"></a>
Use functions when simple relative weights are insufficient or don't apply, as i
|Attribute|Description|
||--|
-| name | Required. This is the name of the scoring profile. It follows the same naming conventions of a field. It must start with a letter, cannot contain dots, colons or @ symbols, and cannot start with the phrase azureSearch (case-sensitive).|
+| name | Required. This is the name of the scoring profile. It follows the same naming conventions as a field. It must start with a letter, can't contain dots, colons, or @ symbols, and can't start with the phrase azureSearch (case-sensitive).|
| text | Contains the weights property.|
| weights | Optional. Name-value pairs that specify a searchable field and a positive integer or floating-point number by which to boost a field's score. The positive integer or number becomes a multiplier for the original field score generated by the ranking algorithm. For example, if a field score is 2 and the weight value is 3, the boosted score for the field becomes 6. Individual field scores are then aggregated to create a document field score, which is then used to rank the document in the result set. |
| functions | Optional. A scoring function can only be applied to fields that are filterable.|
| functions > type | Required for scoring functions. Indicates the type of function to use. Valid values include magnitude, freshness, distance, and tag. You can include more than one function in each scoring profile. The function name must be lower case.|
-| functions > boost | Required for scoring functions. A positive number used as multiplier for raw score. It cannot be equal to 1.|
+| functions > boost | Required for scoring functions. A positive number used as multiplier for raw score. It can't be equal to 1.|
| functions > fieldname | Required for scoring functions. A scoring function can only be applied to fields that are part of the field collection of the index, and that are filterable. In addition, each function type introduces additional restrictions (freshness is used with datetime fields, magnitude with integer or double fields, and distance with location fields). You can only specify a single field per function definition. For example, to use magnitude twice in the same profile, you would need to include two definitions of magnitude, one for each field.|
| functions > interpolation | Required for scoring functions. Defines the slope for which the score boosting increases from the start of the range to the end of the range. Valid values include Linear (default), Constant, Quadratic, and Logarithmic. See [Set interpolations](#bkmk_interpolation) for details.|
| functions > magnitude | The magnitude scoring function is used to alter rankings based on the range of values for a numeric field. Some of the most common usage examples of this are: </br></br>"Star ratings:" Alter the scoring based on the value within the "Star Rating" field. When two items are relevant, the item with the higher rating will be displayed first. </br>"Margin:" When two documents are relevant, a retailer may wish to boost documents that have higher margins first. </br>"Click counts:" For applications that track click through actions to products or pages, you could use magnitude to boost items that tend to get the most traffic. </br>"Download counts:" For applications that track downloads, the magnitude function lets you boost items that have the most downloads.|
Use functions when simple relative weights are insufficient or don't apply, as i
| functions > magnitude > constantBoostBeyondRange | Valid values are true or false (default). When set to true, the full boost will continue to apply to documents that have a value for the target field that's higher than the upper end of the range. If false, the boost of this function won't be applied to documents having a value for the target field that falls outside of the range.| | functions > freshness | The freshness scoring function is used to alter ranking scores for items based on values in DateTimeOffset fields. For example, an item with a more recent date can be ranked higher than older items. </br></br>It is also possible to rank items like calendar events with future dates such that items closer to the present can be ranked higher than items further in the future. </br></br>In the current service release, one end of the range will be fixed to the current time. The other end is a time in the past based on the boostingDuration. To boost a range of times in the future, use a negative boostingDuration. </br></br>The rate at which the boosting changes from a maximum and minimum range is determined by the Interpolation applied to the scoring profile (see the figure below). To reverse the boosting factor applied, choose a boost factor of less than 1.| | functions > freshness > boostingDuration | Sets an expiration period after which boosting will stop for a particular document. See [Set boostingDuration](#bkmk_boostdur) in the following section for syntax and examples.|
-| functions > distance | The distance scoring function is used to affect the score of documents based on how close or far they are relative to a reference geographic location. The reference location is given as part of the query in a parameter (using the scoringParameter query parameter) as a lon,lat argument.|
-|functions > distance > referencePointParameter | A parameter to be passed in queries to use as reference location (using the scoringParameter query parameter). See [Search Documents (REST API)](/rest/api/searchservice/Search-Documents) for descriptions of query parameters.|
+| functions > distance | The distance scoring function is used to affect the score of documents based on how close or far they are relative to a reference geographic location. The reference location is given as part of the query in a parameter (using the scoringParameter query parameter) as a `lon,lat` argument.|
+|functions > distance > referencePointParameter | A parameter to be passed in queries to use as reference location (using the scoringParameter query parameter). |
| functions > distance > boostingDistance | A number that indicates the distance in kilometers from the reference location where the boosting range ends.| | functions > tag | The tag scoring function is used to affect the score of documents based on tags in documents and search queries. Documents that have tags in common with the search query will be boosted. The tags for the search query are provided as a scoring parameter in each search request (using the scoringParameter query parameter). |
-| functions > tag > tagsParameter | A parameter to be passed in queries to specify tags for a particular request (using the scoringParameter query parameter). See [Search Documents (REST API)](/rest/api/searchservice/Search-Documents) for descriptions of query parameters.|
+| functions > tag > tagsParameter | A parameter to be passed in queries to specify tags for a particular request (using the scoringParameter query parameter). The parameter consists of a comma-delimited list of whole terms. If a given tag within the list is itself a comma-delimited list, you can [use a text normalizer](search-normalizers.md) on the field to strip out the commas at query time (map the comma character to a space). This approach will "flatten" the list so that all terms are a single, long string of comma-delimited terms. |
| functions > functionAggregation | Optional. Applies only when functions are specified. Valid values include: sum (default), average, minimum, maximum, and firstMatching. A search score is a single value that is computed from multiple variables, including multiple functions. This attribute indicates how the boosts of all the functions are combined into a single aggregate boost that is then applied to the base document score. The base score is based on the [tf-idf](http://www.tfidf.com/) value computed from the document and the search query.| | defaultScoringProfile | When executing a search request, if no scoring profile is specified, then default scoring is used ([tf-idf](http://www.tfidf.com/) only). </br></br>You can override the built-in default, substituting a custom profile as the one to use when no specific profile is given in the search request.|
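The weight arithmetic described in the table above can be illustrated with a minimal Python sketch. This is not the service's actual ranking implementation (which is internal to Azure Cognitive Search); it only mirrors the documented example, where a raw field score of 2 with a weight of 3 yields a boosted score of 6, and per-field scores are then aggregated (here with a simple sum):

```python
# Illustrative sketch only -- not the service's real ranking algorithm.
# Mirrors the worked example from the table: score 2 * weight 3 = 6.

def boost_field_scores(field_scores, weights):
    """Multiply each raw field score by its configured weight (default 1)."""
    return {f: s * weights.get(f, 1) for f, s in field_scores.items()}

def aggregate(boosted, mode="sum"):
    """Combine per-field scores into one document score (default: sum)."""
    values = boosted.values()
    if mode == "sum":
        return sum(values)
    if mode == "average":
        return sum(values) / len(boosted)
    raise ValueError(f"unsupported aggregation: {mode}")

boosted = boost_field_scores({"title": 2.0, "body": 1.5}, {"title": 3})
print(boosted["title"])   # 6.0, matching the example in the table
print(aggregate(boosted)) # 7.5
```

The field names (`title`, `body`) and helper names are hypothetical; in a real index definition, weights are declared in the scoring profile's `text.weights` property.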
For more examples, see [XML Schema: Datatypes (W3.org web site)](https://www.w3.
## See also
-+ [Similarity ranking and scoring in Azure Cognitive Search](index-similarity-and-scoring.md)
++ [Relevance and scoring in Azure Cognitive Search](index-similarity-and-scoring.md)
+ [REST API Reference](/rest/api/searchservice/)
+ [Create Index API](/rest/api/searchservice/create-index)
+ [Azure Cognitive Search .NET SDK](/dotnet/api/overview/azure/search?)
search Search Indexer Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-troubleshooting.md
Previously updated : 06/20/2022 Last updated : 06/24/2022 # Indexer troubleshooting guidance for Azure Cognitive Search
sentinel Dns Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/dns-normalization-schema.md
The following list mentions fields that have specific guidelines for DNS events:
| | | | | | **EventType** | Mandatory | Enumerated | Indicates the operation reported by the record. <br><br> For DNS records, this value would be the [DNS op code](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml). <br><br>Example: `lookup`| | **EventSubType** | Optional | Enumerated | Either `request` or `response`. <br><br>For most sources, [only the responses are logged](#guidelines-for-collecting-dns-events), and therefore the value is often **response**. |
-| <a name=eventresultdetails></a>**EventResultDetails** | Mandatory | Enumerated | For DNS events, this field provides the [DNS response code](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml). <br><br>**Note**: IANA doesn't define the case for the values, so analytics must normalize the case. If the source provides only a numerical response code and not a response code name, the parser must include a lookup table to enrich with this value. <br><br> If this record represents a request and not a response, set to **NA**. <br><br>Example: `NXDOMAIN` |
+| <a name=eventresultdetails></a>**EventResultDetails** | Mandatory | Enumerated | For DNS events, this field provides the [DNS response code](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml). <br><br>**Notes**:<br>- IANA doesn't define the case for the values, so analytics must normalize the case.<br> - If the source provides only a numerical response code and not a response code name, the parser must include a lookup table to enrich with this value. <br>- If this record represents a request and not a response, set to **NA**. <br><br>Example: `NXDOMAIN` |
| **EventSchemaVersion** | Mandatory | String | The version of the schema documented here is **0.1.3**. | | **EventSchema** | Mandatory | String | The name of the schema documented here is **Dns**. | | **Dvc** fields| - | - | For DNS events, device fields refer to the system that reports the DNS event. |
The fields listed in this section are specific to DNS events, although many are
| **DstGeoCity** | Optional | City | The city associated with the destination IP address. For more information, see [Logical types](normalization-about-schemas.md#logical-types).<br><br>Example: `Burlington` | | **DstGeoLatitude** | Optional | Latitude | The latitude of the geographical coordinate associated with the destination IP address. For more information, see [Logical types](normalization-about-schemas.md#logical-types).<br><br>Example: `44.475833` | | **DstGeoLongitude** | Optional | Longitude | The longitude of the geographical coordinate associated with the destination IP address. For more information, see [Logical types](normalization-about-schemas.md#logical-types).<br><br>Example: `73.211944` |
-| **DstcRiskLevel** | Optional | Integer | The risk level associated with the destination. The value should be adjusted to a range of 0 to 100, which 0 being benign and 100 being a high risk.<br><br>Example: `90` |
+| **DstRiskLevel** | Optional | Integer | The risk level associated with the destination. The value should be adjusted to a range of 0 to 100, with 0 being benign and 100 being a high risk.<br><br>Example: `90` |
| **DstPortNumber** | Optional | Integer | Destination Port number.<br><br>Example: `53` | | <a name="dsthostname"></a>**DstHostname** | Optional | String | The destination device hostname, excluding domain information. If no device name is available, store the relevant IP address in this field.<br><br>Example: `DESKTOP-1282V4D`<br><br>**Note**: This value is mandatory if [DstIpAddr](#dstipaddr) is specified. | | <a name="dstdomain"></a>**DstDomain** | Optional | String | The domain of the destination device.<br><br>Example: `Contoso` |
The fields listed in this section are specific to DNS events, although many are
| <a name="dstdvcid"></a>**DstDvcId** | Optional | String | The ID of the destination device as reported in the record.<br><br>Example: `ac7e9755-8eae-4ffc-8a02-50ed7a2216c3` | | **DstDvcIdType** | Optional | Enumerated | The type of [DstDvcId](#dstdvcid), if known. Possible values include:<br> - `AzureResourceId`<br>- `MDEidIf`<br><br>If multiple IDs are available, use the first one from the list above, and store the others in the **DstDvcAzureResourceId** or **DstDvcMDEid** fields, respectively.<br><br>Required if **DstDeviceId** is used.| | **DstDeviceType** | Optional | Enumerated | The type of the destination device. Possible values include:<br>- `Computer`<br>- `Mobile Device`<br>- `IOT Device`<br>- `Other` |
-| <a name=query></a>**DnsQuery** | Mandatory | FQDN | The domain that the request tries to resolve. <br><br>**Note**: Some sources send the query in different formats. For example, in the DNS protocol itself, the query includes a dot (**.**)at the end, which must be removed.<br><br>While the DNS protocol allows for multiple queries in a single request, this scenario is rare, if it's found at all. If the request has multiple queries, store the first one in this field, and then and optionally keep the rest in the [AdditionalFields](normalization-common-fields.md#additionalfields) field.<br><br>Example: `www.malicious.com` |
+| <a name=query></a>**DnsQuery** | Mandatory | String | The domain that the request tries to resolve. <br><br>**Notes**:<br> - Some sources send valid FQDN queries in a different format. For example, in the DNS protocol itself, the query includes a dot (**.**) at the end, which must be removed.<br>- While the DNS protocol limits the type of value in this field to an FQDN, most DNS servers allow any value, and this field is therefore not limited to FQDN values only. Most notably, DNS tunneling attacks may use invalid FQDN values in the query field.<br>- While the DNS protocol allows for multiple queries in a single request, this scenario is rare, if it's found at all. If the request has multiple queries, store the first one in this field, and optionally keep the rest in the [AdditionalFields](normalization-common-fields.md#additionalfields) field.<br><br>Example: `www.malicious.com` |
| **Domain** | Alias | | Alias to [DnsQuery](#query). | | **DnsQueryType** | Optional | Integer | The [DNS Resource Record Type codes](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml). <br><br>Example: `28`|
-| **DnsQueryTypeName** | Recommended | Enumerated | The [DNS Resource Record Type](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml) names. <br><br>**Note**: IANA doesn't define the case for the values, so analytics must normalize the case as needed. If the source provides only a numerical query type code and not a query type name, the parser must include a lookup table to enrich with this value.<br><br>Example: `AAAA`|
+| **DnsQueryTypeName** | Recommended | Enumerated | The [DNS Resource Record Type](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml) names. <br><br>**Notes**:<br>- IANA doesn't define the case for the values, so analytics must normalize the case as needed.<br>- The value `ANY` is supported for the query type code 255.<br>- The value `TYPExxxx` is supported for unmapped query type codes, where `xxxx` is the numerical value of the query type code. This conforms to BIND's logging practice.<br>- If the source provides only a numerical query type code and not a query type name, the parser must include a lookup table to enrich with this value.<br><br>Example: `AAAA`|
| <a name=responsename></a>**DnsResponseName** | Optional | String | The content of the response, as included in the record.<br> <br> The DNS response data is inconsistent across reporting devices, is complex to parse, and has less value for source-agnostic analytics. Therefore the information model doesn't require parsing and normalization, and Microsoft Sentinel uses an auxiliary function to provide response information. For more information, see [Handling DNS response](#handling-dns-response).| | <a name=responsecodename></a>**DnsResponseCodeName** | Alias | | Alias to [EventResultDetails](#eventresultdetails) | | **DnsResponseCode** | Optional | Integer | The [DNS numerical response code](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml). <br><br>Example: `3`|
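The normalization steps described in the field guidelines above — trimming the trailing dot that the DNS protocol appends to queries, and enriching numeric codes with their IANA names via lookup tables — can be sketched as follows. Real ASIM parsers are written in KQL, so this Python is purely illustrative, the helper names are hypothetical, and the lookup tables below are deliberately partial:

```python
# Illustrative sketch of DNS field normalization per the ASIM guidelines.
# Partial IANA tables only; a real parser would carry the full registries.
RESPONSE_CODES = {0: "NOERROR", 2: "SERVFAIL", 3: "NXDOMAIN"}
QUERY_TYPES = {1: "A", 28: "AAAA", 255: "ANY"}

def normalize_query(raw_query):
    """Remove the trailing dot present in on-the-wire DNS query names."""
    return raw_query.rstrip(".")

def query_type_name(code):
    """Enrich a numeric query type; fall back to BIND-style TYPExxxx."""
    return QUERY_TYPES.get(code, f"TYPE{code}")

def response_code_name(code):
    """Enrich a numeric response code with its IANA name, if known.

    The string fallback here is an illustrative choice, not part of
    the schema.
    """
    return RESPONSE_CODES.get(code, str(code))

print(normalize_query("www.malicious.com."))  # www.malicious.com
print(query_type_name(28))                    # AAAA
print(response_code_name(3))                  # NXDOMAIN
```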
sentinel Normalization Develop Parsers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-develop-parsers.md
Handle the results as follows:
To make sure that your parser produces valid values, use the ASIM data tester by running the following query in the Microsoft Sentinel **Logs** page: ```KQL
- <parser name> | limit <X> | invoke ASimDataTester('<schema>')
+ <parser name> | limit <X> | invoke ASimDataTester ( ['<schema>'] )
```
+Specifying a schema is optional. If a schema is not specified, the `EventSchema` field is used to identify the schema the event should adhere to. If an event does not include an `EventSchema` field, only common fields will be verified. If a schema is specified as a parameter, this schema will be used to test all records. This is
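The fallback order described above — explicit parameter first, then the event's own `EventSchema` field, then common fields only — can be sketched in Python. The tester itself is a KQL function, so this function and its names are purely illustrative:

```python
# Illustrative sketch of the ASIM data tester's schema-selection fallback.
COMMON_FIELDS_ONLY = "common"

def schema_to_test(explicit_schema, event):
    """Pick the schema to validate against, per the fallback order above."""
    if explicit_schema:            # a parameter applies to all records
        return explicit_schema
    if "EventSchema" in event:     # otherwise use the event's own field
        return event["EventSchema"]
    return COMMON_FIELDS_ONLY      # otherwise verify common fields only

print(schema_to_test("Dns", {"EventSchema": "NetworkSession"}))  # Dns
print(schema_to_test(None, {"EventSchema": "Dns"}))              # Dns
print(schema_to_test(None, {}))                                  # common
```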