Updates from: 03/03/2022 02:07:29
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Conditional Access User Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/conditional-access-user-flow.md
To add a Conditional Access policy, disable security defaults:
1. Sign in to the [Azure portal](https://portal.azure.com/).
1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
-1. Under **Azure services**, select **Azure AD B2C**. Or use the search box to find and select **Azure AD B2C**.
+1. Under **Azure services**, select **Azure Active Directory**. Or use the search box to find and select **Azure Active Directory**.
1. Select **Properties**, and then select **Manage Security defaults**.

   ![Disable the security defaults](media/conditional-access-user-flow/disable-security-defaults.png)
active-directory-b2c Microsoft Graph Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/microsoft-graph-operations.md
For more information about accessing Azure AD B2C audit logs, see [Accessing Azu
When you want to manage Microsoft Graph, you can either do it as the application, using application permissions, or you can use delegated permissions. With delegated permissions, either the user or an administrator consents to the permissions that the app requests, and the app is delegated permission to act as the signed-in user when it makes calls to the target resource. Application permissions are used by apps that don't require a signed-in user to be present; because of this, only administrators can consent to application permissions.

> [!NOTE]
-> Delegated permissions for users signing in through user flows or custom policies cannot be used against delegated permissions for Microsoft Graph.
+> Delegated permissions for users signing in through user flows or custom policies cannot be used against delegated permissions for Microsoft Graph API.
## Code sample: How to programmatically manage user accounts

This code sample is a .NET Core console application that uses the [Microsoft Graph SDK](/graph/sdks/sdks-overview) to interact with Microsoft Graph API. Its code demonstrates how to call the API to programmatically manage users in an Azure AD B2C tenant.
The `RunAsync` method in the _Program.cs_ file:
1. Initializes the auth provider using [OAuth 2.0 client credentials grant](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md) flow. With the client credentials grant flow, the app is able to get an access token to call the Microsoft Graph API.
1. Sets up the Microsoft Graph service client with the auth provider:
- ```csharp
- // Read application settings from appsettings.json (tenant ID, app ID, client secret, etc.)
- AppSettings config = AppSettingsFile.ReadFromJsonFile();
-
- // Initialize the client credential auth provider
- IConfidentialClientApplication confidentialClientApplication = ConfidentialClientApplicationBuilder
- .Create(config.AppId)
- .WithTenantId(config.TenantId)
- .WithClientSecret(config.ClientSecret)
- .Build();
- ClientCredentialProvider authProvider = new ClientCredentialProvider(confidentialClientApplication);
-
- // Set up the Microsoft Graph service client with client credentials
- GraphServiceClient graphClient = new GraphServiceClient(authProvider);
- ```
The initialized *GraphServiceClient* is then used in _UserService.cs_ to perform the user management operations. For example, getting a list of the user accounts in the tenant:
-```csharp
-public static async Task ListUsers(GraphServiceClient graphClient)
-{
- Console.WriteLine("Getting list of users...");
-
- // Get all users (one page)
- var result = await graphClient.Users
- .Request()
- .Select(e => new
- {
- e.DisplayName,
- e.Id,
- e.Identities
- })
- .GetAsync();
-
- foreach (var user in result.CurrentPage)
- {
- Console.WriteLine(JsonConvert.SerializeObject(user));
- }
-}
-```
[Make API calls using the Microsoft Graph SDKs](/graph/sdks/create-requests) includes information on how to read and write information from Microsoft Graph, use `$select` to control the properties returned, provide custom query parameters, and use the `$filter` and `$orderBy` query parameters.
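The query options mentioned above compose into an ordinary request URL. As a rough illustration (the helper below is mine, not part of any SDK; `$select`, `$filter`, `$orderby`, and `$top` are standard OData query parameters supported by Microsoft Graph):

```python
GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def build_users_query(select=None, filter_=None, orderby=None, top=None):
    """Compose a Microsoft Graph /users request URL with OData query options.

    Illustrative helper only: it shows how the query options combine,
    not how the SDK builds requests internally.
    """
    parts = []
    if select:
        parts.append("$select=" + ",".join(select))
    if filter_:
        parts.append("$filter=" + filter_)
    if orderby:
        parts.append("$orderby=" + orderby)
    if top is not None:
        parts.append("$top=" + str(top))
    url = GRAPH_BASE + "/users"
    return url + "?" + "&".join(parts) if parts else url

print(build_users_query(
    select=["displayName", "id", "identities"],
    filter_="startswith(displayName,'A')",
    orderby="displayName",
))
```

The printed URL mirrors the properties selected by the `ListUsers` sample above.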
active-directory-b2c Multi Factor Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/multi-factor-authentication.md
In Azure AD B2C, you can delete a user's TOTP authenticator app enrollment. Then
1. In the left menu, select **Users**.
1. Search for and select the user for which you want to delete TOTP authenticator app enrollment.
1. In the left menu, select **Authentication methods**.
-1. Under **Usable authentication methods**, find **Software OATH token (Preview)**, and then select the 3-dot menu next to it. If you don't see this interface, select **Switch to the new user authentication methods experience! Click here to use it now** to switch to the new authentication methods experience.
+1. Under **Usable authentication methods**, find **Software OATH token (Preview)**, and then select the ellipsis menu next to it. If you don't see this interface, select the option to **"Switch to the new user authentication methods experience! Click here to use it now"** to switch to the new authentication methods experience.
1. Select **Delete**, and then select **Yes** to confirm.

   :::image type="content" source="media/multi-factor-authentication/authentication-methods.png" alt-text="User authentication methods":::
Learn how to [delete a user's Software OATH token authentication method](/graph/
- Learn about the [TOTP display control](display-control-time-based-one-time-password.md) and [Azure AD MFA technical profile](multi-factor-auth-technical-profile.md)
active-directory-b2c Partner Bindid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-bindid.md
You should now see BindID as a new OIDC Identity provider listed within your B2C
2. Select **New user flow**
-3. Select **Sign up and sign in** > **Version** **Reccomended** > **Create**.
+3. Select **Sign up and sign in** > **Version Recommended** > **Create**.
4. Enter a **Name** for your policy.
active-directory Functions For Customizing Application Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/functions-for-customizing-application-data.md
Example: Based on the user's first name, middle name and last name, you need to
SingleAppRoleAssignment([appRoleAssignments])

**Description:**
-Returns a single appRoleAssignment from the list of all appRoleAssignments assigned to a user for a given application. This function is required to convert the appRoleAssignments object into a single role name string. The best practice is to ensure only one appRoleAssignment is assigned to one user at a time, and if multiple roles are assigned the role string returned may not be predictable.
+Returns a single appRoleAssignment from the list of all appRoleAssignments assigned to a user for a given application. This function is required to convert the appRoleAssignments object into a single role name string. The best practice is to ensure only one appRoleAssignment is assigned to one user at a time. This function is not supported in scenarios where users have multiple app role assignments.
**Parameters:**
active-directory Concept Registration Mfa Sspr Combined https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-registration-mfa-sspr-combined.md
Before combined registration, users registered authentication methods for Azure
> [!NOTE]
> Starting on August 15th 2020, all new Azure AD tenants will be automatically enabled for combined registration.
-> After Sept. 30th, 2022, all existing Azure AD tenants will be automatically enabled for combined registration. After this date tenants will be unable to utilize the sepperate legacy registration workflows for MFA and SSPR.
+> After Sept. 30th, 2022, all existing Azure AD tenants will be automatically enabled for combined registration. After this date tenants will be unable to utilize the separate legacy registration workflows for MFA and SSPR.
This article outlines what combined security registration is. To get started with combined security registration, see the following article:
active-directory Howto Registration Mfa Sspr Combined https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-registration-mfa-sspr-combined.md
Before combined registration, users registered authentication methods for Azure
> [!NOTE]
> Starting on August 15th 2020, all new Azure AD tenants will be automatically enabled for combined registration. Tenants created after this date will be unable to utilize the legacy registration workflows.
-> After Sept. 30th, 2022, all existing Azure AD tenants will be automatically enabled for combined registration. After this date tenants will be unable to utilize the sepperate legacy registration workflows for MFA and SSPR.
+> After Sept. 30th, 2022, all existing Azure AD tenants will be automatically enabled for combined registration. After this date tenants will be unable to utilize the separate legacy registration workflows for MFA and SSPR.
To make sure you understand the functionality and effects before you enable the new experience, see the [Combined security information registration concepts](concept-registration-mfa-sspr-combined.md).
active-directory Cloudknox All Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-all-reports.md
This article provides you with a list and description of the system reports avai
## Download a system report
-1. In the CloudKnox home page, select the **Reports** tab, and then select the **Systems reports** subtab.
+1. In the CloudKnox home page, select the **Reports** tab, and then select the **Systems Reports** subtab.
1. In the **Report Name** column, find the report you want, and then select the down arrow to the right of the report name to download the report. Or, from the ellipses **(...)** menu, select **Download**.
- The following message displays: **Successfully started to generate on demand report.**
+ The following message displays: **Successfully Started To Generate On Demand Report.**
## Summary of available system reports
active-directory Google Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/google-federation.md
Previously updated : 02/24/2022 Last updated : 03/02/2022
First, create a new project in the Google Developers Console to obtain a client
1. You can leave your project at a publishing status of **Testing** and add test users to the OAuth consent screen. Or you can select the **Publish app** button on the OAuth consent screen to make the app available to any user with a Google Account.
+ > [!NOTE]
+ > In some cases, your app might require verification by Google (for example, if you update the application logo). For more information, see Google's [verification status help](https://support.google.com/cloud/answer/10311615#verification-status).
+ ## Step 2: Configure Google federation in Azure AD
+ You'll now set the Google client ID and client secret. You can use the Azure portal or PowerShell to do so. Be sure to test your Google federation configuration by inviting yourself. Use a Gmail address and try to redeem the invitation with your invited Google account.
active-directory Invite Internal Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/invite-internal-users.md
Previously updated : 09/10/2021 Last updated : 03/02/2022
Sending an invitation to an existing internal account lets you retain that user
> In Azure AD Connect sync, there's a default rule that writes the [onPremisesUserPrincipalName attribute](../hybrid/reference-connect-sync-attributes-synchronized.md#notes) to the user object. Because the presence of this attribute can prevent a user from signing in using external credentials, we block internal-to-external conversions for user objects with this attribute. If you're using Azure AD Connect and you want to be able to invite internal users to B2B collaboration, you'll need to [modify the default rule](../hybrid/how-to-connect-sync-change-the-configuration.md) so the onPremisesUserPrincipalName attribute isn't written to the user object.

## How to invite internal users to B2B collaboration
-You can use PowerShell or the invitation API to send a B2B invitation to the internal user. Make sure the email address you want to use for the invitation is set as the external email address on the internal user object.
+You can use the Azure portal, PowerShell, or the invitation API to send a B2B invitation to the internal user. Some things to note:
-- You must use the email address in the User.Mail property for the invitation.
-- The domain in the user's Mail property must match the account they're using to sign in. Otherwise, some services such as Teams won't be able to authenticate the user.
+- Before you invite the user, make sure the `User.Mail` property of the internal user object (the user's **Email** property in the Azure portal) is set to the external email address they'll use for B2B collaboration.
-By default, the invitation will send the user an email letting them know they've been invited, but you can suppress this email and send your own instead.
+- When you invite the user, an invitation is sent to the user via email. If you're using PowerShell or the invitation API, you can suppress this email by setting `SendInvitationMessage` to `False`. Then you can notify the user in another way. [Learn more about the invitation API](customize-invitation-api.md).
-> [!NOTE]
-> To send your own email or other communication, you can use `New-AzureADMSInvitation` with `-SendInvitationMessage:$false` to invite users silently, and then send your own email message to the converted user. See [Azure AD B2B collaboration API and customization](customize-invitation-api.md).
+- When the user redeems the invitation, the account they're using must match the domain in the `User.Mail` property. Otherwise, some services, such as Teams, won't be able to authenticate the user.
+
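The domain-matching requirement in the notes above can be expressed as a simple check. A sketch (the helper name and logic are mine, not an Azure AD API):

```python
def redemption_domains_match(mail: str, redeeming_account: str) -> bool:
    """Return True when the domain of the user's Mail property matches the
    domain of the account used to redeem the invitation (case-insensitive).

    Illustrative only: Azure AD performs this validation server-side; this
    sketch just shows the rule the notes above describe.
    """
    def domain(address: str) -> str:
        return address.rsplit("@", 1)[-1].strip().lower()
    return domain(mail) == domain(redeeming_account)

print(redemption_domains_match("anna@fabrikam.com", "Anna@FABRIKAM.com"))  # True
print(redemption_domains_match("anna@fabrikam.com", "anna@contoso.com"))   # False
```

If the domains differ, services such as Teams won't be able to authenticate the user, as noted above.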
+## Use the Azure portal to send a B2B invitation
+
+1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator or User administrator account for the directory.
+1. Select the **Azure Active Directory** service.
+1. Select **Users**.
+1. Find the user in the list or use the search box. Then select the user.
+1. On the user's profile page, in the **Identity** section, select **Manage B2B collaboration**.
+
+ ![Screenshot of the user profile](media/invite-internal-users/manage-b2b-collaboration-link.png)
+
+ > [!NOTE]
+ > If you see **Invitation accepted** instead of **Manage B2B collaboration**, the user has already been invited to use external credentials for B2B collaboration.
+
+1. Next to **Invite internal user to B2B collaboration?** select **Yes**, and then select **Done**.
+
+ ![Screenshot showing the invite internal user radio button](media/invite-internal-users/invite-internal-user-selector.png)
+
+ > [!NOTE]
+ > If the option is unavailable, make sure the user's **Email** property is set to the external email address they should use for B2B collaboration.
+
+1. A confirmation message appears and an invitation is sent to the user via email. The user is then able to redeem the invitation using their external credentials.
## Use PowerShell to send a B2B invitation
active-directory Reset Redemption Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/reset-redemption-status.md
If a user wants to sign in using a different email:
### Use PowerShell to reset redemption status
-Install the latest AzureADPreview PowerShell module and create a new invitation with `InvitedUserEmailAddress` set to the new email address, and `ResetRedemption` set to `true`.
-
-```powershell
-Uninstall-Module AzureADPreview
-Install-Module AzureADPreview
-Connect-AzureAD
-$ADGraphUser = Get-AzureADUser -objectID "UPN of User to Reset"
-$msGraphUser = New-Object Microsoft.Open.MSGraph.Model.User -ArgumentList $ADGraphUser.ObjectId
-New-AzureADMSInvitation -InvitedUserEmailAddress <<external email>> -SendInvitationMessage $True -InviteRedirectUrl "http://myapps.microsoft.com" -InvitedUser $msGraphUser -ResetRedemption $True
+```powershell
+Install-Module Microsoft.Graph
+Select-MgProfile -Name beta
+Connect-MgGraph
+
+$user = Get-MgUser -Filter "startsWith(mail, 'john.doe@fabrikam.net')"
+New-MgInvitation `
+ -InvitedUserEmailAddress $user.Mail `
+ -InviteRedirectUrl "http://myapps.microsoft.com" `
+ -ResetRedemption `
+ -SendInvitationMessage `
+ -InvitedUser $user
```
### Use Microsoft Graph API to reset redemption status
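The same reset can be expressed as a request body for the Graph beta `/invitations` endpoint. A sketch of the body only (property names follow the Microsoft Graph invitation resource; the HTTP call and authentication are omitted):

```python
import json

def build_reset_invitation_body(email: str, user_id: str, redirect_url: str) -> dict:
    """Sketch of a request body for POST https://graph.microsoft.com/beta/invitations
    that resets a guest user's redemption status.

    The helper itself is illustrative; only the property names come from
    the Graph invitation resource.
    """
    return {
        "invitedUserEmailAddress": email,
        "inviteRedirectUrl": redirect_url,
        "sendInvitationMessage": True,
        "resetRedemption": True,
        "invitedUser": {"id": user_id},
    }

body = build_reset_invitation_body(
    "john.doe@fabrikam.net",
    "00000000-0000-0000-0000-000000000000",
    "http://myapps.microsoft.com",
)
print(json.dumps(body, indent=2))
```

This mirrors the PowerShell example above: the same email, redirect URL, and user object, with `resetRedemption` set to `true`.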
active-directory Auth Ssh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-ssh.md
Previously updated : 10/19/2020 Last updated : 03/01/2022
SSH with Azure AD
## Implement SSH with Azure AD 
-* [Log in to a Linux® VM with Azure Active Directory credentials - Azure Virtual Machines ](../../virtual-machines/linux/login-using-aad.md)
+* [Log in to a Linux® VM with Azure Active Directory credentials - Azure Virtual Machines ](https://docs.microsoft.com/azure/active-directory/devices/howto-vm-sign-in-azure-ad-linux)
* [OAuth 2.0 device code flow - Microsoft identity platform ](../develop/v2-oauth2-device-code.md)
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
You can now automate creating, updating, and deleting user accounts for these ne
- [Gong](../saas-apps/gong-provisioning-tutorial.md)
- [LanSchool Air](../saas-apps/lanschool-air-provisioning-tutorial.md)
- [ProdPad](../saas-apps/prodpad-provisioning-tutorial.md)

For more information about how to better secure your organization by using automated user account provisioning, see [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
active-directory F5 Aad Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-aad-integration.md
SHA addresses this blind spot by enabling organizations to continue using their
Having Azure AD pre-authenticate access to BIG-IP published services provides many benefits:
-- Password-less authentication through [Windows Hello](/windows/security/identity-protection/hello-for-business/hello-overview), [MS Authenticator](https://support.microsoft.com/account-billing/download-and-install-the-microsoft-authenticator-app-351498fc-850a-45da-b7b6-27e523b8702a), [Fast Identity Online (FIDO) keys](../authentication/howto-authentication-passwordless-security-key.md), and [Certificate-based authentication](../authentication/active-directory-certificate-based-authentication-get-started.md)
+- Password-less authentication through [Windows Hello](/windows/security/identity-protection/hello-for-business/hello-overview), [MS Authenticator](https://support.microsoft.com/account-billing/download-and-install-the-microsoft-authenticator-app-351498fc-850a-45da-b7b6-27e523b8702a), [Fast Identity Online (FIDO) keys](../authentication/howto-authentication-passwordless-security-key.md), and [Certificate-based authentication](../authentication/concept-certificate-based-authentication.md)
- Preemptive [Conditional Access](../conditional-access/overview.md) and [Azure AD Multi-Factor Authentication (MFA)](../authentication/concept-mfa-howitworks.md)
Azure AD B2B guest access to SHA protected applications is also possible, but so
## Next steps
-Consider running an SHA Proof of concept (POC) using your existing BIG-IP infrastructure, or by [Deploying a BIG-IP Virtual Edition (VE) VM into Azure](f5-bigip-deployment-guide.md) takes approximately 30 minutes, at which point you'll have:
+Consider running a SHA Proof of concept (POC) using your existing BIG-IP infrastructure, or by [Deploying a BIG-IP Virtual Edition (VE) VM into Azure](f5-bigip-deployment-guide.md). Deploying a VM in Azure takes approximately 30 minutes, at which point you'll have:
-- A fully secured platform to model a SHA proof of concept
+- A fully secured platform to model a SHA pilot
- A pre-production instance for testing new BIG-IP system updates and hotfixes
-At the same time, you should identify one or two applications that can be published via the BIG-IP and protected with SHA.
+You should also identify one or two applications that can be published via the BIG-IP and protected with SHA.
Our recommendation is to start with an application that isn't yet published via a BIG-IP, so as to avoid potential disruption to production services. The guidelines mentioned in this article will help you get acquainted with the general procedure for creating the various BIG-IP configuration objects and setting up SHA. Once complete, you should be able to do the same with any other new services, plus have enough knowledge to convert existing BIG-IP published services over to SHA with minimal effort.
active-directory Overview Flagged Sign Ins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-flagged-sign-ins.md
na Previously updated : 11/12/2021 Last updated : 03/02/2022
active-directory Recommendation Integrate Third Party Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-integrate-third-party-apps.md
+
+ Title: Azure Active Directory recommendation - Integrate third party apps with Azure AD | Microsoft Docs
+description: Learn why you should integrate third party apps with Azure AD
+
+documentationcenter: ''
+editor: ''
+ms.assetid: 9b88958d-94a2-4f4b-a18c-616f0617a24e
+ na
+ Last updated : 03/02/2022
+# Azure AD recommendation: Integrate your third party apps
+
+[Azure AD recommendations](overview-recommendations.md) is a feature that provides you with personalized insights and actionable guidance to align your tenant with recommended best practices.
+
+This article covers the recommendation to integrate third party apps.
++
+## Description
+
+As an Azure AD admin responsible for managing applications, you want to use the Azure AD security features with your third party apps. Integrating these apps into Azure AD enables:
+
+- You to use one unified method to manage access to your third party apps.
+- Your users to benefit from using single sign-on to access all your apps with a single password.
++
+## Logic
+
+If Azure AD determines that none of your users are using Azure AD to authenticate to your third party apps, this recommendation shows up.
+
+## Value
+
+Integrating third party apps with Azure AD allows you to use Azure AD's security features.
+The integration:
+- Improves the productivity of your users.
+
+- Lowers your app management cost.
+
+You can then add an extra security layer by using conditional access to control how your users can access your apps.
+
+## Action plan
+
+1. Review the configuration of your apps.
+2. For each app that isn't integrated into Azure AD yet, verify whether an integration is possible.
+
+
+## Next steps
+
+- [Tutorials for integrating SaaS applications with Azure Active Directory](../saas-apps/tutorial-list.md)
+- [Azure AD reports overview](overview-reports.md)
active-directory Recommendation Migrate To Authenticator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-migrate-to-authenticator.md
+
+ Title: Azure Active Directory recommendation - Migrate to Microsoft authenticator | Microsoft Docs
+description: Learn why you should migrate your users to the Microsoft authenticator app in Azure AD.
+
+documentationcenter: ''
+editor: ''
+ms.assetid: 9b88958d-94a2-4f4b-a18c-616f0617a24e
+ na
+ Last updated : 03/02/2022
+# Azure AD recommendation: Migrate to Microsoft authenticator
+
+[Azure AD recommendations](overview-recommendations.md) is a feature that provides you with personalized insights and actionable guidance to align your tenant with recommended best practices.
+
+This article covers the recommendation to migrate users to authenticator.
++
+## Description
+
+Multi-factor authentication (MFA) is a key component in improving the security posture of your Azure AD tenant. However, while keeping your tenant safe is important, you should also keep the security-related overhead on your users as low as possible.
+
+One way to accomplish this goal is to migrate users who use SMS or voice calls for MFA to the Microsoft Authenticator app.
++
+## Logic
+
+If Azure AD detects that users in your tenant authenticated by using SMS or voice calls instead of the authenticator app in the past week, this recommendation shows up.
+
+## Value
+
+- Push notifications through the Microsoft authenticator app provide the least intrusive MFA experience for users. This is the most reliable and secure option because it relies on a data connection rather than telephony.
+- The verification code option in the Microsoft Authenticator app enables MFA even in isolated environments without data or cellular signals, where SMS and voice calls don't work.
+- The Microsoft authenticator app is available for Android and iOS.
+- Pathway to passwordless: the authenticator app can be a traditional MFA factor (one-time passcodes, push notifications), and when your organization is ready for passwordless, it can be used to sign in to Azure AD without a password.
+
+## Action plan
+
+1. Ensure that notification through mobile app and/or verification code from mobile app are available to users as authentication methods (see *How to configure verification options*).
+
+2. Educate users on how to add a work or school account.
+
+
+## Next steps
+
+- [Tutorials for integrating SaaS applications with Azure Active Directory](../saas-apps/tutorial-list.md)
+- [Azure AD reports overview](overview-reports.md)
active-directory Confluencemicrosoft Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/confluencemicrosoft-tutorial.md
As of now, following versions of Confluence are supported:
- Confluence: 5.0 to 5.10
- Confluence: 6.0.1 to 6.15.9
-- Confluence: 7.0.1 to 7.15.0
+- Confluence: 7.0.1 to 7.16.2
> [!NOTE]
> Please note that our Confluence Plugin also works on Ubuntu Version 16.04
active-directory Jiramicrosoft Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/jiramicrosoft-tutorial.md
Use your Microsoft Azure Active Directory account with Atlassian JIRA server to
To configure Azure AD integration with JIRA SAML SSO by Microsoft, you need the following items:

- An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-- JIRA Core and Software 6.4 to 8.21 or JIRA Service Desk 3.0 to 4.21.0 should installed and configured on Windows 64-bit version
+- JIRA Core and Software 6.4 to 8.22.0 or JIRA Service Desk 3.0 to 4.22.0 should be installed and configured on Windows 64-bit version
- JIRA server is HTTPS enabled
- Note the supported versions for the JIRA Plugin are mentioned in the section below.
- JIRA server is reachable on the Internet, particularly to the Azure AD login page for authentication, and should be able to receive the token from Azure AD
To get started, you need the following items:
## Supported versions of JIRA
-* JIRA Core and Software: 6.4 to 8.21
-* JIRA Service Desk 3.0 to 4.21.0
+* JIRA Core and Software: 6.4 to 8.22.0
+* JIRA Service Desk 3.0 to 4.22.0
* JIRA also supports 5.2. For more details, click [Microsoft Azure Active Directory single sign-on for JIRA 5.2](jira52microsoft-tutorial.md) > [!NOTE]
active-directory How To Dnsbind https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-dnsbind.md
Previously updated : 04/01/2021 Last updated : 02/22/2022 #Customer intent: Why are we doing this?
> [!IMPORTANT] > Azure Active Directory Verifiable Credentials is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-In this article:
-> [!div class="checklist"]
-> * Why do we need to link our DID to our domain?
-> * How do we link DIDs and domains?
-> * How does the domain linking process work?
-> * How does the verify/unverified domain logic work?
## Prerequisites
To link your DID to your domain, you need to have completed the following.
## Why do we need to link our DID to our domain?
-A DID starts out as an identifier that is not anchored to existing systems. A DID is useful because a user or organization can own it and control it. If an entity interacting with the organization does not know 'who' the DID belongs to, then the DID is not as useful.
+A DID starts out as an identifier that isn't anchored to existing systems. A DID is useful because a user or organization can own it and control it. If an entity interacting with the organization doesn't know 'who' the DID belongs to, then the DID isn't as useful.
Linking a DID to a domain solves the initial trust problem by allowing any entity to cryptographically verify the relationship between a DID and a Domain.
+## When do you need to update the domain in your DID?
+
+If the domain associated with your company changes, you also need to update the domain in your DID document, which is published in the ION network. You can update the domain in your DID directly from the Azure AD Verifiable Credential portal.
+ ## How do we link DIDs and domains?
-We make a link between a domain and a DID by implementing an open standard written by the Decentralized Identity Foundation called [Well-Known DID configuration](https://identity.foundation/.well-known/resources/did-configuration/). The verifiable credentials service in Azure Active Directory (Azure AD) helps your organization make the link between the DID and domain by including the domain information that you provided in your DID, and generating the well-known config file:
+We follow the [Well-Known DID configuration](https://identity.foundation/.well-known/resources/did-configuration/) specification when creating the link. The verifiable credentials service links your DID and domain. The service includes the domain information that you provided in your DID, and generates the well-known config file:
1. Azure AD uses the domain information you provide during organization setup to write a Service Endpoint within the DID Document. All parties who interact with your DID can see the domain your DID proclaims to be associated with.
+
```json "service": [ {
We make a link between a domain and a DID by implementing an open standard writt
} ```
-After you have the well-known configuration file, you need to make the file available using the domain name you specified when enabling your AAD for verifiable credentials.
+After you have the well-known configuration file, you need to make the file available using the domain name you specified when you enabled your Azure AD for verifiable credentials.
-* Host the well-known DID configuration file at the root of the domain.
-* Do not use redirects.
-* Use https to distribute the configuration file.
+- Host the well-known DID configuration file at the root of the domain.
+- Don't use redirects.
+- Use https to distribute the configuration file.
>[!IMPORTANT]
>Microsoft Authenticator doesn't honor redirects; the URL specified must be the final destination URL.
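The hosting rules above are mechanical enough to lint before you deploy. A minimal sketch, assuming the spec's standard resource path; the helper name is ours, and a real check would also issue an HTTP request to confirm that no redirect occurs:

```python
from urllib.parse import urlparse

# Hypothetical helper: statically check a did-configuration.json URL
# against the hosting rules (https, served from the root of the domain).
# Redirect behavior can only be confirmed with an actual HTTP request,
# which this sketch deliberately omits.

WELL_KNOWN_PATH = "/.well-known/did-configuration.json"

def well_known_url_ok(url: str) -> bool:
    parts = urlparse(url)
    return (
        parts.scheme == "https"
        and parts.netloc != ""
        and parts.path == WELL_KNOWN_PATH
    )
```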
-## User experience
+## User experience in the wallet
-When a user is going through an issuance flow or presenting a verifiable credential, they should know something about organization and its DID. If the domain our verifiable credential wallet, Microsoft Authenticator, validates a DID's relationship with the domain in the DID document and presents users with two different experiences depending on the outcome.
+When a user goes through an issuance flow or presents a verifiable credential, they should know something about the organization and its DID. Microsoft Authenticator validates a DID's relationship with the domain in the DID document and presents users with one of two experiences, depending on the outcome.
## Verified domain

Before Microsoft Authenticator displays a **Verified** icon, a few things need to be true:

* The DID signing the self-issued OpenID Provider (SIOP) request must have a service endpoint for Linked Domain.
-* The root domain does not use a redirect and uses https.
+* The root domain doesn't use a redirect and uses https.
* The domain listed in the DID Document has a resolvable well-known resource. * The well-known resource's verifiable credential is signed with the same DID that was used to sign the SIOP that Microsoft Authenticator used to kick start the flow.
If all of the previously mentioned are true, then Microsoft Authenticator displays a **Verified** icon.
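The criteria above form a simple conjunction: every check must pass before the **Verified** icon is shown. A toy sketch of that decision, with field names of our own invention (not Authenticator's internal model):

```python
from dataclasses import dataclass

@dataclass
class DomainChecks:
    # Field names are illustrative, not Authenticator's internal model.
    siop_did_has_linked_domain_endpoint: bool
    root_domain_https_no_redirect: bool
    well_known_resource_resolvable: bool
    well_known_signed_by_same_did: bool

def shows_verified_icon(c: DomainChecks) -> bool:
    """Verified icon only when every check passes; otherwise warn the user."""
    return all((
        c.siop_did_has_linked_domain_endpoint,
        c.root_domain_https_no_redirect,
        c.well_known_resource_resolvable,
        c.well_known_signed_by_same_did,
    ))
```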
## Unverified domain
-If any of the above are not true, the Microsoft Authenticator will display a full page warning to the user that the domain is unverified, the user is in the middle of a risky transaction and they should proceed with caution. We have chosen to take this route because:
+If any of the above aren't true, Microsoft Authenticator displays a full-page warning telling the user that the domain is unverified, that they are in the middle of a potentially risky transaction, and that they should proceed with caution. We have chosen to take this route because:
* The DID is either not anchored to a domain.
-* The configuration was not set up properly.
-* The DID the user is interacting with is malicious and actually can't prove they own a domain (since they actually don't). Due to this last point, it is imperative that you link your DID to the domain the user is familiar with, to avoid propagating the warning message.
+* The configuration wasn't set up properly.
+* The DID that the user is interacting with could be malicious and can't actually prove ownership of the linked domain.
+
+It's important that you link your DID to a domain the user recognizes.
![unverified domain warning in the add credential screen](media/how-to-dnsbind/add-credential-not-verified-authenticated.png)
-## Distribute well-known config
+## How do you update the linked domain on your DID?
+
+1. Navigate to the Verifiable Credentials | Getting Started page.
+1. On the left side of the page select **Domain**.
+1. In the Domain box, enter your new domain name.
+1. Select **Publish**.
++
+It might take up to two hours for your DID document to be updated in the [ION network](https://identity.foundation/ion) with the new domain information. No other changes to the domain are possible before the changes are published.
+
+>[!NOTE]
+>If your changes are successful, you need to [verify](#verified-domain) your newly added domain.
-1. Navigate to the Settings page in Verifiable Credentials and choose **Verify this domain**
- ![Verify this domain in settings](media/how-to-dnsbind/settings-verify.png)
+
+### Do I need to wait for my DID Doc to be updated to verify my newly added domains?
+
+Yes. You need to wait until the did-configuration.json file is updated before you publish it to your domain's hosting location.
+
+### How do I know when the linked domain update has successfully completed?
+
+Once the domain changes are published to ION, the domain section in the Azure AD Verifiable Credentials service displays `Published` as the status, and you can make new changes to the domain.
+
+>[!IMPORTANT]
+> No changes to your domain are possible while publishing is in progress.
+
+## Distribute well-known config
+
+1. From the Azure portal, navigate to the Verifiable Credentials page. Select **Domain** and choose **Verify this domain**.
2. Download the did-configuration.json file shown in the image below.
Congratulations, you now have bootstrapped the web of trust with your DID!
## Next steps
-If during onboarding you enter the wrong domain information or if you decide to change it, you will need to [opt out](how-to-opt-out.md). At this time, we don't support updating your DID document. Opting out and opting back in will create a brand new DID.
+- [How to customize your Azure Active Directory Verifiable Credentials](credential-design.md)
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/whats-new.md
Previously updated : 02/11/2022 Last updated : 02/22/2022
This article lists the latest features, improvements, and changes in the Azure Active Directory (Azure AD) Verifiable Credentials service.
+## March 2022
+- Azure AD Verifiable Credentials customers can now change the [domain linked](how-to-dnsbind.md) to their DID easily from the Azure portal.
+ ## February 2022
-We are rolling out some important updates to our service that are breaking changes and require Azure AD Verifiable Credentials service reconfiguration and re-issuance of verifiable credentials for end-users.
+We are rolling out some breaking changes to our service. These updates require Azure AD Verifiable Credentials service reconfiguration. End-users need to have their verifiable credentials reissued.
- The Azure AD Verifiable Credentials service can now store and handle data processing in the Azure European region. [More information](whats-new.md?#azure-ad-verifiable-credentials-available-in-europe)
-- Azure AD Verifiable Credentials customers can take advantage of enhancements to credential revocation that add a higher degree of privacy through the implementation of the [W3C Status List 2021](https://w3c-ccg.github.io/vc-status-list-2021/) standard. [More information](whats-new.md?#credential-revocation-with-enhanced-privacy)
-- We made protocol updates in Microsoft Authenticator in the interaction between the Issuer of a verifiable credential and the user preenting the verifiable credential. This update will force all Verifiable Crecentials to be re-issued in Microsoft Authenticator for Android. [More information](whats-new.md?#microsoft-authenticator-android-did-generation-update)
+- Azure AD Verifiable Credentials customers can take advantage of enhancements to credential revocation. These changes add a higher degree of privacy through the implementation of the [W3C Status List 2021](https://w3c-ccg.github.io/vc-status-list-2021/) standard. [More information](whats-new.md?#credential-revocation-with-enhanced-privacy)
+- We made updates to Microsoft Authenticator that change the interaction between the Issuer of a verifiable credential and the user presenting the verifiable credential. This update forces all Verifiable Credentials to be reissued in Microsoft Authenticator for Android. [More information](whats-new.md?#microsoft-authenticator-android-did-generation-update)
>[!IMPORTANT]
> All Azure AD Verifiable Credential customers receiving a banner notice in the Azure portal need to go through a service reconfiguration before March 31st 2022. On March 31st 2022, tenants that have not been reconfigured will lose access to any previous configuration. Administrators will have to set up a new instance of the Azure AD Verifiable Credential service. Learn more about how to [reconfigure your tenant](verifiable-credentials-faq.md?#how-do-i-reconfigure-the-azure-ad-verifiable-credentials-service).
We are rolling out some important updates to our service that are breaking chang
Since the beginning of the Azure AD Verifiable Credentials service public preview, the service has only been available in our Azure North America region. Now, the service is also available in our Azure Europe region.

- New customers with Azure AD European tenants now have their Verifiable Credentials data located and processed in our Azure Europe region.
-- Customers with Azure AD tenants setup in Europe who start using the Azure AD Verifiable Credentials service after February 15, 2022, have their data automatically processed in Europe and don't need to take any further actions.
+- Customers with Azure AD tenants set up in Europe who start using the Azure AD Verifiable Credentials service after February 15, 2022, have their data automatically processed in Europe. There's no need to take any further actions.
- Customers with Azure AD tenants set up in Europe that started using the Azure AD Verifiable Credentials service before February 15, 2022, are required to reconfigure the service on their tenants before March 31, 2022. Take the following steps to configure the Verifiable Credentials service in Europe:
Sample contract file:
} ```
-3. You have to issue new verifiable credentials using your new configuration. All verifiable credentials previously issued will continue to exist as your previous DID will remain resolvable however, they use the previous status endpoint implementation.
+3. You have to issue new verifiable credentials using your new configuration. All verifiable credentials previously issued continue to exist. Your previous DID remains resolvable; however, previously issued credentials use the previous status endpoint implementation.
>[!IMPORTANT]
> You have to reconfigure your Azure AD Verifiable Credential service instance to create your new Identity hub endpoint. You have until March 31st 2022, to schedule and manage the reconfiguration of your deployment. On March 31st, 2022 deployments that have not been reconfigured will lose access to any previous Azure AD Verifiable Credentials service configuration. Administrators will need to set up a new service instance.

### Microsoft Authenticator Android DID Generation Update
-We are making protocol updates in Microsoft Authenticator to support Single Long Form DID, thus deprecating the use of pairwise. With this update your DID in Microsoft Authenticator will be used of every issuer and relaying party exchange. Holders of verifiable crendentials in Microsoft Authenticator for Android must get their verifiable credentials re-issued as any previous credentials are not going to continue working.
+We are making protocol updates in Microsoft Authenticator to support Single Long Form DID, thus deprecating the use of pairwise. With this update, your DID in Microsoft Authenticator will be used for every issuer and relying party exchange. Holders of verifiable credentials using Microsoft Authenticator for Android must get their verifiable credentials reissued because any previous credentials will stop working.
## December 2021

- We added [Postman collections](https://github.com/Azure-Samples/active-directory-verifiable-credentials/tree/main/Postman) to our samples as a quick start for using the Request Service REST API.
- New sample added that demonstrates the integration of [Azure AD Verifiable Credentials with Azure AD B2C](https://github.com/Azure-Samples/active-directory-verifiable-credentials/tree/main/B2C).
-- Fastrack setup sample for setting up the Azure AD Verifiable Credentials services using [PowerShell and an ARM template](https://github.com/Azure-Samples/active-directory-verifiable-credentials/tree/main/ARM).
+- Sample for setting up the Azure AD Verifiable Credentials services using [PowerShell and an ARM template](https://github.com/Azure-Samples/active-directory-verifiable-credentials/tree/main/ARM).
- Sample Verifiable Credential configuration files to show sample cards for [IDToken](https://github.com/Azure-Samples/active-directory-verifiable-credentials/tree/main/CredentialFiles/IDToken), [IDTokenHint](https://github.com/Azure-Samples/active-directory-verifiable-credentials/tree/main/CredentialFiles/IDTokenHint) and [Self-attested](https://github.com/Azure-Samples/active-directory-verifiable-credentials/tree/main/CredentialFiles/IDTokenHint) claims.

## November 2021
Callback types enforcing rules so that URL endpoints for callbacks are reachable
## October 2021
-You can now use [Request Service REST API](get-started-request-api.md) to build applications that can issue and verify credentials from any programming language you're using. This new REST API provides an improved abstraction layer and integration to the Azure AD Verifiable Credentials Service.
+You can now use [Request Service REST API](get-started-request-api.md) to build applications that can issue and verify credentials from any programming language. This new REST API provides an improved abstraction layer and integration to the Azure AD Verifiable Credentials Service.
It's a good idea to start using the API soon, because the NodeJS SDK will be deprecated in the following months. Documentation and samples now use the Request Service REST API. For more information, see [Request Service REST API (preview)](get-started-request-api.md).

## April 2021
-You can now issue [verifiable credentials](decentralized-identifier-overview.md) in Azure AD. This service is useful when you need to represent proof of employment, education, or any other claim, so that the holder of such a credential can decide when, and with whom, to share their credentials. Each credential is signed by using cryptographic keys associated with the decentralized identity that the user owns and controls.
+You can now issue [verifiable credentials](decentralized-identifier-overview.md) in Azure AD. This service is useful when you need to present proof of employment, education, or any other claim. The holder of such a credential can decide when, and with whom, to share their credentials. Each credential is signed by using cryptographic keys associated with the decentralized identity that the user owns and controls.
api-management How To Configure Cloud Metrics Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-configure-cloud-metrics-logs.md
The self-hosted gateway currently does not send [diagnostic logs](./api-manageme
If a gateway is deployed in [Azure Kubernetes Service](https://azure.microsoft.com/services/kubernetes-service/), you can enable [Azure Monitor for containers](../azure-monitor/containers/container-insights-overview.md) to collect logs from your containers and view them in Log Analytics.

## Next steps
+* Learn more about the [observability capabilities of the Azure API Management gateways](observability.md).
* To learn more about the self-hosted gateway, see [Azure API Management self-hosted gateway overview](self-hosted-gateway-overview.md)
-* Learn about [configuring and persisting logs locally](how-to-configure-local-metrics-logs.md)
+* Learn about [configuring and persisting logs locally](how-to-configure-local-metrics-logs.md)
api-management How To Configure Local Metrics Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-configure-local-metrics-logs.md
Here is a sample configuration of local logging:
## Next steps
+* Learn more about the [observability capabilities of the Azure API Management gateways](observability.md).
* To learn more about the self-hosted gateway, see [Azure API Management self-hosted gateway overview](self-hosted-gateway-overview.md)
-* Learn about [configuring and persisting logs in the cloud](how-to-configure-local-metrics-logs.md)
+* Learn about [configuring and persisting logs in the cloud](how-to-configure-cloud-metrics-logs.md)
api-management How To Deploy Self Hosted Gateway Azure Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-azure-arc.md
To enable monitoring of the self-hosted gateway, configure the following Log Ana
## Next steps

* To learn more about the self-hosted gateway, see [Azure API Management self-hosted gateway overview](self-hosted-gateway-overview.md).
+* Learn more about the [observability capabilities of the Azure API Management gateways](observability.md).
* Discover all [Azure Arc-enabled Kubernetes extensions](../azure-arc/kubernetes/extensions.md). * Learn more about [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md). * Learn more about guidance to [run the self-hosted gateway on Kubernetes in production](how-to-self-hosted-gateway-on-kubernetes-in-production.md).
api-management How To Deploy Self Hosted Gateway Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-azure-kubernetes-service.md
This article provides the steps for deploying self-hosted gateway component of A
* To learn more about the self-hosted gateway, see [Azure API Management self-hosted gateway overview](self-hosted-gateway-overview.md). * Learn [how to deploy API Management self-hosted gateway to Azure Arc-enabled Kubernetes clusters](how-to-deploy-self-hosted-gateway-azure-arc.md).
+* Learn more about the [observability capabilities of the Azure API Management gateways](observability.md).
* Learn more about guidance to [run the self-hosted gateway on Kubernetes in production](how-to-self-hosted-gateway-on-kubernetes-in-production.md). * Learn more about [Azure Kubernetes Service](../aks/intro-kubernetes.md).
-* Learn [how to configure and persist logs in the cloud](how-to-configure-cloud-metrics-logs.md).
-* Learn [how to configure and persist logs locally](how-to-configure-local-metrics-logs.md).
api-management How To Deploy Self Hosted Gateway Kubernetes Helm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-kubernetes-helm.md
This article provides the steps for deploying self-hosted gateway component of A
* Learn more about the self-hosted gateway, see [Azure API Management self-hosted gateway overview](self-hosted-gateway-overview.md). * Learn more about guidance for [running the self-hosted gateway on Kubernetes in production](how-to-self-hosted-gateway-on-kubernetes-in-production.md). * Learn [how to deploy API Management self-hosted gateway to Azure Arc-enabled Kubernetes clusters](how-to-deploy-self-hosted-gateway-azure-arc.md).
-* Learn [how to configure and persist logs in the cloud](how-to-configure-cloud-metrics-logs.md).
-* Learn [how to configure and persist logs locally](how-to-configure-local-metrics-logs.md).
+* Learn more about the [observability capabilities of the Azure API Management gateways](observability.md).
[helm]: https://helm.sh/ [helm-install]: https://helm.sh/docs/intro/install/
api-management How To Deploy Self Hosted Gateway Kubernetes Opentelemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-kubernetes-opentelemetry.md
+
+ Title: Deploy self-hosted gateway to Kubernetes with OpenTelemetry integration
+description: Learn how to deploy a self-hosted gateway component of Azure API Management on Kubernetes with OpenTelemetry
+Last updated : 12/17/2021
+# Deploy self-hosted gateway to Kubernetes with OpenTelemetry integration
+
+This article describes the steps for deploying the self-hosted gateway component of Azure API Management to a Kubernetes cluster and automatically sending all metrics to an [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/).
++
+You learn how to:
+
+> [!div class="checklist"]
+> * Configure and deploy a standalone OpenTelemetry Collector on Kubernetes.
+> * Deploy the self-hosted gateway with OpenTelemetry metrics.
+> * Generate metrics by consuming APIs on the self-hosted gateway.
+> * Use the metrics from the OpenTelemetry Collector.
+
+## Prerequisites
+
+- [Create an Azure API Management instance](get-started-create-service-instance.md)
+- [Create an Azure Kubernetes cluster](../aks/kubernetes-walkthrough-portal.md)
+- [Provision a self-hosted gateway resource in your API Management instance](api-management-howto-provision-self-hosted-gateway.md).
++
+## Introduction to OpenTelemetry
+
+[OpenTelemetry](https://opentelemetry.io/) is a set of open-source tools and frameworks for logging, metrics, and tracing in a vendor-neutral way.
++
+The self-hosted gateway can be configured to automatically collect and send metrics to an [OpenTelemetry Collector](https://opentelemetry.io/docs/concepts/components/#collector). This allows you to bring your own metrics collection and reporting solution for the self-hosted gateway.
+
+> [!NOTE]
+> OpenTelemetry is an incubating project of the [Cloud Native Computing Foundation (CNCF) ecosystem](https://www.cncf.io/).
+
+### Metrics
+
+The self-hosted gateway will automatically start measuring the following metrics:
+
+- Requests
+- DurationInMs
+- BackendDurationInMs
+- ClientDurationInMs
+- GatewayDurationInMs
+
+They are automatically exported to the configured OpenTelemetry Collector every minute, along with additional dimensions.
+
+## Deploy the OpenTelemetry Collector
+
+We will start by deploying a standalone OpenTelemetry Collector on Kubernetes by using Helm.
+
+> [!TIP]
+> While we will be using the Collector Helm chart, the OpenTelemetry project also provides an [OpenTelemetry Collector Operator](https://github.com/open-telemetry/opentelemetry-helm-charts/tree/main/charts/opentelemetry-operator).
+
+To start with, we have to add the Helm chart repository:
+1. Add the Helm repository
+
+ ```console
+ helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
+ ```
+
+2. Update repo to fetch the latest Helm charts.
+
+ ```console
+ helm repo update
+ ```
+
+3. Verify your Helm configuration by listing all available charts.
+
+ ```console
+ $ helm search repo open-telemetry
+ NAME CHART VERSION APP VERSION DESCRIPTION
+ open-telemetry/opentelemetry-collector 0.8.1 0.37.1 OpenTelemetry Collector Helm chart for Kubernetes
+ open-telemetry/opentelemetry-operator 0.4.0 0.37.0 OpenTelemetry Operator Helm chart for Kubernetes
+ ```
+
+Now that we have the chart repository configured, we can deploy the OpenTelemetry Collector to our cluster:
+
+1. Create a local configuration file called `opentelemetry-collector-config.yml` with the following configuration:
+
+ ```yaml
+ agentCollector:
+ enabled: false
+ standaloneCollector:
+ enabled: true
+ config:
+ exporters:
+ prometheus:
+ endpoint: "0.0.0.0:8889"
+ namespace: azure_apim
+ send_timestamps: true
+ service:
+ pipelines:
+ metrics:
+ exporters:
+ - prometheus
+ service:
+ type: LoadBalancer
+ ports:
+ jaeger-compact:
+ enabled: false
+ prom-exporter:
+ enabled: true
+ containerPort: 8889
+ servicePort: 8889
+ protocol: TCP
+ ```
+
+This allows us to use a standalone collector with the Prometheus exporter exposed on port `8889`. To expose the Prometheus metrics, we are asking the Helm chart to configure a `LoadBalancer` service.
+
+> [!NOTE]
+> We are disabling the compact Jaeger port because it uses UDP, and a `LoadBalancer` service doesn't allow multiple protocols at the same time.
+
+2. Install the Helm chart with our configuration:
+
+ ```console
+ helm install opentelemetry-collector open-telemetry/opentelemetry-collector --values .\opentelemetry-collector-config.yml
+ ```
+
+3. Verify the installation by getting all the resources for our Helm chart
+
+ ```console
+ $ kubectl get all -l app.kubernetes.io/instance=opentelemetry-collector
+ NAME READY STATUS RESTARTS AGE
+ pod/opentelemetry-collector-58477c8c89-dstwd 1/1 Running 0 27m
+
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ service/opentelemetry-collector LoadBalancer 10.0.175.135 20.103.18.53 14250:30982/TCP,14268:32461/TCP,4317:31539/TCP,4318:31581/TCP,8889:32420/TCP,9411:30003/TCP 27m
+
+ NAME READY UP-TO-DATE AVAILABLE AGE
+ deployment.apps/opentelemetry-collector 1/1 1 1 27m
+
+ NAME DESIRED CURRENT READY AGE
+ replicaset.apps/opentelemetry-collector-58477c8c89 1 1 1 27m
+ ```
+
+4. Take note of the external IP of the service, so we can query it later on.
+
+With our OpenTelemetry Collector installed, we can now deploy the self-hosted gateway to our cluster.
+
+## Deploy the self-hosted gateway
+
+> [!IMPORTANT]
+> For a detailed overview on how to deploy the self-hosted gateway with Helm and how to get the required configuration, we recommend reading [this article](how-to-deploy-self-hosted-gateway-kubernetes-helm.md).
+
+In this section, we will deploy the self-hosted gateway to our cluster with Helm and configure it to send OpenTelemetry metrics to the OpenTelemetry Collector.
+
+1. Install the Helm chart and configure it to use OpenTelemetry metrics:
+
+ ```console
+    helm install apim-gateway \
+ --set gateway.configuration.uri='<your configuration url>' \
+ --set gateway.auth.key='<your auth token>' \
+ --set observability.opentelemetry.enabled=true \
+ --set observability.opentelemetry.collector.uri=http://opentelemetry-collector:4317 \
+ --set service.type=LoadBalancer \
+ azure-apim-gateway/azure-api-management-gateway
+ ```
+
+> [!NOTE]
+> `opentelemetry-collector` in the command above is the name of the OpenTelemetry Collector. Update the name if your service has a different name.
+
+2. Verify the installation by getting all the resources for our Helm chart
+
+ ```console
+ $ kubectl get all -l app.kubernetes.io/instance=apim-gateway
+ NAME READY STATUS RESTARTS AGE
+ pod/apim-gateway-azure-api-management-gateway-fb77c6d49-rffwq 1/1 Running 0 63m
+
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ service/apim-gateway-azure-api-management-gateway LoadBalancer 10.0.67.177 20.71.82.110 8080:32267/TCP,8081:32065/TCP 63m
+
+ NAME READY UP-TO-DATE AVAILABLE AGE
+ deployment.apps/apim-gateway-azure-api-management-gateway 1/1 1 1 63m
+
+ NAME DESIRED CURRENT READY AGE
+ replicaset.apps/apim-gateway-azure-api-management-gateway-fb77c6d49 1 1 1 63m
+ ```
+
+3. Take note of the external IP of the self-hosted gateway's service, so we can query it later on.
+
+## Generate and consume the OpenTelemetry metrics
+
+Now that both our OpenTelemetry Collector and the self-hosted gateway are deployed, we can start consuming the APIs to generate metrics.
+
+> [!NOTE]
+> We will be consuming the default "Echo API" for this walkthrough.
+>
+> Make sure that it is configured to:
+> - Allow HTTP requests
+> - Allow your self-hosted gateway to expose it
+
+1. Query the Echo API in the self-hosted gateway:
+
+ ```console
+ $ curl -i "http://<self-hosted-gateway-ip>:8080/echo/resource?param1=sample&subscription-key=abcdef0123456789"
+ HTTP/1.1 200 OK
+ Date: Mon, 20 Dec 2021 12:58:09 GMT
+ Server: Microsoft-IIS/8.5
+ Content-Length: 0
+ Cache-Control: no-cache
+ Pragma: no-cache
+ Expires: -1
+ Accept: */*
+ Host: echoapi.cloudapp.net
+ User-Agent: curl/7.68.0
+ X-Forwarded-For: 10.244.1.1
+ traceparent: 00-3192030c89fd7a60ef4c9749d6bdef0c-f4eeeee46f770061-01
+ Request-Id: |3192030c89fd7a60ef4c9749d6bdef0c.f4eeeee46f770061.
+ Request-Context: appId=cid-v1:c24f5e00-aa25-47f2-bbb5-035847e7f52a
+ X-Powered-By: Azure API Management - http://api.azure.com/,ASP.NET
+ X-AspNet-Version: 4.0.30319
+ ```
+
+The self-hosted gateway will now measure the request and send the metrics to the OpenTelemetry Collector.
+
+2. Query Prometheus endpoint on collector on `http://<collector-service-ip>:8889/metrics`. You should see metrics similar to the following:
+
+ ```raw
+ # HELP azure_apim_BackendDurationInMs
+ # TYPE azure_apim_BackendDurationInMs histogram
+ azure_apim_BackendDurationInMs_bucket{Hostname="20.71.82.110",le="5"} 0 1640093731340
+ [...]
+ azure_apim_BackendDurationInMs_count{Hostname="20.71.82.110"} 22 1640093731340
+ # HELP azure_apim_ClientDurationInMs
+ # TYPE azure_apim_ClientDurationInMs histogram
+ azure_apim_ClientDurationInMs_bucket{Hostname="20.71.82.110",le="5"} 22 1640093731340
+ [...]
+ azure_apim_ClientDurationInMs_count{Hostname="20.71.82.110"} 22 1640093731340
+ # HELP azure_apim_DurationInMs
+ # TYPE azure_apim_DurationInMs histogram
+ azure_apim_DurationInMs_bucket{Hostname="20.71.82.110",le="5"} 0 1640093731340
+ [...]
+ azure_apim_DurationInMs_count{Hostname="20.71.82.110"} 22 1640093731340
+ # HELP azure_apim_GatewayDurationInMs
+ # TYPE azure_apim_GatewayDurationInMs histogram
+ azure_apim_GatewayDurationInMs_bucket{Hostname="20.71.82.110",le="5"} 0 1640093731340
+ [...]
+ azure_apim_GatewayDurationInMs_count{Hostname="20.71.82.110"} 22 1640093731340
+ # HELP azure_apim_Requests
+ # TYPE azure_apim_Requests counter
+ azure_apim_Requests{BackendResponseCode="200",BackendResponseCodeCategory="2xx",Cache="None",GatewayId="Docs",Hostname="20.71.82.110",LastErrorReason="None",Location="GitHub",ResponseCode="200",ResponseCodeCategory="2xx",Status="Successful"} 22 1640093731340
+ ```
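The Prometheus exposition format shown above is plain text, so you can scrape a value ad hoc without the official client libraries. A rough sketch (it assumes label values contain no commas, which holds for the gateway metrics shown here):

```python
import re

# One sample per exposition line: name{labels} value [timestamp]
SAMPLE_RE = re.compile(
    r'^(?P<name>[a-zA-Z_:][a-zA-Z0-9_:]*)'
    r'(?:\{(?P<labels>[^}]*)\})?'
    r'\s+(?P<value>\S+)(?:\s+(?P<ts>\d+))?$'
)

def parse_sample(line: str):
    """Parse one exposition line into (metric name, label dict, value)."""
    m = SAMPLE_RE.match(line.strip())
    if m is None:
        return None
    labels = {}
    if m.group("labels"):
        for pair in m.group("labels").split(","):  # assumes no commas in values
            key, raw = pair.split("=", 1)
            labels[key] = raw.strip('"')
    return m.group("name"), labels, float(m.group("value"))

sample = 'azure_apim_Requests{ResponseCode="200",Status="Successful"} 22 1640093731340'
name, labels, value = parse_sample(sample)
```

For anything beyond a quick check, prefer the official Prometheus client libraries or a real scrape target in your Prometheus server.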
+
+## Cleaning up
+
+Now that the tutorial is over, you can easily clean up your cluster as follows:
+
+1. Uninstall the self-hosted gateway Helm chart:
+
+ ```console
+ helm uninstall apim-gateway
+ ```
+
+2. Uninstall the OpenTelemetry Collector:
+
+ ```console
+ helm uninstall opentelemetry-collector
+ ```
+
+## Next steps
+
+- To learn more about the self-hosted gateway, see [Self-hosted gateway overview](self-hosted-gateway-overview.md).
+- Learn more about the [observability capabilities of the Azure API Management gateways](observability.md).
+- [Deploy self-hosted gateway to Azure Arc-enabled Kubernetes cluster](how-to-deploy-self-hosted-gateway-azure-arc.md)
api-management Observability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/observability.md
Observability is the ability to understand the internal state of a system from the data it produces and the ability to explore that data to answer questions about what happened and why.
-Azure API Management helps organizations centralize the management of all APIs. Since it serves as a single point of entry of all API traffic, it is an ideal place to observe the APIs.
+Azure API Management helps organizations centralize the management of all APIs. Since it serves as a single point of entry of all API traffic, it is an ideal place to observe the APIs.
-## Observability Tools
+## Overview
-The table below summarizes all the tools supported by API Management to observe APIs, each is useful for one or more scenarios:
+Azure API Management allows you to choose between the managed gateway and the [self-hosted gateway](self-hosted-gateway-overview.md), which you can deploy yourself or through an [Azure Arc extension](how-to-deploy-self-hosted-gateway-azure-arc.md).
-| Tool | Useful for | Data lag | Retention | Sampling | Data kind | Enabled|
-|:- |:-|:- |:-|:- |: |:-
-| **[API Inspector](api-management-howto-api-inspector.md)** | Testing and debugging | Instant | Last 100 traces | Turned on per request | Request traces | Always
-| **Built-in Analytics** | Reporting and monitoring | Minutes | Lifetime | 100% | Reports and logs | Always |
-| **[Azure Monitor Metrics](api-management-howto-use-azure-monitor.md)** | Reporting and monitoring | Minutes | 93 days (upgrade to extend) | 100% | Metrics | Always |
-| **[Azure Monitor Logs](api-management-howto-use-azure-monitor.md)** | Reporting, monitoring, and debugging | Minutes | 31 days/5GB (upgrade to extend) | 100% (adjustable) | Logs | Optional |
-| **[Azure Application Insights](api-management-howto-app-insights.md)** | Reporting, monitoring, and debugging | Seconds | 90 days/5GB (upgrade to extend) | Custom | Logs, metrics | Optional |
-| **[Logging through Azure Event Hub](api-management-howto-log-event-hubs.md)** | Custom scenarios | Seconds | User managed | Custom | Custom | Optional |
+The table below summarizes the observability capabilities that API Management supports for operating APIs, and the deployment models each capability supports.
-### Self-hosted gateway
+| Tool | Useful for | Data lag | Retention | Sampling | Data kind | Supported Deployment Model(s) |
+|:- |:-|:- |:-|:- |: |:- |
+| **[API Inspector](api-management-howto-api-inspector.md)** | Testing and debugging | Instant | Last 100 traces | Turned on per request | Request traces | Managed, Self-hosted, Azure Arc |
+| **Built-in Analytics** | Reporting and monitoring | Minutes | Lifetime | 100% | Reports and logs | Managed |
+| **[Azure Monitor Metrics](api-management-howto-use-azure-monitor.md)** | Reporting and monitoring | Minutes | 90 days (upgrade to extend) | 100% | Metrics | Managed, Self-hosted<sup>2</sup>, Azure Arc |
+| **[Azure Monitor Logs](api-management-howto-use-azure-monitor.md)** | Reporting, monitoring, and debugging | Minutes | 31 days/5GB (upgrade to extend) | 100% (adjustable) | Logs | Managed<sup>1</sup>, Self-hosted<sup>3</sup>, Azure Arc<sup>3</sup> |
+| **[Azure Application Insights](api-management-howto-app-insights.md)** | Reporting, monitoring, and debugging | Seconds | 90 days/5GB (upgrade to extend) | Custom | Logs, metrics | Managed<sup>1</sup>, Self-hosted<sup>1</sup>, Azure Arc<sup>1</sup> |
+| **[Logging through Azure Event Hub](api-management-howto-log-event-hubs.md)** | Custom scenarios | Seconds | User managed | Custom | Custom | Managed<sup>1</sup>, Self-hosted<sup>1</sup>, Azure Arc<sup>1</sup> |
+| **[OpenTelemetry](how-to-deploy-self-hosted-gateway-kubernetes-opentelemetry.md#introduction-to-opentelemetry)** | Monitoring | Minutes | User managed | 100% | Metrics | Self-hosted<sup>2</sup> |
-All the tools mentioned above are supported by the managed gateway in the cloud. The [self-hosted gateway](self-hosted-gateway-overview.md) currently does not send diagnostic logs to Azure Monitor. However, it is possible to configure and persist logs locally where the self-hosted gateway is deployed. For more information, please see [configuring cloud metrics and logs for self-hosted gateway](how-to-configure-cloud-metrics-logs.md) and [configuring local metrics and logs for self-hosted gateway](how-to-configure-local-metrics-logs.md).
+*1. Optional, depending on the configuration of the feature in Azure API Management*
+
+*2. Optional, depending on the configuration of the gateway*
+
+*3. The [self-hosted gateway](self-hosted-gateway-overview.md) currently does not send diagnostic logs to Azure Monitor. However, it is possible to configure and persist logs locally where the self-hosted gateway is deployed. For more information, please see [configuring local metrics and logs for self-hosted gateway](how-to-configure-local-metrics-logs.md)*
## Next Steps * [Follow the tutorials to learn more about API Management](import-and-publish.md)
+- To learn more about the self-hosted gateway, see [Self-hosted gateway overview](self-hosted-gateway-overview.md).
app-service Troubleshoot Domain Ssl Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/troubleshoot-domain-ssl-certificates.md
This article lists common problems that you might encounter when you configure a domain or TLS/SSL certificate for your web apps in Azure App Service. It also describes possible causes and solutions for these problems.
-If you need more help at any point in this article, you can contact the Azure experts on [the MSDN and Stack Overflow forums](https://azure.microsoft.com/support/forums/). Alternatively, you can file an Azure support incident. Go to the [Azure Support site](https://azure.microsoft.com/support/options/) and select **Get Support**.
+If you need more help at any point in this article, you can contact the Azure experts on [Microsoft Q & A and Stack Overflow forums](https://azure.microsoft.com/support/forums/). Alternatively, you can file an Azure support incident. Go to the [Azure Support site](https://azure.microsoft.com/support/options/) and select **Get Support**.
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
application-gateway Application Gateway Key Vault Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-key-vault-common-errors.md
# Common key vault errors in Azure Application Gateway
+Application Gateway enables customers to securely store TLS certificates in Azure Key Vault. When using a Key Vault resource, it is important that the gateway always has access to the linked key vault. If your Application Gateway is unable to fetch the certificate, the associated HTTPS listeners will be placed in a disabled state. [Learn more](../application-gateway/disabled-listeners.md).
+ This article helps you understand the details of key vault error codes you might encounter, including what is causing these errors. This article also contains steps to resolve such misconfigurations. > [!TIP]
application-gateway Configuration Listeners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-listeners.md
Choose HTTP or HTTPS:
To configure TLS termination, a TLS/SSL certificate must be added to the listener. This allows the Application Gateway to decrypt incoming traffic and encrypt response traffic to the client. The certificate provided to the Application Gateway must be in Personal Information Exchange (PFX) format, which contains both the private and public keys.
+> [!NOTE]
+> When using a TLS certificate from Key Vault for a listener, you must ensure your Application Gateway always has access to that linked key vault resource and the certificate object within it. This enables seamless operation of the TLS termination feature and maintains the overall health of your gateway resource. If an application gateway resource detects a misconfigured key vault, it automatically puts the associated HTTPS listener(s) in a disabled state. [Learn more](../application-gateway/disabled-listeners.md).
+ ## Supported certificates See [Overview of TLS termination and end to end TLS with Application Gateway](ssl-overview.md#certificates-supported-for-tls-termination)
application-gateway Disabled Listeners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/disabled-listeners.md
+
+ Title: Identifying and fixing a disabled listener
+
+description: The article explains the details of a disabled listener and ways to resolve the problem.
+++ Last updated : 02/22/2022++++
+# Identifying and fixing a disabled listener on your gateway
+
+The SSL/TLS certificates for Azure Application Gateway's listeners can be referenced from a customer's Key Vault resource. Your application gateway must always have access to the linked key vault resource and its certificate object to ensure smooth operation of the TLS termination feature and the overall health of the gateway resource.
+
+It is important to consider any impact on your Application Gateway resource when making changes or revoking access to your Key Vault resource. If your application gateway is unable to access the associated key vault or locate its certificate object, it automatically puts that listener in a disabled state. This action is triggered only by configuration errors; transient connectivity problems have no impact on the listeners.
+
+A disabled listener doesn't affect the traffic for other operational listeners on your Application Gateway. For example, HTTP listeners, or HTTPS listeners for which the PFX certificate file is directly uploaded to the Application Gateway resource, never go into a disabled state.
+
+[![An illustration showing affected listeners.](../application-gateway/media/disabled-listeners/affected-listener.png)](../application-gateway/media/disabled-listeners/affected-listener.png#lightbox)
+
+## Periodic check and its impact on listeners
+
+Understanding the behavior of the Application Gateway's periodic check and its potential impact on the state of a key vault-based listener can help you preempt such occurrences or resolve them much faster.
+
+### How does the periodic check work?
+1. Application Gateway instances periodically poll the key vault resource to obtain a new certificate version.
+1. During this activity, if the instances instead detect broken access to the key vault resource or a missing certificate object, the listener(s) associated with that key vault go into a disabled state. The instances are updated with this disabled status of the listener(s) within 60 seconds to provide consistent data plane behavior.
+1. After the issue is resolved by the customer, the same four-hour periodic poll verifies access to the key vault certificate object and automatically re-enables the listeners on all instances of that gateway.
+
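The periodic-check behavior described above can be sketched as a small state machine. This is an illustrative model only; the names, outcome labels, and logic are assumptions for clarity, not the gateway's actual implementation:

```python
from enum import Enum

class ListenerState(Enum):
    ENABLED = "enabled"
    DISABLED = "disabled"

def next_state(current: ListenerState, poll_result: str) -> ListenerState:
    """Apply one periodic poll outcome to a key vault-based listener.

    poll_result is one of (hypothetical labels):
      "ok"              - certificate fetched successfully
      "config_error"    - broken access or missing certificate object
      "transient_error" - temporary connectivity problem (no effect)
    """
    if poll_result == "config_error":
        return ListenerState.DISABLED   # configuration errors disable the listener
    if poll_result == "ok":
        return ListenerState.ENABLED    # a successful poll re-enables it
    return current                      # transient problems don't change state

# A resolved misconfiguration is picked up by a later poll:
states = [ListenerState.ENABLED]
for result in ["config_error", "transient_error", "ok"]:
    states.append(next_state(states[-1], result))
```

Note how a transient error leaves the listener in whatever state the last definitive poll produced, matching the article's statement that only configuration errors trigger the disabled state.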
+## Ways to identify a disabled listener
+
+1. The clients will observe the error "ERR_SSL_UNRECOGNIZED_NAME_ALERT" if any request is made to a disabled listener of your Application Gateway.
+
+[ ![Screenshot of the client error.](../application-gateway/media/disabled-listeners/client-error.png) ](../application-gateway/media/disabled-listeners/client-error.png#lightbox)
+
+2. You can verify whether the error is a result of a disabled listener on your gateway by checking your [Application Gateway's Resource Health page](../application-gateway/resource-health-overview.md). You will see an event as shown below.
+
+![A screenshot of user-driven resource health.](../application-gateway/media/disabled-listeners/resource-health-event.png)
+
+## Resolving Key Vault configuration errors
+You can narrow down the exact cause and find steps to resolve the problem by viewing the Azure Advisor recommendation in your account.
+1. Sign in to the Azure portal.
+1. Select Advisor
+1. Select Operational Excellence category from the left menu.
+1. You will find a recommendation titled **Resolve Azure Key Vault issue for your Application Gateway**, if your gateway is experiencing this issue. Ensure the correct Subscription is selected from the drop-down options above.
+1. Select it to view the error details and the associated key vault resource along with the [troubleshooting guide](../application-gateway/application-gateway-key-vault-common-errors.md) to fix your exact issue.
+
+> [!NOTE]
+> The disabled listener(s) are automatically re-enabled if the Application Gateway resource detects that the underlying problem is resolved. This check occurs at four-hour intervals. You can expedite it by making any minor change to the Application Gateway (for example, to an HTTP setting or resource tags) that forces a check against the Key Vault.
+
+## Next steps
+[Troubleshooting key vault errors in Azure Application Gateway](../application-gateway/application-gateway-key-vault-common-errors.md)
application-gateway Key Vault Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/key-vault-certs.md
Under **Choose a certificate** select the certificate named in the previous step
## Investigating and resolving Key Vault errors
-Azure Application Gateway doesn't just poll for the renewed certificate version on Key Vault at every four-hour interval. It also logs any error and is integrated with Azure Advisor to surface any misconfiguration as a recommendation. The recommendation contains details about the problem and the associated Key Vault resource. You can use this information along with the [troubleshooting guide](../application-gateway/application-gateway-key-vault-common-errors.md) to quickly resolve such a configuration error.
-
-We strongly recommend that you [configure Advisor alerts](../advisor/advisor-alerts-portal.md) to stay updated when a problem is detected. To set an alert for this specific case, use **Resolve Azure Key Vault issue for your Application Gateway** as the recommendation type.
+> [!NOTE]
+> It is important to consider any impact on your Application Gateway resource when making changes or revoking access to your Key Vault resource. If your application gateway is unable to access the associated key vault or locate the certificate object in it, it automatically puts that listener in a disabled state.
+>
+> You can identify this user-driven event by viewing the Resource Health for your Application Gateway. [Learn more](../application-gateway/disabled-listeners.md).
+
+Azure Application Gateway doesn't just poll for the renewed certificate version on Key Vault at every four-hour interval. It also logs any error and is integrated with Azure Advisor to surface any misconfiguration with a recommendation for its fix.
+
+1. Sign in to the Azure portal.
+1. Select Advisor
+1. Select Operational Excellence category from the left menu.
+1. You will find a recommendation titled **Resolve Azure Key Vault issue for your Application Gateway**, if your gateway is experiencing this issue. Ensure the correct Subscription is selected from the drop-down options above.
+1. Select it to view the error details, the associated key vault resource and the [troubleshooting guide](../application-gateway/application-gateway-key-vault-common-errors.md) to fix your exact issue.
+
+By identifying such an event through Azure Advisor or Resource Health, you can quickly resolve any configuration problems with your Key Vault. We strongly recommend you take advantage of [Azure Advisor](../advisor/advisor-alerts-portal.md) and [Resource Health](../service-health/resource-health-alert-monitor-guide.md) alerts to stay informed when a problem is detected.
+
+For the Advisor alert, use "Resolve Azure Key Vault issue for your Application Gateway" as the recommendation type, as shown below.</br>
+![Diagram that shows steps for Advisor alert.](media/key-vault-certs/advisor-alert.png)
+
+You can configure the Resource health alert as illustrated below.</br>
+![Diagram that shows steps for Resource health alert.](media/key-vault-certs/resource-health-alert.png)
## Next steps
applied-ai-services Concept Accuracy Confidence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-accuracy-confidence.md
> [!NOTE] >
-> * **Custom models do not provide accuracy scores during training**.
+> * **Custom neural models do not provide accuracy scores during training**.
> * Confidence scores for structured fields such as tables are currently unavailable. Custom models generate an estimated accuracy score when trained. Documents analyzed with a custom model produce a confidence score for extracted fields. In this article, you'll learn to interpret accuracy and confidence scores and best practices for using those scores to improve accuracy and confidence results.
The accuracy value range is a percentage between 0% (low) and 100% (high). The e
Form Recognizer analysis results return an estimated confidence for predicted words, key-value pairs, selection marks, regions, and signatures. Currently, not all document fields return a confidence score.
-Confidence indicates an estimated probability between 0 and 1 that the prediction is correct. For example, a confidence value of 0.95 (95%) indicates that the prediction is likely correct 19 out of 20 times. For scenarios where accuracy is critical, confidence may be used to determine whether to automatically accept the prediction or flag it for human review.
+Field confidence indicates an estimated probability between 0 and 1 that the prediction is correct. For example, a confidence value of 0.95 (95%) indicates that the prediction is likely correct 19 out of 20 times. For scenarios where accuracy is critical, confidence may be used to determine whether to automatically accept the prediction or flag it for human review.
+
+Confidence scores comprise two components: the field-level confidence score and the text extraction confidence score. In addition to the field confidence of position and span, the text extraction confidence in the ```pages``` section of the response is the model's confidence in the text extraction (OCR) process. The two confidence scores should be combined to generate an overall confidence score.
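The combination step might look like the following minimal sketch. Multiplying the two scores treats them as independent probabilities, which is a simplifying assumption, not the service's documented formula; the review threshold is likewise illustrative:

```python
def overall_confidence(field_confidence: float, ocr_confidence: float) -> float:
    """Combine field-level and text extraction (OCR) confidence scores.

    Assumes independence, so the combined score is the product.
    """
    return field_confidence * ocr_confidence

def needs_review(field_confidence: float, ocr_confidence: float,
                 threshold: float = 0.9) -> bool:
    """Flag a prediction for human review when combined confidence is low."""
    return overall_confidence(field_confidence, ocr_confidence) < threshold
```

For example, a 0.95 field score paired with a 0.90 OCR score yields a 0.855 overall score, which a 0.9 threshold would route to human review even though each individual score looks acceptable.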
**Form Recognizer Studio** </br> **Analyzed invoice prebuilt-invoice model**
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md
Previously updated : 02/15/2022 Last updated : 02/28/2022 <!-- markdownlint-disable MD024 --> <!-- markdownlint-disable MD036 -->
+<!-- markdownlint-disable MD001 -->
# What's new in Azure Form Recognizer Form Recognizer service is updated on an ongoing basis. Bookmark this page to stay up to date with release notes, feature enhancements, and documentation updates.
Form Recognizer service is updated on an ongoing basis. Bookmark this page to st
* [🆕 **Custom neural model**](concept-custom-neural.md) or custom document model is a new custom model to extract text and selection marks from structured forms, semi-structured and **unstructured documents**. * [🆕 **W-2 prebuilt model**](concept-w2.md) is a new prebuilt model to extract fields from W-2 forms for tax reporting and income verification scenarios.
-* [🆕 **Read**](concept-read.md) API extracts printed text lines, words, text locations, detected languages, and handwritten text, if detected.
+* [🆕 **Read**](concept-read.md) API extracts printed text lines, words, text locations, detected languages, and handwritten text, if detected.
* [**General document**](concept-general-document.md) pre-trained model is now updated to support selection marks in addition to API text, tables, structure, key-value pairs, and named entities from forms and documents. * [**Invoice API**](language-support.md#invoice-model) Invoice prebuilt model expands support to Spanish invoices. * [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com) adds new demos for Read, W2, Hotel receipt samples, and support for training the new custom neural models.
Get started with the new [REST API](https://westus.dev.cognitive.microsoft.com/d
| Business card | ✓ | ✓ | | || | Custom |✓ | ✓ | ✓ | ✓ | ✓ |
+#### Form Recognizer SDK beta preview release
+
+The beta.3/beta.4 versions of the Azure Form Recognizer SDKs incorporate new features, minor feature updates, and bug fixes.
+
+>[!NOTE]
+> The beta.3 (C#, JavaScript, Python) and beta.4 (Java) previews contain the same updates and fixes but the versioning is no longer in sync across all programming languages.
+
+This new release includes the following updates:
+
+* 🆕 [Custom Document models and modes](concept-custom.md):
+ * [Custom template](concept-custom-template.md) (formerly custom form)
+ * [Custom neural](concept-custom-neural.md).
+ * [Custom model build mode](concept-custom.md#build-mode).
+
+* 🆕 [W-2 prebuilt model](concept-w2.md) (prebuilt-tax.us.w2).
+
+* 🆕 [Read prebuilt model](concept-read.md) (prebuilt-read).
+
+* 🆕 [Invoice prebuilt model (Spanish)](concept-invoice.md#supported-languages-and-locales) (prebuilt-invoice).
+
+### [**C#**](#tab/csharp)
+
+**Version 4.0.0-beta.3 (2022-02-10)**
+
+##### [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md#400-beta3-2022-02-10)
+
+##### [**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0-beta.3)
+
+##### [**SDK reference documentation**](/dotnet/api/azure.ai.formrecognizer.documentanalysis?view=azure-dotnet-preview&preserve-view=true)
+
+### [**Java**](#tab/java)
+
+**Version 4.0.0-beta.4 (2022-02-10)**
+
+##### [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav#400-beta4-2022-02-10)
+
+##### [**Package (Maven)**](https://search.maven.org/artifact/com.azure/azure-ai-formrecognizer/4.0.0-beta.4/jar)
+
+##### [**SDK reference documentation**](/java/api/overview/azure/ai-formrecognizer-readme?view=azure-java-preview&preserve-view=true)
+
+### [**JavaScript**](#tab/javascript)
+
+**Version 4.0.0-beta.3 (2022-02-10)**
+
+##### [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md#400-beta3-2022-02-10)
+
+##### [**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.0.0-beta.3)
+
+##### [**SDK reference documentation**](/javascript/api/@azure/ai-form-recognizer/?view=azure-node-preview&preserve-view=true)
+
+### [Python](#tab/python)
+
+**Version 3.2.0b3 (2022-02-10)**
+
+##### [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b3/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md#320b3-2022-02-10)
+
+##### [**Package (PyPI)**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b3/)
+
+##### [**SDK reference documentation**](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer?view=azure-python-preview&preserve-view=true)
+++ ## November 2021 ### Form Recognizer v3.0 preview SDK release update (beta.2)
Get started with the new [REST API](https://westus.dev.cognitive.microsoft.com/d
### [**C#**](#tab/csharp)
+**Version 4.0.0-beta.2 (2021-11-09)**
+ | [**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0-beta.2) | [**Changelog**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md) | [**API reference documentation**](/dotnet/api/azure.ai.formrecognizer?view=azure-dotnet-preview&preserve-view=true) #### Bugs Fixed
The `BuildModelOperation` and `CopyModelOperation` now correctly populate the `P
### Form Recognizer v3.0 preview release (beta.1)
+**Version 4.0.0-beta.1 (2021-10-07)**
+ Form Recognizer v3.0 preview release introduces several new features and capabilities: * [**General document**](concept-general-document.md) model is a new API that uses a pre-trained model to extract text, tables, structure, key-value pairs, and named entities from forms and documents.
Get started with the new [REST API](https://westus2.dev.cognitive.microsoft.com/d
* [Azure metrics explorer advanced features](../../azure-monitor/essentials/metrics-charts.md) are available on your Form Recognizer resource overview page in the Azure portal.
- ### Monitoring menu
+### Monitoring menu
- :::image type="content" source="media/portal-metrics.png" alt-text="Screenshot showing the monitoring menu in the Azure portal":::
- ### Charts
+### Charts
- :::image type="content" source="media/portal-metrics-charts.png" alt-text="Screenshot showing an example metric chart in the Azure portal.":::
-* **ID document** model update: given names including a suffix, with or without a period (full stop), process successfully:
+* **ID document** model update: given names including a suffix, with or without a period (full stop), process successfully:
|Input Text | Result with update | ||-|
Form Recognizer features are now supported by six feature containers: **Layout*
### Form Recognizer SDK v3.1.0 patched to v3.1.1 for C#, Java, and Python
-The patch addresses invoices that do not have subline item fields detected such as a `FormField` with `Text` but no `BoundingBox` or `Page` information.
+The patch addresses invoices that don't have subline item fields detected such as a `FormField` with `Text` but no `BoundingBox` or `Page` information.
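Consumer code that predates the patch needed to tolerate such fields. A defensive sketch, using hypothetical stand-in objects rather than the SDK's real `FormField` type, looks like this:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class FormField:
    """Hypothetical stand-in for an SDK field with optional location info."""
    text: Optional[str] = None
    bounding_box: Optional[List[float]] = None
    page_number: Optional[int] = None

def describe(field: FormField) -> str:
    """Report a field's text, noting when its location info is absent."""
    if field.text is None:
        return "<no text>"
    if field.bounding_box is None or field.page_number is None:
        return f"{field.text} (location unknown)"
    return f"{field.text} (page {field.page_number})"
```

Guarding every access to location attributes, as `describe` does, keeps subline items with text but no bounding box from raising errors.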
### [**C#**](#tab/csharp)
Go to the [Form Recognizer Sample Tool](https://fott-2-1.azurewebsites.net/) and
### Layout adds table headers
-The updated Layout API table feature adds header recognition with column headers that can span multiple rows. Each table cell has an attribute that indicates whether it's part of a header or not. This can be used to identify which rows make up the table header.
+The updated Layout API table feature adds header recognition with column headers that can span multiple rows. Each table cell has an attribute that indicates whether it's part of a header or not. This update can be used to identify which rows make up the table header.
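A sketch of consuming such a per-cell header attribute follows. The attribute names `row_index` and `is_header` are assumptions for illustration, not the exact SDK property names:

```python
def header_row_indexes(cells):
    """Return the sorted row indexes that make up the table header.

    Each cell is a dict with 'row_index' (int) and 'is_header' (bool);
    a header may span multiple rows.
    """
    return sorted({c["row_index"] for c in cells if c["is_header"]})

# Sample cells for a table whose header spans two rows.
cells = [
    {"row_index": 0, "is_header": True},
    {"row_index": 1, "is_header": True},
    {"row_index": 2, "is_header": False},
]
```

Collecting the distinct header row indexes this way is how the per-cell flag identifies which rows form the header, even when column headers span more than one row.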
#### SDK updates
npm package version 3.1.0-beta.3
* Added support for a **[ReadingOrder](/javascript/api/@azure/ai-form-recognizer/formreadingorder?view=azure-node-latest&preserve-view=true)** type to the content recognition methods. This option enables you to control the algorithm that the service uses to determine how recognized lines of text should be ordered. You can specify which reading order algorithm, `basic` or `natural`, should be applied to order the extraction of text elements. If not specified, the default value is `basic`.
-* Split **[FormField](/javascript/api/@azure/ai-form-recognizer/formfield?view=azure-node-preview&preserve-view=true)** type into several different interfaces. This update should not cause any API compatibility issues except in certain edge cases (undefined valueType).
+* Split **[FormField](/javascript/api/@azure/ai-form-recognizer/formfield?view=azure-node-preview&preserve-view=true)** type into several different interfaces. This update shouldn't cause any API compatibility issues except in certain edge cases (undefined valueType).
* Migrated to the **2.1-preview.3** Form Recognizer service endpoint for all REST API calls.
pip package version 3.1.0b4
:::image type="content" source="./media/table-labeling.png" alt-text="Table labeling" lightbox="./media/table-labeling.png":::
- In addition to labeling tables, you can now label empty values and regions; if some documents in your training set do not have values for certain fields, you can label them so that your model will know to extract values properly from analyzed documents.
+ In addition to labeling tables, you can now label empty values and regions; if some documents in your training set don't have values for certain fields, you can label them so that your model will know to extract values properly from analyzed documents.
* **Support for 66 new languages** - The Layout API and Custom Models for Form Recognizer now support 73 languages.
pip package version 3.1.0b4
* **REST API reference is available** - View the [v2.1-preview.1 reference](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-1/operations/AnalyzeBusinessCardAsync) * **New languages supported In addition to English**, the following [languages](language-support.md) are now supported: for `Layout` and `Train Custom Model`: English (`en`), Chinese (Simplified) (`zh-Hans`), Dutch (`nl`), French (`fr`), German (`de`), Italian (`it`), Portuguese (`pt`) and Spanish (`es`). * **Checkbox / Selection Mark detection** ΓÇô Form Recognizer supports detection and extraction of selection marks such as check boxes and radio buttons. Selection Marks are extracted in `Layout` and you can now also label and train in `Train Custom Model` - _Train with Labels_ to extract key-value pairs for selection marks.
-* **Model Compose** - allows multiple models to be composed and called with a single model ID. When a you submit a document to be analyzed with a composed model ID, a classification step is first performed to route it to the correct custom model. Model Compose is available for `Train Custom Model` - _Train with labels_.
+* **Model Compose** - allows multiple models to be composed and called with a single model ID. When you submit a document to be analyzed with a composed model ID, a classification step is first performed to route it to the correct custom model. Model Compose is available for `Train Custom Model` - _Train with labels_.
* **Model name** - add a friendly name to your custom models for easier management and tracking. * **[New pre-built model for Business Cards](./concept-business-card.md)** for extracting common fields in English, language business cards. * **[New locales for pre-built Receipts](./concept-receipt.md)** in addition to EN-US, support is now available for EN-AU, EN-CA, EN-GB, EN-IN
automation Automation Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-role-based-access-control.md
Update Management can be used to assess and schedule update deployments to machi
|||| |Automation account |[Custom Azure Automation Contributor role](#custom-azure-automation-contributor-role) |Automation account | |Automation account |Virtual Machine Contributor |Resource Group for the account |
-|Log Analytics workspace Log Analytics Contributor|Log Analytics workspace |
+|Log Analytics workspace | Log Analytics Contributor|Log Analytics workspace |
|Log Analytics workspace |Log Analytics Reader|Subscription| |Solution |Log Analytics Contributor |Solution| |Virtual Machine |Virtual Machine Contributor |Virtual Machine |
automation Automation Security Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-security-guidelines.md
Title: Azure Automation security guidelines
-description: This article helps you with the guidelines that Azure Automation offers to ensure data privacy and data security.
+ Title: Azure Automation security guidelines, security best practices for Automation.
+description: This article helps you with the guidelines that Azure Automation offers to ensure a secure configuration of your Automation account, Hybrid Runbook Worker role, authentication certificates and identities, network isolation, and policies.
Last updated 02/16/2022
availability-zones Cross Region Replication Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/cross-region-replication-azure.md
Regions are paired for cross-region replication based on proximity and other fac
| Canada |Canada Central |Canada East | | China |China North |China East| | China |China North 2 |China East 2|
-| China |China North 3 |China East 3|
+| China |China North 3 |China East 3\* |
| Europe |North Europe (Ireland) |West Europe (Netherlands) | | France |France Central|France South\*| | Germany |Germany West Central |Germany North\* |
azure-arc Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/connectivity.md
Some Azure-attached services are only available when they can be directly reache
|**Automatic upgrades and patching**|Supported<br/>The data controller must either have direct access to the Microsoft Container Registry (MCR) or the container images need to be pulled from MCR and pushed to a local, private container registry that the data controller has access to.|Supported| |**Automatic backup and restore**|Supported<br/>Automatic local backup and restore.|Supported<br/>In addition to automated local backup and restore, you can _optionally_ send backups to Azure blob storage for long-term, off-site retention.| |**Monitoring**|Supported<br/>Local monitoring using Grafana and Kibana dashboards.|Supported<br/>In addition to local monitoring dashboards, you can _optionally_ send monitoring data and logs to Azure Monitor for at-scale monitoring of multiple sites in one place. |
-|**Authentication**|Use local username/password for data controller and dashboard authentication. Use SQL and Postgres logins or Active Directory (AD is not currently supported) for connectivity to database instances. Use K8s authentication providers for authentication to the Kubernetes API.|In addition to or instead of the authentication methods for the indirectly connected mode, you can _optionally_ use Azure Active Directory.|
+|**Authentication**|Use local username/password for data controller and dashboard authentication. Use SQL and Postgres logins or Active Directory (AD is not currently supported) for connectivity to database instances. Use Kubernetes authentication providers for authentication to the Kubernetes API.|In addition to or instead of the authentication methods for the indirectly connected mode, you can _optionally_ use Azure Active Directory.|
|**Role-based access control (RBAC)**|Use Kubernetes RBAC on Kubernetes API. Use SQL and Postgres RBAC for database instances.|You can use Azure Active Directory and Azure RBAC. **Pending availability in directly connected mode**|

## Connectivity requirements
azure-arc Create Complete Managed Instance Directly Connected https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-complete-managed-instance-directly-connected.md
Title: Quickstart - Deploy Azure Arc-enable data services - directly connected mode - Azure portal
+ Title: Quickstart - Deploy Azure Arc-enabled data services - directly connected mode - Azure portal
description: Demonstrates how to deploy Azure Arc-enabled data services from the beginning, including a Kubernetes cluster. Finishes with an instance of Azure SQL Managed Instance.
Last updated 12/09/2021
-# Quickstart: Deploy Azure Arc-enable data services - directly connected mode - Azure portal
+# Quickstart: Deploy Azure Arc-enabled data services - directly connected mode - Azure portal
This article demonstrates how to deploy Azure Arc-enabled data services in directly connected mode from the Azure portal.
-To deploy in indirectly connected mode, see [Quickstart: Deploy Azure Arc-enable data services - indirectly connected mode - Azure CLI](create-complete-managed-instance-indirectly-connected.md).
+To deploy in indirectly connected mode, see [Quickstart: Deploy Azure Arc-enabled data services - indirectly connected mode - Azure CLI](create-complete-managed-instance-indirectly-connected.md).
When you complete the steps in this article, you will have:
azure-arc Create Complete Managed Instance Indirectly Connected https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-complete-managed-instance-indirectly-connected.md
Title: Quickstart - Deploy Azure Arc-enable data services - indirectly connected mode - Azure CLI
+ Title: Quickstart - Deploy Azure Arc-enabled data services - indirectly connected mode - Azure CLI
description: Demonstrates how to deploy Azure Arc-enabled data services in indirectly connected mode from the beginning, including a Kubernetes cluster. Uses Azure CLI. Finishes with an instance of Azure SQL Managed Instance.
Last updated 12/09/2021
-# Quickstart: Deploy Azure Arc-enable data services - indirectly connected mode - Azure CLI
+# Quickstart: Deploy Azure Arc-enabled data services - indirectly connected mode - Azure CLI
This article demonstrates how to deploy Azure Arc-enabled data services in indirectly connected mode with the Azure CLI.
-To deploy in directly connected mode, see [Quickstart: Deploy Azure Arc-enable data services - directly connected mode - Azure portal](create-complete-managed-instance-directly-connected.md).
+To deploy in directly connected mode, see [Quickstart: Deploy Azure Arc-enabled data services - directly connected mode - Azure portal](create-complete-managed-instance-directly-connected.md).
When you complete the steps in this article, you will have:
azure-arc Create Data Controller Direct Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-direct-cli.md
You can verify the status of the deployment of Azure Arc-enabled data services e
#### Check status from Azure portal

1. Log in to the Azure portal and browse to the resource group where the Kubernetes connected cluster resource is located.
-1. Select the Azure Arc-enabled kubernetes cluster (Type = "Kubernetes - Azure Arc") where the extension was deployed.
+1. Select the Azure Arc-enabled Kubernetes cluster (Type = "Kubernetes - Azure Arc") where the extension was deployed.
1. In the navigation on the left side, under **Settings**, select **Extensions**.
1. The portal shows the extension that was created earlier in an installed state.
azure-arc Create Data Controller Direct Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-direct-prerequisites.md
At a high level, the prerequisites for creating Azure Arc data controller in **d
1. Have access to your Kubernetes cluster. If you do not have a Kubernetes cluster, you can create a test/demonstration cluster on Azure Kubernetes Service (AKS).
1. Connect Kubernetes cluster to Azure using Azure Arc-enabled Kubernetes.
-Follow the instructions at [Quickstart: Deploy Azure Arc-enable data services - directly connected mode - Azure portal](create-complete-managed-instance-directly-connected.md)
+Follow the instructions at [Quickstart: Deploy Azure Arc-enabled data services - directly connected mode - Azure portal](create-complete-managed-instance-directly-connected.md)
## Connect Kubernetes cluster to Azure using Azure Arc-enabled Kubernetes
azure-arc Create Data Controller Indirect Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-indirect-cli.md
The following sections provide instructions for specific types of Kubernetes pla
- [Google Cloud Kubernetes Engine Service (GKE)](#create-on-google-cloud-kubernetes-engine-service-gke)

> [!TIP]
-> If you have no Kubernetes cluster, you can create one on Azure. Follow the instructions at [Quickstart: Deploy Azure Arc-enable data services - directly connected mode - Azure portal](create-complete-managed-instance-directly-connected.md) to walk through the entire process.
+> If you have no Kubernetes cluster, you can create one on Azure. Follow the instructions at [Quickstart: Deploy Azure Arc-enabled data services - directly connected mode - Azure portal](create-complete-managed-instance-directly-connected.md) to walk through the entire process.
>
> Then follow the instructions under [Create on Azure Kubernetes Service (AKS)](#create-on-azure-kubernetes-service-aks).
azure-arc Create Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-sql-managed-instance.md
To create a SQL Managed Instance, use `az sql mi-arc create`. See the following
> [!NOTE]
> Starting with the February release, a ReadWriteMany (RWX) capable storage class needs to be specified for backups. Learn more about [access modes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes).
-If no storage class is specified for backups, the default storage class in kubernetes is used and if this is not RWX capable, the Arc SQL Managed Instance installation may not succeed.
+If no storage class is specified for backups, the default storage class in Kubernetes is used and if this is not RWX capable, the Arc SQL Managed Instance installation may not succeed.
azure-arc Plan Azure Arc Data Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/plan-azure-arc-data-services.md
In order to experience Azure Arc-enabled data services, you will need to complet
1. [Install client tools](install-client-tools.md).
1. Access a Kubernetes cluster.
- For demonstration, testing, and validation purposes, you can use an Azure Kubernetes Service cluster. To create a cluster, follow the instructions at [Quickstart: Deploy Azure Arc-enable data services - directly connected mode - Azure portal](create-complete-managed-instance-directly-connected.md) to walk through the entire process.
+ For demonstration, testing, and validation purposes, you can use an Azure Kubernetes Service cluster. To create a cluster, follow the instructions at [Quickstart: Deploy Azure Arc-enabled data services - directly connected mode - Azure portal](create-complete-managed-instance-directly-connected.md) to walk through the entire process.
1. [Create Azure Arc data controller in direct connectivity mode (prerequisites)](create-data-controller-direct-prerequisites.md).
azure-arc Point In Time Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/point-in-time-restore.md
You can also restore a database to a point in time from Azure Data Studio as fol
### Monitor progress
-When a restore is initiated, a task is created in the kubernetes cluster that executes the actual restore operations of full, differential, and log backups. The progress of this activity can be monitored from your kubernetes cluster as follows:
+When a restore is initiated, a task is created in the Kubernetes cluster that executes the actual restore operations of full, differential, and log backups. The progress of this activity can be monitored from your Kubernetes cluster as follows:
```console
kubectl get sqlmirestoretask -n <namespace>
```
azure-arc Service Tiers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/service-tiers.md
Previously updated : 07/30/2021
Last updated : 03/01/2022
Following is a description of the various capabilities available from Azure Arc-
Area | Business critical (preview)* | General purpose
-|--|--
-Feature set | Same as SQL Server Enterprise Edition | Same as SQL Server Standard Edition
+SQL Feature set | Same as SQL Server Enterprise Edition | Same as SQL Server Standard Edition
CPU limit/instance | Unlimited | 24 cores
Memory limit/instance | Unlimited | 128 GB
-High availability | Availability group | Single instance w/ Kubernetes redeploy + shared storage.
+Scale up/down | Available | Available
+Monitoring | Built-in available locally, and optionally export to Azure Monitor | Built-in available locally, and optionally export to Azure Log Analytics
+Logging | Built-in available locally, and optionally export to Azure Log Analytics | Built-in available locally, and optionally export to Azure Monitor
+Point in time Restore | Built-in | Built-in
+High availability | Contained availability groups over Kubernetes redeployment | Single instance w/ Kubernetes redeploy + shared storage.
Read scale out | Availability group | None
+Disaster Recovery | Available via Failover Groups | Available via Failover Groups
AHB exchange rates for IP component of price | 1:1 Enterprise Edition <br> 4:1 Standard Edition | 1:4 Enterprise Edition <br> 1:1 Standard Edition
Dev/Test pricing | No cost | No cost
azure-arc View Arc Data Services Inventory In Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/view-arc-data-services-inventory-in-azure-portal.md
# Inventory of Arc enabled data services
+You can view your Azure Arc-enabled data services in the Azure portal or in your Kubernetes cluster.
## View resources in Azure portal
-After you upload your [metrics, logs](upload-metrics-and-logs-to-azure-monitor.md), or [usage](view-billing-data-in-azure.md), you can view your Azure Arc-enabled SQL managed instances or Azure Arc-enabled Postgres Hyperscale server groups in Azure portal. To view your resource in the [portal](https://portal.azure.com) follow these steps:
+After you upload your [metrics, logs](upload-metrics-and-logs-to-azure-monitor.md), or [usage](view-billing-data-in-azure.md), you can view your Azure Arc-enabled SQL managed instances or Azure Arc-enabled Postgres Hyperscale server groups in the Azure portal. To view your resource in the [Azure portal](https://portal.azure.com), follow these steps:
1. Go to **All services**.
1. Search for your database instance type.
After you upload your [metrics, logs](upload-metrics-and-logs-to-azure-monitor.m
1. In the left menu, select the instance type.
1. View your instances in the same view as your other Azure SQL or Azure PostgreSQL Hyperscale resources (use filters for a granular view).
-## View resources in your kubernetes cluster
+## View resources in your Kubernetes cluster
If the Azure Arc data controller is deployed in **indirect** connectivity mode, you can run the following command to get a list of all the Azure Arc SQL managed instances:

```
az sql mi-arc list --k8s-namespace <namespace> --use-k8s
#Example
az sql mi-arc list --k8s-namespace arc --use-k8s
```

If the Azure Arc data controller is deployed in **direct** connectivity mode, you can run the following command to get a list of all the Azure Arc SQL managed instances:

```
az sql mi-arc list --resource-group <resourcegroup>
#Example
```
azure-arc View Data Controller In Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/view-data-controller-in-azure-portal.md
Last updated 11/03/2021
-# View Azure Arc data controller resource in Azure portal
+# View Azure Arc data controller resource in the Azure portal
-To view the Azure Arc data controller in your Azure portal, one of usage/metrics/logs data from your kubernetes cluster must be exported and uploaded to Azure.
+To view the Azure Arc data controller in the Azure portal, you must export at least one type of data (usage data, metrics, or logs) from your Kubernetes cluster and upload it to Azure.
## Direct connected mode
-If the Azure Arc data controller is deployed in **direct** connected mode, usage data is automatically uploaded to Azure, and the kubernetes resources are projected into Azure.
-## Indirect connected mode
-In the **indirect** connected mode, you must export and upload at least one of usage data, metrics, or logs to Azure as described in [Upload usage data, metrics, and logs to Azure](upload-metrics-and-logs-to-azure-monitor.md). This action creates the appropriate resources in Azure.
+If the Azure Arc data controller is deployed in **direct** connected mode, usage data is automatically uploaded to Azure, and the Kubernetes resources are projected into Azure.
-## Azure Portal
+## Indirect connected mode
-After you complete your first [metrics or logs upload to Azure](upload-metrics-and-logs-to-azure-monitor.md) or [uploaded usage](view-billing-data-in-azure.md), you can see the Azure Arc data controller and any Azure Arc-enabled SQL managed instances or Azure Arc-enabled Postgres Hyperscale server resources in the Azure portal.
+In the **indirect** connected mode, you must export and upload at least one type of data (usage data, metrics, or logs) to Azure. For more information on this process, see [Upload usage data, metrics, and logs to Azure](upload-metrics-and-logs-to-azure-monitor.md). This action creates the appropriate resources in Azure.
-Open the Azure portal using the URL: [https://portal.azure.com](https://portal.azure.com).
+## Azure portal
-Search for your data controller by name in the search bar and click on it.
+After you complete your first [metrics or logs upload to Azure](upload-metrics-and-logs-to-azure-monitor.md) or [usage data upload](view-billing-data-in-azure.md), you can see the Azure Arc data controller and any Azure Arc-enabled SQL managed instances or Azure Arc-enabled Postgres Hyperscale server resources in the [Azure portal](https://portal.azure.com).
+To find your data controller, search for it by name in the search bar and then select it.
azure-arc Conceptual Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-agent-overview.md
The following steps are involved in connecting a Kubernetes cluster to Azure Arc
| `deployment.apps/extension-manager` | Installs and manages lifecycle of extension helm charts |
| `deployment.apps/kube-aad-proxy` | Used for authentication of requests sent to the cluster using Cluster Connect |
| `deployment.apps/clusterconnect-agent` | Reverse proxy agent that enables Cluster Connect feature to provide access to `apiserver` of cluster. Optional component deployed only if `cluster-connect` feature is enabled on the cluster |
- | `deployment.apps/guard` | Authentication and authorization webhook server used for AAD RBAC feature. Optional component deployed only if `azure-rbac` feature is enabled on the cluster |
+ | `deployment.apps/guard` | Authentication and authorization webhook server used for Azure Active Directory (Azure AD) RBAC. Optional component deployed only if `azure-rbac` feature is enabled on the cluster |
1. Once all the Azure Arc-enabled Kubernetes agent pods are in `Running` state, verify that your cluster is connected to Azure Arc. You should see:
   * An Azure Arc-enabled Kubernetes resource in [Azure Resource Manager](../../azure-resource-manager/management/overview.md). Azure tracks this resource as a projection of the customer-managed Kubernetes cluster, not the actual Kubernetes cluster itself.
azure-arc Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/troubleshooting.md
az connectedk8s connect -n AzureArcTest -g AzureArcTest
Ensure that you have the latest helm version installed before proceeding. This operation might take a while...
-Please check if the azure-arc namespace was deployed and run 'kubectl get pods -n azure-arc' to check if all the pods are in running state. A possible cause for pods stuck in pending state could be insufficientresources on the kubernetes cluster to onboard to arc.
+Please check if the azure-arc namespace was deployed and run 'kubectl get pods -n azure-arc' to check if all the pods are in running state. A possible cause for pods stuck in pending state could be insufficient resources on the Kubernetes cluster to onboard to arc.
ValidationError: Unable to install helm release: Error: customresourcedefinitions.apiextensions.k8s.io "connectedclusters.arc.azure.com" not found
```
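The check suggested in the error message above can be run as-is against the cluster; a minimal sketch:

```console
# List the Arc agent pods; pods stuck in Pending typically indicate
# insufficient CPU or memory on the cluster, as the message notes.
kubectl get pods -n azure-arc
```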
Some other aspects to consider:
With these actions accomplished you can either [re-create a flux configuration](./tutorial-use-gitops-flux2.md) which will install the flux extension automatically or you can re-install the flux extension manually.
-### Flux v2 - Installing the `microsoft.flux` extension in a cluster with AAD Pod Identity enabled
+### Flux v2 - Installing the `microsoft.flux` extension in a cluster with Azure AD Pod Identity enabled
-If you attempt to install the Flux extension in a cluster that has AAD Pod Identity enabled, an error may occur in the extension-agent pod.
+If you attempt to install the Flux extension in a cluster that has Azure Active Directory (Azure AD) Pod Identity enabled, an error may occur in the extension-agent pod.
```console
{"Message":"2021/12/02 10:24:56 Error: in getting auth header : error {adal: Refresh request failed. Status Code = '404'. Response body: no azure identity found for request clientID <REDACTED>\n}","LogType":"ConfigAgentTrace","LogLevel":"Information","Environment":"prod","Role":"ClusterConfigAgent","Location":"westeurope","ArmId":"/subscriptions/<REDACTED>/resourceGroups/<REDACTED>/providers/Microsoft.Kubernetes/managedclusters/<REDACTED>","CorrelationId":"","AgentName":"FluxConfigAgent","AgentVersion":"0.4.2","AgentTimestamp":"2021/12/02 10:24:56"}
```
The extension status also returns as "Failed".
The issue is that the extension-agent pod is trying to get its token from IMDS on the cluster in order to talk to the extension service in Azure; however, this token request is being intercepted by pod identity ([details here](../../aks/use-azure-ad-pod-identity.md)).
-The workaround is to create an `AzurePodIdentityException` that will tell AAD Pod Identity to ignore the token requests from flux-extension pods.
+The workaround is to create an `AzurePodIdentityException` that will tell Azure AD Pod Identity to ignore the token requests from flux-extension pods.
```console
apiVersion: aadpodidentity.k8s.io/v1
```
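A complete `AzurePodIdentityException` manifest might look like the following sketch; the exception name, namespace, and pod label selector are assumptions and must match the labels actually present on your flux-extension pods:

```yaml
apiVersion: aadpodidentity.k8s.io/v1
kind: AzurePodIdentityException
metadata:
  name: flux-extension-exception   # hypothetical name
  namespace: flux-system           # assumed namespace of the flux-extension pods
spec:
  podLabels:
    # Assumed label; verify with: kubectl get pods -n flux-system --show-labels
    app.kubernetes.io/name: flux-extension
```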
azure-arc Tutorial Use Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md
To manage GitOps through the Azure CLI or the Azure portal, you need the followi
* An Azure Arc-enabled Kubernetes connected cluster that's up and running.
- [Learn how to Azure Arc-enable a Kubernetes cluster](./quickstart-connect-cluster.md). If you need to connect through an outbound proxy, then assure you [install the Arc agents with proxy settings](./quickstart-connect-cluster.md?tabs=azure-cli#connect-using-an-outbound-proxy-server).
+ [Learn how to connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md). If you need to connect through an outbound proxy, then ensure you [install the Arc agents with proxy settings](./quickstart-connect-cluster.md?tabs=azure-cli#connect-using-an-outbound-proxy-server).
* Read and write permissions on the `Microsoft.Kubernetes/connectedClusters` resource type.

### For Azure Kubernetes Service clusters
Arguments
--timeout : Maximum time to reconcile the source before timing out.

Auth Arguments
- --local-auth-ref --local-ref : Local reference to a kubernetes secret in the configuration
+ --local-auth-ref --local-ref : Local reference to a Kubernetes secret in the configuration
namespace to use for communication to the source.

Bucket Auth Arguments
azure-arc Agent Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md
Title: What's new with Azure Arc-enabled servers agent
description: This article has release notes for Azure Arc-enabled servers agent. For many of the summarized issues, there are links to more details.
Previously updated : 02/28/2022
Last updated : 03/02/2022
This page is updated monthly, so revisit it regularly. If you're looking for ite
## Version 1.15 - February 2022
+### Known issues
+- The "Arc" proxy bypass feature on Linux includes some endpoints that belong to Azure Active Directory. As a result, if you only specify the "Arc" bypass rule, traffic destined for Azure Active Directory endpoints will not use the proxy server as expected. This issue will be fixed in an upcoming release.
+
### New features
- Network check improvements during onboarding:
This page is updated monthly, so revisit it regularly. If you're looking for ite
### Fixed
- Improved reliability when disconnecting the agent from Azure
+- Improved reliability when installing and uninstalling the agent on Active Directory Domain Controllers
- Extended the device login timeout to 5 minutes
- Removed resource constraints for Azure Monitor Agent to support high throughput scenarios
azure-arc Manage Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-agent.md
You do not need to restart any services when reconfiguring the proxy settings wi
### Proxy bypass for private endpoints
-Starting with agent version 1.15, you can also specify services which should **not** use the specified proxy server. This can help with split-network designs and private endpoint scenarios where you want Azure Active Directory and Azure Resource Manager traffic to go through your proxy server to public endpoints but want Arc traffic to skip the proxy and communicate with a private IP address on your network.
+Starting with agent version 1.15, you can also specify services which should **not** use the specified proxy server. This can help with split-network designs and private endpoint scenarios where you want Azure Active Directory (Azure AD) and Azure Resource Manager traffic to go through your proxy server to public endpoints but want Azure Arc traffic to skip the proxy and communicate with a private IP address on your network.
The proxy bypass feature does not require you to enter specific URLs to bypass. Instead, you provide the name of the service(s) that should not use the proxy server.
The proxy bypass feature does not require you to enter specific URLs to bypass.
| ARM | `management.azure.com` |
| Arc | `his.arc.azure.com`, `guestconfiguration.azure.com`, `guestnotificationservice.azure.com`, `servicebus.windows.net` |
-To send Azure Active Directory and Azure Resource Manager traffic through a proxy server but skip the proxy for Arc traffic, run the following command:
+To send Azure Active Directory and Azure Resource Manager traffic through a proxy server but skip the proxy for Azure Arc traffic, run the following command:
```bash
azcmagent config set proxy.url "http://ProxyServerFQDN:port"
```
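Skipping the proxy for Azure Arc traffic additionally requires naming the service in the bypass list; a minimal sketch, assuming the `proxy.bypass` setting name from agent version 1.15 and the `Arc` service name listed in the table above:

```bash
# Route traffic through the proxy by default...
azcmagent config set proxy.url "http://ProxyServerFQDN:port"
# ...but let Arc endpoints (his.arc.azure.com, guestconfiguration.azure.com, etc.)
# bypass it and reach a private endpoint directly.
azcmagent config set proxy.bypass "Arc"
```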
azure-arc Manage Access To Arc Vmware Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/manage-access-to-arc-vmware-resources.md
We recommend assigning this role at the subscription or resource group you want
6. Select the custom role you want to assign (one of **Azure Arc VMware Administrator**, **Azure Arc VMware Private Cloud User**, or **Azure Arc VMware VM Contributor**).
-7. Search for Azure Active Directory user or group that you want assign this role to.
+7. Search for the Azure Active Directory (Azure AD) user or group to which you want to assign this role.
-8. Click on the AAD user or group name to select. Repeat this for each user/group you want to provide this permission.
+8. Select the Azure AD user or group name. Repeat this for each user or group to which you want to grant this permission.
9. Repeat the above steps for each scope and role.
azure-functions Functions Deployment Slots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-deployment-slots.md
Title: Azure Functions deployment slots
description: Learn to create and use deployment slots with Azure Functions
-Previously updated : 04/15/2020
+Last updated : 03/02/2022

# Azure Functions deployment slots
Use the following steps to change a slot's App Service plan:
1. Select **OK**.
-## Limitations
+## Considerations
-Azure Functions deployment slots have the following limitations:
+Azure Functions deployment slots have the following considerations:
-- The number of slots available to an app depends on the plan. The Consumption plan is only allowed one deployment slot. Additional slots are available for apps running under the App Service plan.
+- The number of slots available to an app depends on the plan. The Consumption plan is only allowed one deployment slot. Additional slots are available for apps running under other plans. For details, see [Service limits](functions-scale.md#service-limits).
- Swapping a slot resets keys for apps that have an `AzureWebJobsSecretStorageType` app setting equal to `files`.
-- When slots are enabled, your Functions app is set to read-only mode in the portal.
-
-## Support levels
-
-There are two levels of support for deployment slots:
-
-- **General availability (GA)**: Fully supported and approved for production use.
-- **Preview**: Not yet supported, but is expected to reach GA status in the future.
-
-| OS/Hosting plan | Level of support |
-| - | -- |
-| Windows Consumption | General availability |
-| Windows Premium | General availability |
-| Windows Dedicated | General availability |
-| Linux Consumption | General availability |
-| Linux Premium | General availability |
-| Linux Dedicated | General availability |
+- When slots are enabled, your function app is set to read-only mode in the portal.
## Next steps
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compare-azure-government-global-azure.md
The following Azure Migrate **features are not currently available** in Azure Go
- Containerizing ASP.NET apps and deploying them on Windows containers on App Service.
- You can only create assessments with Azure Government as the target region and using Azure Government offers.
-For more information, see [Azure Migrate support matrix](../migrate/migrate-support-matrix.md#supported-geographies-azure-government). For a list of Azure Government URLs needed by the Azure Migrate appliance when connecting to the internet, see [Azure Migrate appliance URL access](../migrate/migrate-appliance.md#url-access).
+For more information, see [Azure Migrate support matrix](../migrate/migrate-support-matrix.md#azure-government). For a list of Azure Government URLs needed by the Azure Migrate appliance when connecting to the internet, see [Azure Migrate appliance URL access](../migrate/migrate-appliance.md#url-access).
## Networking
azure-government Documentation Government Overview Jps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-overview-jps.md
Title: Azure Government JPS Overview | Microsoft Docs
-description: Learn about features and guidance on developing applications for Justice and Public Safety (JPS) in Azure Government.
-
-cloud: gov
-
+ Title: Azure support for public safety and justice
+description: Guidance on using Azure cloud services for public safety and justice workloads.
- Previously updated : 11/14/2016
+recommendations: false
Last updated : 03/01/2022
-# Justice and Public Safety (JPS) in Azure Government
+
+# Public safety and justice in Azure Government
+ ## Overview
-Justice and Public Safety (JPS) agencies are under mounting pressure to keep communities safe, reduce crime, and improve responsiveness. From intelligent policing awareness systems, to body camera systems across the country/region, to day-to-day mobile police collaboration, cloud computing is transforming the way law enforcement agencies approach their work.
-When they are properly planned and secured, cloud services can deliver powerful new capabilities for JPS. These capabilities include digital evidence management, data analysis, and real-time decision support--with solutions delivered on the latest mobile devices. However, not all cloud providers are equal. As law enforcement agencies embrace the cloud, they need a cloud service provider they can trust. The core of the law enforcement mission demands partners who are committed to meeting a full range of security, compliance, and operational needs.
+Public safety and justice agencies are under mounting pressure to keep communities safe, reduce crime, and improve responsiveness. Cloud computing is transforming the way law enforcement agencies approach their work. It is helping with intelligent policing awareness systems, body camera systems across the country/region, and day-to-day mobile police collaboration.
+
+When they're properly planned and secured, cloud services can deliver powerful new capabilities for public safety and justice agencies. These capabilities include digital evidence management, data analysis, and real-time decision support. Solutions can be delivered on the latest mobile devices. However, not all cloud providers are equal. As law enforcement agencies embrace the cloud, they need a cloud service provider they can trust. The core of the law enforcement mission demands partners who are committed to meeting a full range of security, compliance, and operational needs.
+
+From devices to the cloud, Microsoft puts privacy and information security first, while increasing productivity for officers in the field and throughout the department. Public safety and justice agencies can combine highly secure mobile devices with "anytime-anywhere" access to the cloud. In doing so, they can contribute to ongoing investigations, analyze data, manage evidence, and help protect citizens from threats.
+
+Microsoft treats Criminal Justice Information Services (CJIS) compliance as a commitment, not a check box. At Microsoft, we're committed to providing solutions that meet the applicable CJIS security controls, today and in the future. Moreover, we extend our commitment to public safety and justice through:
+
+- [Microsoft Digital Crimes Unit](https://news.microsoft.com/on-the-issues/2021/04/15/how-microsofts-digital-crimes-unit-fights-cybercrime/)
+- [Microsoft Cyber Defense Operations Center](https://www.microsoft.com/msrc/cdoc)
+- [Microsoft solutions for public safety and justice](https://www.microsoft.com/industry/government/public-safety-and-justice)
+
+## Criminal Justice Information Services (CJIS)
+
+The [Criminal Justice Information Services](https://www.fbi.gov/services/cjis) (CJIS) Division of the US Federal Bureau of Investigation (FBI) gives state, local, and federal law enforcement and criminal justice agencies access to criminal justice information (CJI), for example, fingerprint records and criminal histories. Law enforcement and other government agencies in the United States must ensure that their use of cloud services for the transmission, storage, or processing of CJI complies with the [CJIS Security Policy](https://www.fbi.gov/services/cjis/cjis-security-policy-resource-center/view), which establishes minimum security requirements and controls to safeguard CJI.
+
+The CJIS Security Policy integrates presidential and FBI directives, federal laws, and the criminal justice community's Advisory Policy Board decisions, along with guidance from the National Institute of Standards and Technology (NIST). The CJIS Security Policy is updated periodically to reflect evolving security requirements.
+
+The CJIS Security Policy defines 13 areas that private contractors such as cloud service providers must evaluate to determine if their use of cloud services can be consistent with CJIS requirements. These areas correspond closely to control families in [NIST SP 800-53](https://csrc.nist.gov/Projects/risk-management/sp800-53-controls/release-search#!/800-53), which is also the basis for the US Federal Risk and Authorization Management Program (FedRAMP). The FBI CJIS Information Security Officer (ISO) Program Office has published a [security control mapping of CJIS Security Policy requirements to NIST SP 800-53](https://www.fbi.gov/file-repository/csp-v5_5-to-nist-controls-mapping-1.pdf/view). The corresponding NIST SP 800-53 controls are listed for each CJIS Security Policy section.
+
+All private contractors who process CJI must sign the CJIS Security Addendum, a uniform agreement approved by the US Attorney General that helps ensure the security and confidentiality of CJI required by the Security Policy. It commits the contractor to maintaining a security program consistent with federal and state laws, regulations, and standards. The addendum also limits the use of CJI to the purposes for which a government agency provided it.
+
+### Azure and CJIS Security Policy
+
+Microsoft will sign the CJIS Security Addendum in states with CJIS Information Agreements. These agreements tell state law enforcement authorities responsible for compliance with CJIS Security Policy how Microsoft's cloud security controls help protect the full lifecycle of data and ensure appropriate background screening of operating personnel with potential access to CJI.
+
+Microsoft has signed agreements with most US states and the District of Columbia; the exceptions are Delaware, Louisiana, Maryland, New Mexico, Ohio, and South Dakota. Microsoft continues to work with state governments to enter into CJIS Information Agreements.
+
+Microsoft's commitment to meeting the applicable CJIS regulatory controls helps criminal justice organizations remain compliant with the CJIS Security Policy when implementing cloud-based solutions. Microsoft can accommodate customers subject to the CJIS Security Policy requirements in:
+
+- [Azure Government](./documentation-government-welcome.md)
+- [Dynamics 365 US Government](/power-platform/admin/microsoft-dynamics-365-government#certifications-and-accreditations)
+- [Office 365 GCC](/office365/servicedescriptions/office-365-platform-service-description/office-365-us-government/gcc#us-government-community-compliance)
+
+Microsoft has assessed the operational policies and procedures of Microsoft Azure Government, Dynamics 365 US Government, and Office 365 GCC, and attests in the applicable services agreements to their ability to meet FBI requirements. For more information about Azure support for CJIS, see [Azure CJIS compliance offering](/azure/compliance/offerings/offering-cjis).
+
+The remainder of this article discusses technologies that you can use to safeguard CJI stored or processed in Azure cloud services. These technologies can help you establish sole control over CJI that you're responsible for.
+
+> [!NOTE]
+> You are wholly responsible for ensuring your own compliance with all applicable laws and regulations. Information provided in this article does not constitute legal advice, and you should consult your legal advisor for any questions regarding regulatory compliance.
+
+## Location of customer data
+
+Microsoft provides [strong customer commitments](https://www.microsoft.com/trust-center/privacy/data-location) regarding [cloud services data residency and transfer policies](https://azure.microsoft.com/global-infrastructure/data-residency/). Most Azure services are deployed regionally and enable you to specify the region into which the service will be deployed, for example, United States. This commitment helps ensure that [customer data](https://www.microsoft.com/trust-center/privacy/customer-data-definitions) stored in a US region will remain in the United States and won't be moved to another region outside the United States.
+
+## Tenant separation
+
+Azure is a hyperscale public multi-tenant cloud services platform that provides you with access to a feature-rich environment incorporating the latest cloud innovations, such as artificial intelligence, machine learning, IoT services, big-data analytics, and intelligent edge, to help you increase efficiency and unlock insights into your operations and performance.
+
+A multi-tenant cloud platform implies that multiple customer applications and data are stored on the same physical hardware. Azure uses logical isolation to segregate your applications and data from other customers. This approach provides the scale and economic benefits of multi-tenant cloud services while rigorously helping prevent other customers from accessing your data or applications.
+
+Azure addresses the perceived risk of resource sharing by providing a trustworthy foundation for assuring multi-tenant, cryptographically certain, logically isolated cloud services using a common set of principles:
+
+- User access controls with authentication and identity separation
+- Compute isolation for processing
+- Networking isolation including data encryption in transit
+- Storage isolation with data encryption at rest
+- Security assurance processes embedded in service design to correctly develop logically isolated services
+
+Logical compute isolation is implemented via Hypervisor isolation, Drawbridge isolation, and User context-based isolation. Aside from logical compute isolation, Azure also provides you with physical compute isolation if you require dedicated physical servers for your workloads. For example, if you desire physical compute isolation, you can use Azure Dedicated Host or Isolated Virtual Machines, which are deployed on server hardware dedicated to a single customer. For more information, see [Azure guidance for secure isolation](./azure-secure-isolation-guidance.md).
+
+## Data encryption
+
+Azure has extensive support to safeguard your data using [data encryption](../security/fundamentals/encryption-overview.md), including various encryption models:
+
+- Server-side encryption that uses service-managed keys, customer-managed keys (CMK) in Azure, or CMK in customer-controlled hardware.
+- Client-side encryption that enables you to manage and store keys on-premises or in another secure location.
+
+Data encryption provides isolation assurances that are tied directly to encryption key access. Since Azure uses strong ciphers for data encryption, only entities with access to encryption keys can have access to data. Revoking or deleting encryption keys renders the corresponding data inaccessible. **If you require extra security for your most sensitive customer data stored in Azure services, you can encrypt it using your own encryption keys you control in Azure Key Vault.**
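The key-access property described above, where revoking or deleting a key renders the corresponding data unreadable (sometimes called crypto-shredding), can be sketched with a toy example. The HMAC-SHA256 keystream below is a stand-in cipher chosen for illustration only; it is not how Azure Storage or Azure Key Vault implement encryption:

```python
import hashlib
import hmac
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream from the key (toy PRF, illustration only)."""
    blocks = []
    counter = 0
    while sum(len(b) for b in blocks) < length:
        blocks.append(hmac.new(key, nonce + counter.to_bytes(8, "big"),
                               hashlib.sha256).digest())
        counter += 1
    return b"".join(blocks)[:length]

def xor(data: bytes, pad: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, pad))

key = secrets.token_bytes(32)        # held under your control, e.g. in a key vault
nonce = secrets.token_bytes(16)
plaintext = b"criminal history record"

ciphertext = xor(plaintext, keystream(key, nonce, len(plaintext)))

# With access to the key, the data is recoverable ...
assert xor(ciphertext, keystream(key, nonce, len(ciphertext))) == plaintext

# ... without it (key revoked or deleted), only unintelligible ciphertext remains.
key = None
```

Deleting or revoking a customer-managed key in Azure Key Vault has the same effect at service scale: without key access, the stored ciphertext is computationally inaccessible.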
+
+### FIPS 140 validated cryptography
+
+The [Federal Information Processing Standard (FIPS) 140](https://csrc.nist.gov/publications/detail/fips/140/3/final) is a US government standard that defines minimum security requirements for cryptographic modules in information technology products. Microsoft maintains an active commitment to meeting the [FIPS 140 requirements](/azure/compliance/offerings/offering-fips-140-2), having validated cryptographic modules since the standard's inception in 2001. Microsoft validates its cryptographic modules under the US National Institute of Standards and Technology (NIST) [Cryptographic Module Validation Program](https://csrc.nist.gov/Projects/cryptographic-module-validation-program) (CMVP). Multiple Microsoft products, including many cloud services, use these cryptographic modules.
+
+While the current CMVP FIPS 140 implementation guidance precludes a FIPS 140 validation for a cloud service, cloud service providers can obtain and operate FIPS 140 validated cryptographic modules for the computing elements that comprise their cloud services. Azure is built with a combination of hardware, commercially available operating systems (Linux and Windows), and an Azure-specific version of Windows. Through the Microsoft [Security Development Lifecycle](https://www.microsoft.com/securityengineering/sdl/) (SDL), all Azure services use FIPS 140 approved algorithms for data security because the underlying operating system uses FIPS 140 approved algorithms while operating at hyperscale. The corresponding crypto modules are FIPS 140 validated as part of the Microsoft [Windows FIPS validation program](/windows/security/threat-protection/fips-140-validation#modules-used-by-windows-server). Moreover, you can store your own cryptographic keys and other secrets in FIPS 140 validated hardware security modules (HSMs) under your control, also known as [customer-managed keys](../security/fundamentals/encryption-models.md#server-side-encryption-using-customer-managed-keys-in-azure-key-vault).
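To make the notion of a FIPS 140 approved algorithm concrete, here is a minimal sketch in Python (a hypothetical application-level example, not Azure code) that restricts itself to approved primitives such as SHA-256 and HMAC-SHA256 from the standard library:

```python
import hashlib
import hmac
import secrets

# SHA-256 and HMAC-SHA256 are FIPS 140 approved algorithms; MD5, for example, is not.
record = b"fingerprint record 0001"

# Integrity digest of a record using an approved hash (256 bits = 64 hex characters).
digest = hashlib.sha256(record).hexdigest()

# Keyed integrity tag using an approved MAC construction.
key = secrets.token_bytes(32)
tag = hmac.new(key, record, hashlib.sha256).hexdigest()

# Constant-time comparison, as recommended for verifying MACs.
assert hmac.compare_digest(tag, hmac.new(key, record, hashlib.sha256).hexdigest())
```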
+
+### Encryption key management
+
+Proper protection and management of encryption keys is essential for data security. [Azure Key Vault](../key-vault/index.yml) is a cloud service for securely storing and managing secrets. Key Vault enables you to store your encryption keys in hardware security modules (HSMs) that are FIPS 140 validated. For more information, see [Data encryption key management](./azure-secure-isolation-guidance.md#data-encryption-key-management).
+
+With Key Vault, you can import or generate encryption keys in HSMs, ensuring that keys never leave the HSM protection boundary to support *bring your own key* (BYOK) scenarios. Keys generated inside the Key Vault HSMs aren't exportable: there can be no clear-text version of the key outside the HSMs. This binding is enforced by the underlying HSM. **Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents do not see or extract your cryptographic keys.**
+
+### Data encryption in transit
+
+Azure provides many options for [encrypting data in transit](../security/fundamentals/encryption-overview.md#encryption-of-data-in-transit). Data encryption in transit isolates your network traffic from other traffic and helps protect data from interception. For more information, see [Data encryption in transit](./azure-secure-isolation-guidance.md#data-encryption-in-transit).
+
+### Data encryption at rest
+
+Azure provides extensive options for [encrypting data at rest](../security/fundamentals/encryption-atrest.md) to help you safeguard your data and meet your compliance needs using both Microsoft-managed encryption keys and customer-managed encryption keys. This process relies on multiple encryption keys and services such as Azure Key Vault and Azure Active Directory to ensure secure key access and centralized key management. For more information about Azure Storage encryption and Azure Disk encryption, see [Data encryption at rest](./azure-secure-isolation-guidance.md#data-encryption-at-rest).
+
+Azure SQL Database provides [transparent data encryption](../azure-sql/database/transparent-data-encryption-tde-overview.md) (TDE) at rest by [default](https://azure.microsoft.com/updates/newly-created-azure-sql-databases-encrypted-by-default/). TDE performs real-time encryption and decryption operations on the data and log files. The database encryption key (DEK) is a symmetric key stored in the database boot record for availability during recovery. It's secured via a certificate stored in the master database of the server, or via an asymmetric key called the TDE Protector stored under your control in [Azure Key Vault](../key-vault/general/security-features.md). Key Vault supports [bring your own key](../azure-sql/database/transparent-data-encryption-byok-overview.md) (BYOK), which enables you to store the TDE Protector in Key Vault and control key management tasks, including key rotation, permissions, deleting keys, and enabling auditing/reporting on all TDE Protectors. The key can be generated by Key Vault, imported, or [transferred to Key Vault from an on-premises HSM device](../key-vault/keys/hsm-protected-keys.md). You can also use the [Always Encrypted](../azure-sql/database/always-encrypted-azure-key-vault-configure.md) feature of Azure SQL Database, which is designed specifically to help protect sensitive data by allowing you to encrypt data inside your applications and [never reveal the encryption keys to the database engine](/sql/relational-databases/security/encryption/always-encrypted-database-engine). In this manner, Always Encrypted provides separation between users who own the data (and can view it) and users who manage the data (but should have no access).
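The owner/manager separation that Always Encrypted provides can be sketched generically: the client encrypts a sensitive value before it reaches the database, so the engine stores and returns only ciphertext. The following hypothetical Python sketch uses sqlite3 as a stand-in database and a toy repeating-key XOR cipher; it is illustrative only, not the actual Always Encrypted protocol:

```python
import sqlite3
import secrets

def toy_cipher(key: bytes, data: bytes) -> bytes:
    # Symmetric toy cipher (repeating-key XOR); illustration only, not real crypto.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

client_key = secrets.token_bytes(32)     # never leaves the client

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE suspects (id INTEGER PRIMARY KEY, ssn BLOB)")

# The client encrypts before inserting; the database engine never sees plaintext.
db.execute("INSERT INTO suspects (ssn) VALUES (?)",
           (toy_cipher(client_key, b"123-45-6789"),))

# A database administrator querying directly sees only ciphertext.
stored = db.execute("SELECT ssn FROM suspects").fetchone()[0]
assert stored != b"123-45-6789"

# The data owner, holding the key, decrypts on the client side.
assert toy_cipher(client_key, stored) == b"123-45-6789"
```

In the real feature, column encryption keys are themselves protected by a column master key that can be stored in Azure Key Vault, outside the database engine's reach.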
+
+### Data encryption in use
+
+Microsoft enables you to protect your data throughout its entire lifecycle: at rest, in transit, and in use. **Azure confidential computing** is a set of data security capabilities that offers encryption of data while in use. With this approach, when data is in the clear, which is needed for efficient data processing in memory, the data is protected inside a hardware-based trusted execution environment (TEE), also known as an enclave.
+
+Technologies like [Intel Software Guard Extensions](https://software.intel.com/sgx) (Intel SGX) or [AMD Secure Encrypted Virtualization](https://www.amd.com/en/processors/amd-secure-encrypted-virtualization) (SEV-SNP) are recent CPU improvements that support confidential computing implementations. These technologies are designed as virtualization extensions and provide feature sets including memory encryption and integrity, CPU-state confidentiality and integrity, and attestation. For more information, see the [Azure confidential computing](../confidential-computing/index.yml) documentation.
+
+## Restrictions on insider access
+
+Insider threat is characterized as the potential for back-door connections and privileged cloud service provider (CSP) administrator access to your systems and data. For more information on how Microsoft restricts insider access to your data, see [Restrictions on insider access](./documentation-government-plan-security.md#restrictions-on-insider-access).
+
+All Azure and Azure Government employees in the United States are subject to Microsoft background checks. For more information, see [Screening](./documentation-government-plan-security.md#screening). Azure Government provides you with an extra layer of protection through contractual commitments regarding storage of your data in the United States and limiting potential access to systems processing your data to screened US persons that have completed fingerprint background checks and criminal records checks to address CJIS requirements.
-From devices to the cloud, Microsoft puts privacy and information security first, while increasing productivity for officers in the field and throughout the department. By combining highly secure mobile devices with "anytime-anywhere" access to the cloud, JPS agencies can contribute to ongoing investigations, analyze data, manage evidence, and help protect citizens from threats.
+## Monitoring your Azure resources
-Other cloud providers treat Criminal Justice Information Systems (CJIS) compliance as a check box, rather than a commitment. At Microsoft, we're committed to providing solutions that meet the applicable CJIS controls, today and in the future. In addition, we extend our commitment to justice and public safety through our <a href="https://news.microsoft.com/presskits/dcu/#sm.0000eqdq0pxj4ex3u272bevclb0uc#KwSv0iLdMkJerFly.97">Digital Crimes Unit</a>, Cyber Defense Operations Center, and <a href="https://enterprise.microsoft.com/en-us/industries/government/public-safety/">Worldwide Justice and Public Safety organization</a>.
+Azure provides essential services that you can use to gain in-depth insight into your provisioned Azure resources and get alerted about suspicious activity, including outside attacks aimed at your applications and data. For more information about these services, see [Customer monitoring of Azure resources](./documentation-government-plan-security.md#customer-monitoring-of-azure-resources).
## Next steps
-* <a href="https://www.microsoft.com/en-us/TrustCenter/Compliance/CJIS"> Microsoft Trust Center - Criminal Justice Information Services webpage</a>
-* <a href="https://blogs.msdn.microsoft.com/azuregov/">Microsoft Azure Government blog</a>
+- [Azure Security](../security/fundamentals/overview.md)
+- [Microsoft for public safety and justice](https://www.microsoft.com/industry/government/public-safety-and-justice)
+- [Microsoft government solutions](https://www.microsoft.com/enterprise/government)
+- [What is Azure Government?](./documentation-government-welcome.md)
+- [Explore Azure Government](https://azure.microsoft.com/global-infrastructure/government/)
+- [Azure Government compliance](./documentation-government-plan-compliance.md)
+- [Criminal Justice Information Services](https://www.fbi.gov/services/cjis) (CJIS)
+- [CJIS Security Policy](https://www.fbi.gov/services/cjis/cjis-security-policy-resource-center/view)
+- [Azure CJIS compliance offering](/azure/compliance/offerings/offering-cjis)
+- [Azure FedRAMP compliance offering](/azure/compliance/offerings/offering-fedramp)
+- [NIST SP 800-53](https://csrc.nist.gov/publications/detail/sp/800-53/rev-5/final) *Security and Privacy Controls for Information Systems and Organizations*
azure-maps Creator Facility Ontology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-facility-ontology.md
Title: Facility Ontology in Microsoft Azure Maps Creator
description: Facility Ontology that describes the feature class definitions for Azure Maps Creator Previously updated : 06/14/2021 Last updated : 03/02/2022
The `unit` feature class defines a physical and non-overlapping area that can be
| Property | Type | Required | Description | |--|--|-|--|
-|`originalId` | string |true | The ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |true | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
+|`originalId` | string |false | The ID derived from client data. Maximum length allowed is 1000.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.| |`isOpenArea` | boolean (Default value is `null`.) |false | Represents whether the unit is an open area. If set to `true`, [structures](#structure) don't surround the unit boundary, and a navigating agent can enter the `unit` without the need of an [`opening`](#opening). By default, units are surrounded by physical barriers and are open only where an opening feature is placed on the boundary of the unit. If walls are needed in an open area unit, they can be represented as a [`lineElement`](#lineelement) or [`areaElement`](#areaelement) with an `isObstruction` property equal to `true`.| |`navigableBy` | enum ["pedestrian", "wheelchair", "machine", "bicycle", "automobile", "hiredAuto", "bus", "railcar", "emergency", "ferry", "boat"] | false |Indicates the types of navigating agents that can traverse the unit. If unspecified, the unit is assumed to be traversable by any navigating agent. |
The `unit` feature class defines a physical and non-overlapping area that can be
| Property | Type | Required | Description | |--|--|-|--|
-|`originalId` | string |true | The ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |true | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
+|`originalId` | string |false | The ID derived from client data. Maximum length allowed is 1000.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.| |`isOpenArea` | boolean (Default value is `null`.) |false | Represents whether the unit is an open area. If set to `true`, [structures](#structure) don't surround the unit boundary, and a navigating agent can enter the `unit` without the need of an [`opening`](#opening). By default, units are surrounded by physical barriers and are open only where an opening feature is placed on the boundary of the unit. If walls are needed in an open area unit, they can be represented as a [`lineElement`](#lineelement) or [`areaElement`](#areaelement) with an `isObstruction` property equal to `true`.| |`isRoutable` | boolean (Default value is `null`.) | false | Determines if the unit is part of the routing graph. If set to `true`, the unit can be used as source/destination or intermediate node in the routing experience. |
The `structure` feature class defines a physical and non-overlapping area that c
| Property | Type | Required | Description | |--||-|-|
-|`originalId` | string |true | The ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |true | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
+|`originalId` | string |false | The ID derived from client data. Maximum length allowed is 1000.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.| | `levelId` | [level.Id](#level) | true | The ID of a [`level`](#level) feature. | |`name` | string | false | Name of the feature in local language. Maximum length allowed is 1000. |
The `zone` feature class defines a virtual area, like a WiFi zone or emergency a
| Property | Type | Required | Description | |--||-|-|
-|`originalId` | string |true | The ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |true | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
+|`originalId` | string |false | The ID derived from client data. Maximum length allowed is 1000.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.| | `setId` | string | true |Required for zone features that represent multi-level zones. The `setId` is the unique ID for a zone that spans multiple levels. The `setId` enables a zone with varying coverage on different floors to be represented with different geometry on different levels. The `setId` can be any string and is case-sensitive. It is recommended that the `setId` is a GUID. Maximum length allowed is 1000.| | `levelId` | [level.Id](#level) | true | The ID of a [`level`](#level) feature. |
The `level` class feature defines an area of a building at a set elevation. For
| Property | Type | Required | Description | |--||-|-|
-|`originalId` | string |true | The ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |true | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
+|`originalId` | string |false | The ID derived from client data. Maximum length allowed is 1000.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.| | `ordinal` | integer | true | The level number. Used by the [`verticalPenetration`](#verticalpenetration) feature to determine the relative order of the floors to help with travel direction. The general practice is to start with 0 for the ground floor. Add +1 for every floor upwards, and -1 for every floor going down. It can be modeled with any numbers, as long as the higher physical floors are represented by higher ordinal values. | | `abbreviatedName` | string | false | A four-character abbreviated level name, like what would be found on an elevator button. Maximum length allowed is 1000.|
The `facility` feature class defines the area of the site, building footprint, a
| Property | Type | Required | Description | |--||-|-|
-|`originalId` | string |true | The ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |true | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
+|`originalId` | string |false | The ID derived from client data. Maximum length allowed is 1000.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.| |`occupants` | array of [directoryInfo.Id](#directoryinfo) | false | The IDs of [directoryInfo](#directoryinfo) features. Used to represent one or many occupants in the feature. | |`addressId` | [directoryInfo.Id](#directoryinfo) | true | The ID of a [directoryInfo](#directoryinfo) feature. Used to represent the address of the feature.|
The `verticalPenetration` class feature defines an area that, when used in a set
| Property | Type | Required | Description | |--||-|-|
-|`originalId` | string |true | The ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |true | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
+|`originalId` | string |false | The ID derived from client data. Maximum length allowed is 1000.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.| | `setId` | string | true | Vertical penetration features must be used in sets to connect multiple levels. Vertical penetration features in the same set are considered to be the same. The `setId` can be any string, and is case-sensitive. Using a GUID as a `setId` is recommended. Maximum length allowed is 1000.| | `levelId` | [level.Id](#level) | true | The ID of a level feature. |
The `verticalPenetration` class feature defines an area that, when used in a set
| Property | Type | Required | Description | |--||-|-|
-|`originalId` | string |true | The ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |true | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
+|`originalId` | string |false | The ID derived from client data. Maximum length allowed is 1000.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.| | `setId` | string | true | Vertical penetration features must be used in sets to connect multiple levels. Vertical penetration features in the same set are connected. The `setId` can be any string, and is case-sensitive. Using a GUID as a `setId` is recommended. Maximum length allowed is 1000. | | `levelId` | [level.Id](#level) | true | The ID of a level feature. |
The `opening` class feature defines a traversable boundary between two units, or
| Property | Type | Required | Description | |--||-|-|
-|`originalId` | string |true | The ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |true | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
+|`originalId` | string |false | The ID derived from client data. Maximum length allowed is 1000.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
|`categoryId` |[category.Id](#category) |true | The ID of a category feature.| | `levelId` | [level.Id](#level) | true | The ID of a level feature. | | `isConnectedToVerticalPenetration` | boolean | false | Whether or not this feature is connected to a `verticalPenetration` feature on one of its sides. Default value is `false`. |
The `opening` class feature defines a traversable boundary between two units, or
| Property | Type | Required | Description | |--||-|-|
-|`originalId` | string |true | The ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |true | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
+|`originalId` | string |false | The ID derived from client data. Maximum length allowed is 1000.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
|`categoryId` |[category.Id](#category) |true | The ID of a category feature.| | `levelId` | [level.Id](#level) | true | The ID of a level feature. | |`anchorPoint` | [Point](/rest/api/maps/v2/wfs/get-features#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/v2/wfs/get-features#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
The `directoryInfo` object class feature defines the name, address, phone number
| Property | Type | Required | Description | |--||-|-|
-|`originalId` | string |true | The ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |true | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
+|`originalId` | string |false | The ID derived from client data. Maximum length allowed is 1000.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
|`streetAddress` |string |false |Street address part of the address. Maximum length allowed is 1000. | |`unit` |string |false |Unit number part of the address. Maximum length allowed is 1000. | |`locality`| string| false |The locality of the address. For example: city, municipality, village. Maximum length allowed is 1000.|
The `pointElement` is a class feature that defines a point feature in a unit, su
| Property | Type | Required | Description |
|----------|------|----------|-------------|
-|`originalId` | string |true | The ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |true | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
+|`originalId` | string |false | The ID derived from client data. Maximum length allowed is 1000.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
|`categoryId` |[category.Id](#category) |true | The ID of a [`category`](#category) feature.|
| `unitId` | string | true | The ID of a [`unit`](#unit) feature containing this feature. Maximum length allowed is 1000.|
| `isObstruction` | boolean (Default value is `null`.) | false | If `true`, this feature represents an obstruction to be avoided while routing through the containing unit feature. |
The `lineElement` is a class feature that defines a line feature in a unit, such
| Property | Type | Required | Description |
|----------|------|----------|-------------|
-|`originalId` | string |true | The ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |true | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
+|`originalId` | string |false | The ID derived from client data. Maximum length allowed is 1000.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
|`categoryId` |[category.Id](#category) |true | The ID of a [`category`](#category) feature.|
| `unitId` | string | true | The ID of a [`unit`](#unit) feature containing this feature. Maximum length allowed is 1000. |
| `isObstruction` | boolean (Default value is `null`.) | false | If `true`, this feature represents an obstruction to be avoided while routing through the containing unit feature. |
The `areaElement` is a class feature that defines a polygon feature in a unit, s
| Property | Type | Required | Description |
|----------|------|----------|-------------|
-|`originalId` | string |true | The ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |true | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
+|`originalId` | string |false | The ID derived from client data. Maximum length allowed is 1000.|
+|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
|`categoryId` |[category.Id](#category) |true | The ID of a [`category`](#category) feature.|
| `unitId` | string | true | The ID of a [`unit`](#unit) feature containing this feature. Maximum length allowed is 1000. |
| `isObstruction` | boolean | false | If `true`, this feature represents an obstruction to be avoided while routing through the containing unit feature. |
The `category` class feature defines category names. For example: "room.conferen
| Property | Type | Required | Description |
|----------|------|----------|-------------|
-|`originalId` | string |true | The category's original ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |true | An ID used by the client to associate the category with another category in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
+|`originalId` | string |false | The category's original ID derived from client data. Maximum length allowed is 1000.|
+|`externalId` | string |false | An ID used by the client to associate the category with another category in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
|`name` | string | true | Name of the category. Suggested to use "." to represent hierarchy of categories. For example: "room.conference", "room.privateoffice". Maximum length allowed is 1000. |
| `routeThroughBehavior` | boolean | false | Determines whether a feature can be used for through traffic.|
|`isRoutable` | boolean (Default value is `null`.) | false | Determines if a feature should be part of the routing graph. If set to `true`, the unit can be used as source/destination or intermediate node in the routing experience. |
The `category` class feature defines category names. For example: "room.conferen
| Property | Type | Required | Description |
|----------|------|----------|-------------|
-|`originalId` | string |true | The category's original ID derived from client data. Maximum length allowed is 1000.|
-|`externalId` | string |true | An ID used by the client to associate the category with another category in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
+|`originalId` | string |false | The category's original ID derived from client data. Maximum length allowed is 1000.|
+|`externalId` | string |false | An ID used by the client to associate the category with another category in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
|`name` | string | true | Name of the category. Suggested to use "." to represent hierarchy of categories. For example: "room.conference", "room.privateoffice". Maximum length allowed is 1000. |

:::zone-end
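The diff above flips `originalId` and `externalId` from required to optional across the feature classes. As an illustrative sketch only (values and the object-literal shape are hypothetical, not a documented Azure Maps payload), a category feature's property bag now only needs `name`:

```javascript
// Illustrative sketch -- field names follow the tables above; all values are made up.
const category = {
  originalId: "cat-123",         // optional: ID derived from client data
  externalId: "crm-category-9",  // optional: cross-dataset reference
  name: "room.conference",       // required: "." suggests category hierarchy
  routeThroughBehavior: false,   // optional: allow through traffic?
  isRoutable: true               // optional: include in the routing graph?
};

// After the change, only `name` is required; the ID fields may be omitted.
const minimal = { name: "room.privateoffice" };
console.log(Object.keys(minimal).length); // 1
```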
azure-monitor Azure Monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-manage.md
# Manage the Azure Monitor agent
-This article provides the different options currently available to install, uninstall and update the [Azure Monitor agent](azure-monitor-agent-overview.md). This agent extension can be installed on Azure virtual machines, scale sets and Azure Arc-enabled servers. It also lists the options to create [associations with data collection rules](data-collection-rule-azure-monitor-agent.md) that define which data the agent should collect.
+This article provides the different options currently available to install, uninstall and update the [Azure Monitor agent](azure-monitor-agent-overview.md). This agent extension can be installed on Azure virtual machines, scale sets and Azure Arc-enabled servers. It also lists the options to create [associations with data collection rules](data-collection-rule-azure-monitor-agent.md) that define which data the agent should collect. Installing, upgrading, or uninstalling the Azure Monitor Agent will not require you to restart your server.
## Virtual machine extension details

The Azure Monitor agent is implemented as an [Azure VM extension](../../virtual-machines/extensions/overview.md) with the details in the following table. It can be installed using any of the methods to install virtual machine extensions, including those described in this article.
We strongly recommend updating to generally available versions listed as foll
| August 2021 | Fixed issue allowing Azure Monitor Metrics as the only destination | 1.1.2.0 | 1.10.9.0<sup>Hotfix</sup> |
| September 2021 | <ul><li>Fixed issue causing data loss on restarting the agent</li><li>Fixed issue for Arc Windows servers</li></ul> | 1.1.3.2<sup>Hotfix</sup> | 1.12.2.0 <sup>1</sup> |
| December 2021 | <ul><li>Fixed issues impacting Linux Arc-enabled servers</li><li>'Heartbeat' table > 'Category' column reports "Azure Monitor Agent" in Log Analytics for Windows</li></ul> | 1.1.4.0 | 1.14.7.0<sup>2</sup> |
-| January 2021 | <ul><li>Syslog RFC compliance for Linux</li><li>Fixed issue for Linux perf counters not flowing on restart</li><ul> | Not available yet | 1.15.2.0<sup>Hotfix</sup> |
+| January 2021 | <ul><li>Syslog RFC compliance for Linux</li><li>Fixed issue for Linux perf counters not flowing on restart</li><li>Fixed installation failure on Windows Server 2008 R2 SP1</li></ul> | 1.1.5.1<sup>Hotfix</sup> | 1.15.2.0<sup>Hotfix</sup> |
-<sup>Hotfix</sup> Do not use AMA Linux versions v1.10.7, v1.15.1 and AMA Windows v1.1.3.1. Please use hotfixed versions listed above.
+<sup>Hotfix</sup> Do not use AMA Linux versions v1.10.7, v1.15.1 and AMA Windows v1.1.3.1, v1.1.5.0. Please use hotfixed versions listed above.
<sup>1</sup> Known issue: No data collected from Linux Arc-enabled servers

<sup>2</sup> Known issue: Linux performance counters data stops flowing on restarting/rebooting the machine(s)
azure-monitor Data Collection Rule Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-rule-azure-monitor-agent.md
Title: Configure data collection for the Azure Monitor agent description: Describes how to create a data collection rule to collect data from virtual machines using the Azure Monitor agent. Previously updated : 07/16/2021 Last updated : 03/01/2022
To specify additional filters, you must use Custom configuration and specify an
See [XPath 1.0 limitations](/windows/win32/wes/consuming-events#xpath-10-limitations) for a list of limitations in the XPath supported by Windows event log.

> [!TIP]
-> Use the PowerShell cmdlet `Get-WinEvent` with the `FilterXPath` parameter to test the validity of an XPathQuery. The following script shows an example.
+> Use this **shortcut** to create syntactically correct XPath queries: [Extract XPath queries from Windows Event Viewer](https://azurecloudai.blog/2021/08/10/shortcut-way-to-create-your-xpath-queries-for-azure-sentinel-dcrs/)
+>
+> Alternatively you can use the PowerShell cmdlet `Get-WinEvent` with the `FilterXPath` parameter to test the validity of an XPathQuery. The following script shows an example.
>
> ```powershell
> $XPath = '*[System[EventID=1035]]'
azure-monitor Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-ad-authentication.md
import { DefaultAzureCredential } from "@azure/identity";
const credential = new DefaultAzureCredential();
appInsights.setup("InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://xxxx.applicationinsights.azure.com/").start();
-appInsights.defaultClient.aadTokenCredential = credential;
+appInsights.defaultClient.config.aadTokenCredential = credential;
```
const credential = new ClientSecretCredential(
  "<YOUR_CLIENT_SECRET>"
);
appInsights.setup("InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://xxxx.applicationinsights.azure.com/").start();
-appInsights.defaultClient.aadTokenCredential = credential;
+appInsights.defaultClient.config.aadTokenCredential = credential;
```
azure-monitor Data Model Context https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-context.md
Originally this field was used to indicate the type of the device the end user o
Max length: 64
-## Operation id
+## Operation ID
A unique identifier of the root operation. This identifier allows you to group telemetry across multiple components. See [telemetry correlation](./correlation.md) for details. The operation ID is created by either a request or a page view. All other telemetry sets this field to the value for the containing request or page view.
Max length: 64
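Grouping by operation ID is what reassembles one distributed operation from telemetry emitted by several components. As a rough illustration (the envelope shape here is hypothetical, not the exact Application Insights schema):

```javascript
// Hypothetical telemetry envelopes that all carry the same operation ID.
const operationId = "op-7f3a";
const telemetry = [
  { type: "request",    operation: { id: operationId } },
  { type: "dependency", operation: { id: operationId, parentId: "req-1" } },
  { type: "trace",      operation: { id: operationId, parentId: "dep-2" } },
];

// Filtering on the shared operation ID reassembles the whole operation.
const grouped = telemetry.filter(t => t.operation.id === operationId);
console.log(grouped.length); // 3
```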
## Anonymous user id
-Anonymous user id. Represents the end user of the application. When telemetry is sent from a service, the user context is about the user that initiated the operation in the service.
+Anonymous user ID. Represents the end user of the application. When telemetry is sent from a service, the user context is about the user that initiated the operation in the service.
-[Sampling](./sampling.md) is one of the techniques to minimize the amount of collected telemetry. Sampling algorithm attempts to either sample in or out all the correlated telemetry. Anonymous user id is used for sampling score generation. So anonymous user id should be a random enough value.
+[Sampling](./sampling.md) is one of the techniques to minimize the amount of collected telemetry. The sampling algorithm attempts to either sample in or out all the correlated telemetry. The anonymous user ID is used for sampling score generation, so it should be a sufficiently random value.
-Using anonymous user id to store user name is a misuse of the field. Use Authenticated user id.
+Using anonymous user ID to store user name is a misuse of the field. Use Authenticated user ID.
Max length: 128
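The randomness requirement above matters because the sampling decision is typically derived from a hash of the anonymous user ID. A minimal sketch of the principle (this is an illustrative hash, not the Application Insights algorithm):

```javascript
// Illustrative only: a simple deterministic hash mapped to [0, 100).
function samplingScore(anonymousUserId) {
  let hash = 0;
  for (const ch of anonymousUserId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // keep it in unsigned 32-bit range
  }
  return hash % 100;
}

// Every telemetry item carrying the same user ID gets the same score,
// so all correlated telemetry is sampled in or out together.
const score = samplingScore("a1b2c3d4");
const sampledIn = score < 25; // e.g. a 25% sampling rate
console.log(score, sampledIn);
```

If the IDs are not well distributed (for example, user names), the scores cluster and the sampled set is biased, which is why reusing this field for names is called out as a misuse.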
-## Authenticated user id
+## Authenticated user ID
-Authenticated user id. The opposite of anonymous user id, this field represents the user with a friendly name. This is only collected by default with the ASP.NET Framework SDK's [`AuthenticatedUserIdTelemetryInitializer`](https://github.com/microsoft/ApplicationInsights-dotnet/blob/develop/WEB/Src/Web/Web/AuthenticatedUserIdTelemetryInitializer.cs).
+Authenticated user ID. The opposite of anonymous user ID, this field represents the user with a friendly name. This is only collected by default with the ASP.NET Framework SDK's [`AuthenticatedUserIdTelemetryInitializer`](https://github.com/microsoft/ApplicationInsights-dotnet/blob/develop/WEB/Src/Web/Web/AuthenticatedUserIdTelemetryInitializer.cs).
Max length: 1024
-## Account id
+## Account ID
-In multi-tenant applications this is the account ID or name, which the user is acting with. Examples may be subscription ID for Azure portal or blog name for a blogging platform.
+In multi-tenant applications, this is the tenant account ID or name that the user is acting with. It is used for additional user segmentation when user ID and authenticated user ID are not sufficient. For example, a subscription ID for the Azure portal or the blog name for a blogging platform.
Max length: 1024
azure-monitor Javascript React Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-react-plugin.md
npm install @microsoft/applicationinsights-react-js @microsoft/applicationinsigh
Initialize a connection to Application Insights:

```javascript
-// AppInsights.js
+import React from 'react';
import { ApplicationInsights } from '@microsoft/applicationinsights-web';
-import { ReactPlugin } from '@microsoft/applicationinsights-react-js';
-import { createBrowserHistory } from 'history';
-
+import { ReactPlugin, withAITracking } from '@microsoft/applicationinsights-react-js';
+import { createBrowserHistory } from "history";
const browserHistory = createBrowserHistory({ basename: '' });
-const reactPlugin = new ReactPlugin();
-const appInsights = new ApplicationInsights({
+var reactPlugin = new ReactPlugin();
+var appInsights = new ApplicationInsights({
  config: {
    instrumentationKey: 'YOUR_INSTRUMENTATION_KEY_GOES_HERE',
    extensions: [reactPlugin],
const appInsights = new ApplicationInsights({
  }
});
appInsights.loadAppInsights();
-export { reactPlugin, appInsights };
```

Wrap your component with the higher-order component function to enable Application Insights on it:
class MyComponent extends React.Component {
export default withAITracking(reactPlugin, MyComponent);
```
+For `react-router v6`, or other scenarios where router history isn't exposed, the appInsights configuration option `enableAutoRouteTracking` can be used to automatically track route changes:
+
+```javascript
+var reactPlugin = new ReactPlugin();
+var appInsights = new ApplicationInsights({
+ config: {
+ instrumentationKey: 'YOUR_INSTRUMENTATION_KEY_GOES_HERE',
+ enableAutoRouteTracking: true,
+ extensions: [reactPlugin]
+ }
+});
+appInsights.loadAppInsights();
+```
+
## Configuration

| Name | Default | Description |
azure-monitor Proactive Performance Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/proactive-performance-diagnostics.md
Smart detection notifications are enabled by default. They are sent to users tha
![Smart Detection Settings](media/proactive-performance-diagnostics/smart_detection_configuration.png)
- * You can use the **unsubscribe** link in the smart detection email to stop receiving the email notifications.
+ * You can disable the default notification, and replace it with a specified list of emails.
Emails about smart detection performance anomalies are limited to one email per day per Application Insights resource. The email will be sent only if there is at least one new issue that was detected on that day. You won't get repeats of any message.
These diagnostic tools help you inspect the telemetry from your app:
Smart detection is automatic. But maybe you'd like to set up some more alerts?

* [Manually configured metric alerts](../alerts/alerts-log.md)
-* [Availability web tests](./monitor-web-app-availability.md)
+* [Availability web tests](./monitor-web-app-availability.md)
azure-monitor Rest Api Walkthrough https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/rest-api-walkthrough.md
This article shows you how to perform authentication so your code can use the [M
The Azure Monitor API makes it possible to programmatically retrieve the available default metric definitions, dimension values, and metric values. The data can be saved in a separate data store such as Azure SQL Database, Azure Cosmos DB, or Azure Data Lake. From there additional analysis can be performed as needed.
-Besides working with various metric data points, the Monitor API also makes it possible to list alert rules, view activity logs, and much more. For a full list of available operations, see the [Microsoft Azure Monitor REST API Reference](/rest/api/monitor/).
+Besides working with various metric data points, the Monitor API also makes it possible to list alert rules, view activity logs, and much more. For a full list of available operations, see the [Microsoft Azure Monitor REST API Reference](/rest/api/monitor/).
-## Authenticating Azure Monitor requests
-
-The first step is to authenticate the request.
+## Authenticating Azure Monitor requests
+
All the tasks executed against the Azure Monitor API use the Azure Resource Manager authentication model. Therefore, all requests must be authenticated with Azure Active Directory (Azure AD). One approach to authenticate the client application is to create an Azure AD service principal and retrieve the authentication (JWT) token. The following sample script demonstrates creating an Azure AD service principal via PowerShell. For a more detailed walk-through, refer to the documentation on [using Azure PowerShell to create a service principal to access resources](/powershell/azure/create-azure-service-principal-azureps). It is also possible to [create a service principal via the Azure portal](../../active-directory/develop/howto-create-service-principal-portal.md).
New-AzRoleAssignment -RoleDefinitionName Reader `
```
-To query the Azure Monitor API, the client application should use the previously created service principal to authenticate. The following example PowerShell script shows one approach, using the [Active Directory Authentication Library](../../active-directory/azuread-dev/active-directory-authentication-libraries.md) (ADAL) to obtain the JWT authentication token. The JWT token is passed as part of an HTTP Authorization parameter in requests to the Azure Monitor REST API.
+To query the Azure Monitor API, the client application should use the previously created service principal to authenticate. The following example PowerShell script shows one approach, using the [Microsoft Authentication Library (MSAL)](/azure/active-directory/develop/msal-overview) to obtain the authentication token.
```powershell
-$azureAdApplication = Get-AzADApplication -IdentifierUri "https://localhost/azure-monitor"
-
-$subscription = Get-AzSubscription -SubscriptionId $subscriptionId
-
-$clientId = $azureAdApplication.ApplicationId.Guid
-$tenantId = $subscription.TenantId
-$authUrl = "https://login.microsoftonline.com/${tenantId}"
-
-$AuthContext = [Microsoft.IdentityModel.Clients.ActiveDirectory.AuthenticationContext]$authUrl
-$cred = New-Object -TypeName Microsoft.IdentityModel.Clients.ActiveDirectory.ClientCredential -ArgumentList ($clientId, $pwd)
-
-$result = $AuthContext.AcquireTokenAsync("https://management.core.windows.net/", $cred).GetAwaiter().GetResult()
-
-# Build an array of HTTP header values
-$authHeader = @{
-'Content-Type'='application/json'
-'Accept'='application/json'
-'Authorization'=$result.CreateAuthorizationHeader()
+$ClientID = "{client_id}"
+$loginURL = "https://login.microsoftonline.com"
+$tenantdomain = "{tenant_id}"
+$CertPassWord = "{password_for_cert}"
+$certPath = "C:\temp\Certs\testCert_01.pfx"
+
+[string[]] $Scopes = "https://graph.microsoft.com/.default"
+
+Function Load-MSAL {
+ if ($PSVersionTable.PSVersion.Major -gt 5)
+ {
+ $core = $true
+ $foldername = "netcoreapp2.1"
+ }
+ else
+ {
+ $core = $false
+ $foldername = "net45"
+ }
+
+ # Download MSAL.Net module to a local folder if it does not exist there
+ if ( ! (Get-ChildItem $HOME/MSAL/lib/Microsoft.Identity.Client.* -erroraction ignore) ) {
+ install-package -Source nuget.org -ProviderName nuget -SkipDependencies Microsoft.Identity.Client -Destination $HOME/MSAL/lib -force -forcebootstrap | out-null
+ }
+
+ # Load the MSAL assembly -- needed once per PowerShell session
+ [System.Reflection.Assembly]::LoadFrom((Get-ChildItem $HOME/MSAL/lib/Microsoft.Identity.Client.*/lib/$foldername/Microsoft.Identity.Client.dll).fullname) | out-null
+ }
+
+Function Get-GraphAccessTokenFromMSAL {
+
+ Load-MSAL
+
+ $global:app = $null
+
+ $x509cert = [System.Security.Cryptography.X509Certificates.X509Certificate2] (GetX509Certificate_FromPfx -CertificatePath $certPath -CertificatePassword $CertPassWord)
+ write-host "Cert = {$x509cert}"
+
+ $ClientApplicationBuilder = [Microsoft.Identity.Client.ConfidentialClientApplicationBuilder]::Create($ClientID)
+ [void]$ClientApplicationBuilder.WithAuthority($("$loginURL/$tenantdomain"))
+ [void]$ClientApplicationBuilder.WithCertificate($x509cert)
+ $global:app = $ClientApplicationBuilder.Build()
+
+ [Microsoft.Identity.Client.AuthenticationResult] $authResult = $null
+ $AquireTokenParameters = $global:app.AcquireTokenForClient($Scopes)
+ try {
+ $authResult = $AquireTokenParameters.ExecuteAsync().GetAwaiter().GetResult()
+ }
+ catch {
+ $ErrorMessage = $_.Exception.Message
+ Write-Host $ErrorMessage
+ }
+
+ return $authResult
+}
+
+function GetX509Certificate_FromPfx($CertificatePath, $CertificatePassword){
+ #write-host "Path: '$CertificatePath'"
+
+ if(![System.IO.Path]::IsPathRooted($CertificatePath))
+ {
+ $LocalPath = Get-Location
+ $CertificatePath = "$LocalPath\$CertificatePath"
+ }
+
+ #Write-Host "Looking for '$CertificatePath'"
+
+ $certificate = [System.Security.Cryptography.X509Certificates.X509Certificate2]::new($CertificatePath, $CertificatePassword)
+
+ Return $certificate
}
+
+$myvar = Get-GraphAccessTokenFromMSAL
+Write-Host "Access Token: " $myvar.AccessToken
+
```
+Loading the certificate from a .pfx file in PowerShell can make it easier for an admin to manage certificates without having to install the certificate in the certificate store. However, this should not be done on a client machine, because the user could discover the file, its password, and the method used to authenticate. The client credentials flow is intended to run only in a back-end, service-to-service scenario where only admins have access to the machine.
After authenticating, queries can then be executed against the Azure Monitor REST API. There are two helpful queries:
For example, to retrieve the metric definitions for an Azure Storage account, th
```powershell
$request = "https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/azmon-rest-api-walkthrough/providers/Microsoft.Storage/storageAccounts/ContosoStorage/providers/microsoft.insights/metricDefinitions?api-version=2018-01-01"
-Invoke-RestMethod -Uri $request `
+
+Invoke-RestMethod -Uri $request `
  -Headers $authHeader `
  -Method Get `
  -OutFile ".\contosostorage-metricdef-results.json" `
azure-monitor Custom Logs Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/custom-logs-migrate.md
+
+ Title: Migrate from Data Collector API and custom fields-enabled tables to DCR-based custom logs
+description: Steps that you must perform when migrating from Data Collector API and custom fields-enabled tables to DCR-based custom logs.
+ Last updated : 01/06/2022
+# Migrate from Data Collector API and custom fields-enabled tables to DCR-based custom logs
+This article describes how to migrate from [Data Collector API](data-collector-api.md) or [custom fields](custom-fields.md) in Azure Monitor to [DCR-based custom logs](custom-logs-overview.md). It includes configuration required for tables in your Log Analytics workspace and applies to both [direct ingestion](custom-logs-overview.md) and [ingestion-time transformations](ingestion-time-transformations.md).
+
+> [!IMPORTANT]
+> You do not need to follow this article if you are defining your DCR-based custom logs using the Azure portal. This article only applies if you are using Resource Manager templates and the custom logs API.
+
+## Background
+To use a table with [direct ingestion](custom-logs-overview.md) and [ingestion-time transformations](ingestion-time-transformations.md), it must be configured to support these new features. When you complete the process described in this article, the following actions are taken:
+
+- The table will be reconfigured to enable all DCR-based custom logs features. This includes DCR and DCE support and management with the new Tables control plane.
+- Any previously defined custom fields will stop populating.
+- The Data Collector API will continue to work but won't create any new columns. Data will only populate into columns that were created prior to migration.
+- The schema and historic data are preserved and can be accessed the same way they were previously.
+
+## Applicable scenarios
+This article is only applicable if all of the following criteria apply:
+
+- You need to use the DCR-based custom logs functionality to send data to an existing table, preserving both schema and historical data in that table.
+- The table in question was either created using the Data Collector API, or has custom fields defined in it.
+- You want to migrate using the custom logs API instead of the Azure portal.
+
+If all of these conditions aren't true, then you can use DCR-based custom logs without following the procedure described here.
+
+## Migration procedure
+If the table that you're targeting with DCR-based custom logs falls under the criteria described above, the following strategy is required for a graceful migration:
+
+1. Configure your data collection rule (DCR) following procedures at [Send custom logs to Azure Monitor Logs using Resource Manager templates (preview)](tutorial-custom-logs-api.md) or [Add ingestion-time transformation to Azure Monitor Logs using Resource Manager templates (preview)](tutorial-ingestion-time-transformations-api.md).
+
+1. If using the DCR-based API, also [configure the data collection endpoint (DCE)](tutorial-custom-logs-api.md#create-data-collection-endpoint) and the agent or component that will be sending data to the API.
+
+1. Issue the following API call against your table. This call is idempotent, so there will be no effect if the table has already been migrated.
+
+ ```rest
+ POST /subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/microsoft.operationalinsights/workspaces/{workspaceName}/tables/{tableName}/migrate?api-version=2021-03-01-privatepreview
+ ```
+
+1. Discontinue use of the Data Collector API and start using the new custom logs API.
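The migration call in step 3 above targets a standard Azure Resource Manager endpoint. As a sketch, the request URL can be assembled like this (the GUID, resource group, workspace, and table names are hypothetical placeholders; sending the POST requires an ARM bearer token, shown only as a comment):

```javascript
// Hypothetical placeholder values -- substitute your own.
const subscriptionId = "00000000-0000-0000-0000-000000000000";
const resourceGroupName = "my-resource-group";
const workspaceName = "my-workspace";
const tableName = "MyTable_CL";

// Assemble the migrate endpoint shown in step 3.
const url = `https://management.azure.com/subscriptions/${subscriptionId}` +
  `/resourcegroups/${resourceGroupName}` +
  `/providers/microsoft.operationalinsights/workspaces/${workspaceName}` +
  `/tables/${tableName}/migrate?api-version=2021-03-01-privatepreview`;

console.log(url);
// POST it with an ARM token, e.g.:
// await fetch(url, { method: "POST", headers: { Authorization: `Bearer ${token}` } });
```

Because the operation is idempotent, retrying the same POST against an already-migrated table is harmless.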
+
+## Next steps
+
+- [Walk through a tutorial sending custom logs using the Azure portal.](tutorial-custom-logs.md)
+- [Walk through a tutorial sending custom logs using Resource Manager templates and REST API.](tutorial-custom-logs-api.md)
azure-monitor Custom Logs Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/custom-logs-overview.md
Last updated 01/06/2022
# Custom logs API in Azure Monitor Logs (Preview)
-With the DCR based custom logs API in Azure Monitor, you can send data to a Log Analytics workspace from any REST API client. This allows you to send data from virtually any source to [supported built-in tables](tables-feature-support.md) or to custom tables that you create. You can even extend the schema of built-in tables with custom columns.
+With the DCR based custom logs API in Azure Monitor, you can send data to a Log Analytics workspace from any REST API client. This allows you to send data from virtually any source to [supported built-in tables](#tables) or to custom tables that you create. You can even extend the schema of built-in tables with custom columns.
[!INCLUDE [Sign up for preview](../../../includes/azure-monitor-custom-logs-signup.md)]
For limits related to custom logs, see [Azure Monitor service limits](../service
## Next steps

- [Walk through a tutorial sending custom logs using the Azure portal.](tutorial-custom-logs.md)
-- [Walk through a tutorial sending custom logs using Resource Manager templates and REST API.](tutorial-custom-logs-api.md)
+- [Walk through a tutorial sending custom logs using Resource Manager templates and REST API.](tutorial-custom-logs-api.md)
azure-netapp-files Azure Netapp Files Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resource-limits.md
na Previously updated : 02/09/2022 Last updated : 03/02/2022

# Resource limits for Azure NetApp Files
The following table describes resource limits for Azure NetApp Files:
| Number of volumes per capacity pool | 500 | Yes |
| Number of snapshots per volume | 255 | No |
| Number of IPs in a VNet (including immediately peered VNets) accessing volumes in an Azure NetApp Files hosting VNet | 1000 | No |
-| Number of used IPs in a VNet (including immediately peered VNets) with Azure NetApp Files | 1000 | No |
| Minimum size of a single capacity pool | 4 TiB | No |
| Maximum size of a single capacity pool | 500 TiB | No |
| Minimum size of a single volume | 100 GiB | No |
azure-percept Voice Control Your Inventory Then Visualize With Power Bi Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/voice-control-your-inventory-then-visualize-with-power-bi-dashboard.md
customer can do.
## Next steps
-Check out the other tutorial under Advanced prototyping with Azure Percept section for your Azure Percept DK.
-
+Check out the tutorial [Create a people counting solution with Azure Percept Vision](./create-people-counting-solution-with-azure-percept-devkit-vision.md).
<!-- Remove all the comments in this template before you sign-off or merge to the
azure-resource-manager Bicep Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-resource.md
description: Describes the functions to use in a Bicep file to retrieve values a
Previously updated : 12/28/2021 Last updated : 03/02/2022

# Resource functions for Bicep
param subscriptionId string
param kvResourceGroup string
param kvName string
-resource kv 'Microsoft.KeyVault/vaults@2019-09-01' existing = {
+resource keyVault 'Microsoft.KeyVault/vaults@2019-09-01' existing = {
  name: kvName
  scope: resourceGroup(subscriptionId, kvResourceGroup)
}
module sql './sql.bicep' = {
  params: {
    sqlServerName: sqlServerName
    adminLogin: adminLogin
- adminPassword: kv.getSecret('vmAdminPassword')
+ adminPassword: keyVault.getSecret('vmAdminPassword')
  }
}
```
You can call a list function for any resource type with an operation that starts
The syntax for this function varies by the name of the list operation. The returned values also vary by operation. Bicep doesn't currently support completions and validation for `list*` functions.
-With **Bicep version 0.4.412 or later**, you call the list function by using the [accessor operator](operators-access.md#function-accessor). For example, `stg.listKeys()`.
+With **Bicep version 0.4.412 or later**, you call the list function by using the [accessor operator](operators-access.md#function-accessor). For example, `storageAccount.listKeys()`.
A [namespace qualifier](bicep-functions.md#namespaces-for-functions) isn't needed because the function is used with a resource type.
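The accessor syntax also works on a resource declared with the `existing` keyword, which is useful when the storage account is deployed outside the current file. A minimal sketch (the parameter name and output are illustrative, not part of the original article):

```bicep
param storageAccountName string

// Reference a storage account deployed elsewhere
resource storageAccount 'Microsoft.Storage/storageAccounts@2019-06-01' existing = {
  name: storageAccountName
}

// Call the list function through the accessor operator.
// For illustration only; avoid emitting secrets as outputs in real templates.
output firstKey string = storageAccount.listKeys().keys[0].value
```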
Other `list` functions have different return formats. To see the format of a fun
The following example deploys a storage account and then calls `listKeys` on that storage account. The key is used when setting a value for [deployment scripts](../templates/deployment-script-template.md).

```bicep
-resource stg 'Microsoft.Storage/storageAccounts@2019-06-01' = {
+resource storageAccount 'Microsoft.Storage/storageAccounts@2019-06-01' = {
  name: 'dscript${uniqueString(resourceGroup().id)}'
  location: location
  kind: 'StorageV2'
resource dScript 'Microsoft.Resources/deploymentScripts@2019-10-01-preview' = {
  properties: {
    azCliVersion: '2.0.80'
    storageAccountSettings: {
- storageAccountName: stg.name
- storageAccountKey: stg.listKeys().keys[0].value
+ storageAccountName: storageAccount.name
+ storageAccountKey: storageAccount.listKeys().keys[0].value
    }
    ...
  }
param accountSasProperties object {
} } ...
-sasToken: stg.listAccountSas('2021-04-01', accountSasProperties).accountSasToken
+sasToken: storageAccount.listAccountSas('2021-04-01', accountSasProperties).accountSasToken
```

### Implementations
Namespace: [az](bicep-functions.md#namespaces-for-functions).
The reference function is available in Bicep files, but typically you don't need it. Instead, use the symbolic name for the resource.
-The following example deploys a storage account. It uses the symbolic name `stg` for the storage account to return a property.
+The following example deploys a storage account. It uses the symbolic name `storageAccount` for the storage account to return a property.
```bicep
param storageAccountName string
-resource stg 'Microsoft.Storage/storageAccounts@2019-06-01' = {
+resource storageAccount 'Microsoft.Storage/storageAccounts@2019-06-01' = {
  name: storageAccountName
  location: 'eastus'
  kind: 'Storage'
resource stg 'Microsoft.Storage/storageAccounts@2019-06-01' = {
  }
}
-output storageEndpoint object = stg.properties.primaryEndpoints
+output storageEndpoint object = storageAccount.properties.primaryEndpoints
```

To get a property from an existing resource that isn't deployed in the template, use the `existing` keyword:
To get a property from an existing resource that isn't deployed in the template,
```bicep
param storageAccountName string
-resource stg 'Microsoft.Storage/storageAccounts@2019-06-01' existing = {
+resource storageAccount 'Microsoft.Storage/storageAccounts@2019-06-01' existing = {
  name: storageAccountName
}

// use later in template as often as needed
-output blobAddress string = stg.properties.primaryEndpoints.blob
+output blobAddress string = storageAccount.properties.primaryEndpoints.blob
```
-If you attempt to reference a resource that doesn't exist, you get the `NotFound` error and your deployment fails.
+To reference a resource that is nested inside a parent resource, use the [nested accessor](operators-access.md#nested-resource-accessor) (`::`). You only use this syntax when you're accessing the nested resource from outside of the parent resource.
+
+```bicep
+vNet1::subnet1.properties.addressPrefix
+```
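For context, a minimal sketch of the parent and nested resources this expression assumes (the names `vNet1` and `subnet1` and all property values are illustrative):

```bicep
resource vNet1 'Microsoft.Network/virtualNetworks@2020-06-01' = {
  name: 'examplevnet'
  location: 'eastus'
  properties: {
    addressSpace: {
      addressPrefixes: [
        '10.0.0.0/16'
      ]
    }
  }

  // Nested resource: the type segment is relative to the parent,
  // and the API version is inherited from the parent
  resource subnet1 'subnets' = {
    name: 'examplesubnet'
    properties: {
      addressPrefix: '10.0.0.0/24'
    }
  }
}

// Outside the parent resource, the nested resource is reached with ::
output subnetPrefix string = vNet1::subnet1.properties.addressPrefix
```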
-For more information, see [Reference resources](./compare-template-syntax.md#reference-resources) and the [JSON template reference function](../templates/template-functions-resource.md#reference).
+If you attempt to reference a resource that doesn't exist, you get the `NotFound` error and your deployment fails.
## resourceId
For example:
```bicep
param storageAccountName string
-resource stg 'Microsoft.Storage/storageAccounts@2019-06-01' = {
+resource storageAccount 'Microsoft.Storage/storageAccounts@2019-06-01' = {
  name: storageAccountName
  location: 'eastus'
  kind: 'Storage'
resource stg 'Microsoft.Storage/storageAccounts@2019-06-01' = {
  }
}
-output storageID string = stg.id
+output storageID string = storageAccount.id
```

To get the resource ID for a resource that isn't deployed in the Bicep file, use the `existing` keyword.
To get the resource ID for a resource that isn't deployed in the Bicep file, use
```bicep
param storageAccountName string
-resource stg 'Microsoft.Storage/storageAccounts@2019-06-01' existing = {
+resource storageAccount 'Microsoft.Storage/storageAccounts@2019-06-01' existing = {
  name: storageAccountName
}
-output storageID string = stg.id
+output storageID string = storageAccount.id
```

For more information, see the [JSON template resourceId function](../templates/template-functions-resource.md#resourceid).
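When a symbolic name isn't available, for example when building an ID for a resource in another resource group, the `resourceId` function can be called directly. A minimal sketch with illustrative parameter names:

```bicep
param storageAccountName string
param otherResourceGroup string

// ID of a storage account in the current resource group
output localId string = resourceId('Microsoft.Storage/storageAccounts', storageAccountName)

// ID of a storage account in a different resource group of the same subscription
output remoteId string = resourceId(otherResourceGroup, 'Microsoft.Storage/storageAccounts', storageAccountName)
```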
var roleDefinitionId = {
  }
}
-resource myRoleAssignment 'Microsoft.Authorization/roleAssignments@2018-09-01-preview' = {
+resource roleAssignment 'Microsoft.Authorization/roleAssignments@2018-09-01-preview' = {
  name: guid(resourceGroup().id, principalId, roleDefinitionId[builtInRoleType].id)
  properties: {
    roleDefinitionId: roleDefinitionId[builtInRoleType].id
param policyDefinitionID string = '0a914e76-4921-4c19-b460-a2d36003525a'
@description('Specifies the name of the policy assignment; can be user-defined, or use the idempotent name the defaultValue provides.')
param policyAssignmentName string = guid(policyDefinitionID, resourceGroup().name)
-resource myPolicyAssignment 'Microsoft.Authorization/policyAssignments@2019-09-01' = {
+resource policyAssignment 'Microsoft.Authorization/policyAssignments@2019-09-01' = {
  name: policyAssignmentName
  properties: {
    scope: subscriptionResourceId('Microsoft.Resources/resourceGroups', resourceGroup().name)
azure-resource-manager Compare Template Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/compare-template-syntax.md
description: Compares Azure Resource Manager templates developed with JSON and B
Previously updated : 01/21/2022 Last updated : 03/01/2022 # Comparing JSON and Bicep for templates
func()
To declare a parameter with a default value:

```bicep
-param demoParam string = 'Contoso'
+param orgName string = 'Contoso'
```

```json
"parameters": {
- "demoParam": {
+ "orgName": {
    "type": "string",
    "defaultValue": "Contoso"
  }
}
```
-To get a parameter value:
+To get a parameter value, use the name you defined:
```bicep
-demoParam
+name: orgName
```

```json
-[parameters('demoParam'))]
+"name": "[parameters('orgName')]"
```

## Variables
demoParam
To declare a variable:

```bicep
-var demoVar = 'example value'
+var description = 'example value'
```

```json
"variables": {
- "demoVar": "example value"
+ "description": "example value"
},
```
-To get a variable value:
+To get a variable value, use the name you defined:
```bicep
-demoVar
+workloadSetting: description
```

```json
-[variables('demoVar'))]
+"workloadSetting": "[variables('description')]"
```

## Strings
demoVar
To concatenate strings:

```bicep
-'${namePrefix}-vm'
+name: '${namePrefix}-vm'
```

```json
-[concat(parameters('namePrefix'), '-vm')]
+"name": "[concat(parameters('namePrefix'), '-vm')]"
```

## Logical operators
targetScope = 'subscription'
To declare a resource:

```bicep
-resource vm 'Microsoft.Compute/virtualMachines@2020-06-01' = {
+resource virtualMachine 'Microsoft.Compute/virtualMachines@2020-06-01' = {
  ...
}
```
resource vm 'Microsoft.Compute/virtualMachines@2020-06-01' = {
To conditionally deploy a resource:

```bicep
-resource vm 'Microsoft.Compute/virtualMachines@2020-06-01' = if(deployVM) {
+resource virtualMachine 'Microsoft.Compute/virtualMachines@2020-06-01' = if(deployVM) {
  ...
}
```
nic1.id
To iterate over items in an array or count:

```bicep
-[for storageName in storageAccounts: {
+[for storageName in storageAccountNames: {
  ...
}]
```
To iterate over items in an array or count:
```json
"copy": {
  "name": "storagecopy",
- "count": "[length(parameters('storageAccounts'))]"
+ "count": "[length(parameters('storageAccountNames'))]"
},
...
```
For Bicep, you can set an explicit dependency but this approach isn't recommende
The following shows a network interface with an implicit dependency on a network security group. It references the network security group with `netSecurityGroup.id`.

```bicep
-resource nsg 'Microsoft.Network/networkSecurityGroups@2020-06-01' = {
+resource netSecurityGroup 'Microsoft.Network/networkSecurityGroups@2020-06-01' = {
  ...
}
resource nic1 'Microsoft.Network/networkInterfaces@2020-06-01' = {
If you must set an explicit dependency, use:

```bicep
-dependsOn: [ stg ]
+dependsOn: [ storageAccount ]
```

```json
dependsOn: [ stg ]
To get a property from a resource in the template:

```bicep
-diagsAccount.properties.primaryEndpoints.blob
+storageAccount.properties.primaryEndpoints.blob
```

```json
-[reference(resourceId('Microsoft.Storage/storageAccounts', variables('diagStorageAccountName'))).primaryEndpoints.blob]
+[reference(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))).primaryEndpoints.blob]
```

To get a property from an existing resource that isn't deployed in the template:

```bicep
-resource stg 'Microsoft.Storage/storageAccounts@2019-06-01' existing = {
+resource storageAccount 'Microsoft.Storage/storageAccounts@2019-06-01' existing = {
  name: storageAccountName
}

// use later in template as often as needed
-stg.properties.primaryEndpoints.blob
+storageAccount.properties.primaryEndpoints.blob
```

```json
stg.properties.primaryEndpoints.blob
"[reference(resourceId('Microsoft.Storage/storageAccounts/', parameters('storageAccountName')), '2019-06-01').primaryEndpoints.blob]"
```
+In Bicep, use the [nested accessor](operators-access.md#nested-resource-accessor) (`::`) to get a property on a resource nested within a parent resource:
+
+```bicep
+VNet1::Subnet1.properties.addressPrefix
+```
+
+For JSON, use the reference function:
+
+```json
+[reference(resourceId('Microsoft.Network/virtualNetworks/subnets', variables('subnetName'))).properties.addressPrefix]
+```
+
## Outputs

To output a property from a resource in the template:
output hostname string = condition ? publicIP.properties.dnsSettings.fqdn : ''
}
```
-The Bicep ternary operator is the equivalent to the [`if` function](../templates/template-functions-logical.md#if) in an ARM template JSON, not the condition property. The ternary syntax has to evaluate to one value or the other. If the condition is false in the preceding samples, Bicep outputs a hostname with an empty string, but JSON outputs no values.
+The Bicep ternary operator is the equivalent to the [if function](../templates/template-functions-logical.md#if) in an ARM template JSON, not the condition property. The ternary syntax has to evaluate to one value or the other. If the condition is false in the preceding samples, Bicep outputs a hostname with an empty string, but JSON outputs no values.
## Code reuse
azure-resource-manager Manage Resource Groups Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resource-groups-portal.md
Last updated 03/26/2019
-# Manage Azure Resource Manager resource groups by using the Azure portal
+# Manage Azure resource groups by using the Azure portal
Learn how to use the [Azure portal](https://portal.azure.com) with [Azure Resource Manager](overview.md) to manage your Azure resource groups. For managing Azure resources, see [Manage Azure resources by using the Azure portal](manage-resources-portal.md).
For information about exporting templates, see [Single and multi-resource export
- To learn Azure Resource Manager, see [Azure Resource Manager overview](overview.md).
- To learn the Resource Manager template syntax, see [Understand the structure and syntax of Azure Resource Manager templates](../templates/syntax.md).
- To learn how to develop templates, see the [step-by-step tutorials](../index.yml).
-- To view the Azure Resource Manager template schemas, see [template reference](/azure/templates/).
+- To view the Azure Resource Manager template schemas, see [template reference](/azure/templates/).
azure-resource-manager Move Resource Group And Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-resource-group-and-subscription.md
To move to a new subscription, include a value for the `DestinationSubscriptionI
### Validate
-To test your move scenario without actually moving the resources, use the [az resource invoke-action](/cli/azure/resource#az_resource_invoke_action) command. Use this command only when you need to predetermine the results. To run this operation, you need the:
+To test your move scenario without actually moving the resources, use the [az resource invoke-action](/cli/azure/resource#az-resource-invoke-action) command. Use this command only when you need to predetermine the results. To run this operation, you need the:
* Resource ID of the source resource group
* Resource ID of the target resource group
If validation fails, you see an error message describing why the resources can't
### Move
-To move existing resources to another resource group or subscription, use the [az resource move](/cli/azure/resource#az_resource_move) command. Provide the resource IDs of the resources to move. The following example shows how to move several resources to a new resource group. In the `--ids` parameter, provide a space-separated list of the resource IDs to move.
+To move existing resources to another resource group or subscription, use the [az resource move](/cli/azure/resource#az-resource-move) command. Provide the resource IDs of the resources to move. The following example shows how to move several resources to a new resource group. In the `--ids` parameter, provide a space-separated list of the resource IDs to move.
```azurecli
webapp=$(az resource show -g OldRG -n ExampleSite --resource-type "Microsoft.Web/sites" --query id --output tsv)
azure-sql Doc Changes Updates Release Notes Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/doc-changes-updates-release-notes-whats-new.md
ms.devlang: Previously updated : 12/15/2021 Last updated : 03/02/2022 # What's new in Azure SQL Database? [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
The following table lists the features of Azure SQL Database that are currently
| [Query Store hints](/sql/relational-databases/performance/query-store-hints?view=azuresqldb-current&preserve-view=true) | Use query hints to optimize your query execution via the OPTION clause. |
| [SQL Analytics](../../azure-monitor/insights/azure-sql.md) | Azure SQL Analytics is an advanced cloud monitoring solution for monitoring performance of all of your Azure SQL databases at scale and across multiple subscriptions in a single view. Azure SQL Analytics collects and visualizes key performance metrics with built-in intelligence for performance troubleshooting. |
| [SQL insights](../../azure-monitor/insights/sql-insights-overview.md) | SQL insights is a comprehensive solution for monitoring any product in the Azure SQL family. SQL insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance. |
-| [Zone redundant configuration for general purpose tier](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview) | The zone redundant configuration feature utilizes [Azure Availability Zones](../../availability-zones/az-overview.md#availability-zones) to replicate databases across multiple physical locations within an Azure region. By selecting [zone redundancy](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview), you can make your general purpose databases and elastic pools resilient to a much larger set of failures, including catastrophic datacenter outages, without any changes to the application logic. The feature is currently only available in the general purpose tier. |
+| [Zone redundant configuration](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview) | The zone redundant configuration feature utilizes [Azure Availability Zones](../../availability-zones/az-overview.md#availability-zones) to replicate databases across multiple physical locations within an Azure region. By selecting [zone redundancy](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview), you can make your databases resilient to a much larger set of failures, including catastrophic datacenter outages, without any changes to the application logic. **The feature is currently in preview for the General Purpose and Hyperscale service tiers.** |
|||

## General availability (GA)
The following table lists the features of Azure SQL Database that have transitio
Learn about significant changes to the Azure SQL Database documentation.
+### March 2022
+
+| Changes | Details |
+| | |
+| **Hyperscale zone redundant configuration preview** | It's now possible to create new Hyperscale databases with zone redundancy to make your databases resilient to a much larger set of failures. This feature is currently in preview for the Hyperscale service tier. To learn more, see [Hyperscale zone redundancy](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview). |
+| **Hyperscale storage redundancy GA** | Choosing your storage redundancy for your databases in the Hyperscale service tier is now generally available. See [Configure backup storage redundancy](automated-backups-overview.md#configure-backup-storage-redundancy) to learn more.
+|||
+
### February 2022

| Changes | Details |
azure-sql High Availability Sla https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/high-availability-sla.md
Previously updated : 1/27/2022 Last updated : 03/02/2022 # High availability for Azure SQL Database and SQL Managed Instance
The zone-redundant version of the high availability architecture for the general
![Zone redundant configuration for general purpose](./media/high-availability-sla/zone-redundant-for-general-purpose.png)

> [!IMPORTANT]
-> Zone-redundant configuration is not available in SQL Managed Instance. In SQL Database this feature is only available when the Gen5 compute hardware is selected. Additionally, for serverless and provisioned general purpose tier, the zone-redundant configuration is only available in the following regions: East US, East US 2, West US 2, North Europe, West Europe, Southeast Asia, Australia East, Japan East, UK South, and France Central.
+> Zone-redundant configuration is not available in SQL Managed Instance. In SQL Database this feature is only available when the Gen5 hardware is selected. Additionally, for serverless and provisioned general purpose tier, the zone-redundant configuration is only available in the following regions: East US, East US 2, West US 2, North Europe, West Europe, Southeast Asia, Australia East, Japan East, UK South, and France Central.
> [!NOTE]
> General Purpose databases with a size of 80 vcores may experience performance degradation with zone-redundant configuration. Additionally, operations such as backup, restore, database copy, setting up Geo-DR relationships, and downgrading a zone-redundant database from Business Critical to General Purpose may experience slower performance for any single database larger than 1 TB. Please see our [latency documentation on scaling a database](single-database-scale.md) for more information.
By default, the cluster of nodes for the premium availability model is created i
Because the zone-redundant databases have replicas in different datacenters with some distance between them, the increased network latency may increase the commit time and thus impact the performance of some OLTP workloads. You can always return to the single-zone configuration by disabling the zone-redundancy setting. This process is an online operation similar to the regular service tier upgrade. At the end of the process, the database or pool is migrated from a zone-redundant ring to a single zone ring or vice versa. > [!IMPORTANT]
-> This feature is not available in SQL Managed Instance. In SQL Database, when using the Business Critical tier, zone-redundant configuration is only available when the Gen5 compute hardware is selected. For up to date information about the regions that support zone-redundant databases, see [Services support by region](../../availability-zones/az-region.md).
+> This feature is not available in SQL Managed Instance. In SQL Database, when using the Business Critical tier, zone-redundant configuration is only available when the Gen5 hardware is selected. For up to date information about the regions that support zone-redundant databases, see [Services support by region](../../availability-zones/az-region.md).
The zone-redundant version of the high availability architecture is illustrated by the following diagram:

![high availability architecture zone redundant](./media/high-availability-sla/zone-redundant-business-critical-service-tier.png)
-## Hyperscale service tier availability
+## Hyperscale service tier locally redundant availability
The Hyperscale service tier architecture is described in [Distributed functions architecture](./service-tier-hyperscale.md#distributed-functions-architecture) and is only currently available for SQL Database, not SQL Managed Instance.
Compute nodes in all Hyperscale layers run on Azure Service Fabric, which contro
For more information on high availability in Hyperscale, see [Database High Availability in Hyperscale](./service-tier-hyperscale.md#database-high-availability-in-hyperscale).
+## Hyperscale service tier zone redundant availability (Preview)
+
+Zone redundancy for the Azure SQL Database Hyperscale service tier is [now in public preview](https://aka.ms/zrhyperscale). Enabling this configuration ensures zone-level resiliency through replication across Availability Zones for all Hyperscale layers. By selecting zone-redundancy, you can make your Hyperscale databases resilient to a much larger set of failures, including catastrophic datacenter outages, without any changes to the application logic.
+
+Consider the following limitations:
+
+- Currently, only the following Azure regions are supported: UK South, Brazil South, West US 2, Japan East, North Europe, and Southeast Asia.
+- Zone redundant configuration can only be specified during database creation. This setting cannot be modified once the resource is provisioned. Use [Database copy](database-copy.md), [point-in-time restore](recovery-using-backups.md#point-in-time-restore), or create a [geo-replica](active-geo-replication-overview.md) to update the zone redundant configuration for an existing Hyperscale database. When using one of these update options, if the target database is in a different region than the source or if the database backup storage redundancy from the target differs from the source database, the [copy operation](database-copy.md#database-copy-for-azure-sql-hyperscale) will be a size of data operation.
+- Named replicas are not supported.
+- Only [zone-redundant backup](automated-backups-overview.md) is supported.
+- Only Gen5 hardware is supported.
+- [Geo-Restore](recovery-using-backups.md#geo-restore) is not currently supported.
+- Zone redundancy cannot currently be specified when migrating an existing database from another Azure SQL Database service tier to Hyperscale.
+
+> [!IMPORTANT]
+> At least 1 high availability compute replica and the use of zone-redundant backup storage are required for enabling the zone redundant configuration for Hyperscale.
+### Create a zone redundant Hyperscale database
+
+Use [Azure PowerShell](/powershell/azure/install-az-ps) or the [Azure CLI](/cli/azure/update-azure-cli) to create a zone redundant Hyperscale database. Confirm you have the latest version of the API to ensure support for any recent changes.
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+Specify the `-ZoneRedundant` parameter to enable zone redundancy for your Hyperscale database by using Azure PowerShell. The database must have at least 1 high availability replica and zone-redundant backup storage must be specified.
+
+To enable zone redundancy using Azure PowerShell, use the following example command:
+
+```powershell
+New-AzSqlDatabase -ResourceGroupName "ResourceGroup01" -ServerName "Server01" -DatabaseName "Database01" `
+ -Edition "Hyperscale" -HighAvailabilityReplicaCount 1 -ZoneRedundant -BackupStorageRedundancy Zone
+```
+# [Azure CLI](#tab/azure-cli)
+
+Specify the `--zone-redundant` parameter to enable zone redundancy for your Hyperscale database by using the Azure CLI. The database must have at least 1 high availability replica and zone-redundant backup storage.
+
+To enable zone redundancy using the Azure CLI, use the following example command:
+
+```azurecli
+az sql db create -g mygroup -s myserver -n mydb -e Hyperscale -f Gen5 --ha-replicas 1 --zone-redundant --backup-storage-redundancy Zone
+```
+
+* * *
+
+### Create a zone redundant Hyperscale database by creating a geo-replica
+
+To make an existing Hyperscale database zone redundant, use Azure PowerShell or the Azure CLI to create a zone redundant Hyperscale database using active geo-replication. The geo-replica can be in the same or different region as the existing Hyperscale database.
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+Specify the `-ZoneRedundant` parameter to enable zone redundancy for your Hyperscale database secondary. The secondary database must have at least 1 high availability replica and zone-redundant backup storage must be specified.
+
+To create your zone redundant database using Azure PowerShell, use the following example command:
+
+```powershell
+New-AzSqlDatabaseSecondary -ResourceGroupName "myResourceGroup" -ServerName $sourceserver -DatabaseName "databaseName" -PartnerResourceGroupName "myPartnerResourceGroup" -PartnerServerName $targetserver -PartnerDatabaseName "zoneRedundantCopyOfMySampleDatabase" -ZoneRedundant -BackupStorageRedundancy Zone -HighAvailabilityReplicaCount 1
+```
+# [Azure CLI](#tab/azure-cli)
+
+Specify the `--zone-redundant` parameter to enable zone redundancy for your Hyperscale database secondary. The secondary database must have at least 1 high availability replica and zone-redundant backup storage.
+
+To enable zone redundancy using the Azure CLI, use the following example command:
+
+```azurecli
+az sql db replica create -g mygroup -s myserver -n originalDb --partner-server newDb --ha-replicas 1 --zone-redundant --backup-storage-redundancy Zone
+```
+
+* * *
## Accelerated Database Recovery (ADR)
azure-sql Resource Limits Vcore Single Databases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/resource-limits-vcore-single-databases.md
Previously updated : 01/18/2022 Last updated : 03/02/2022 # Resource limits for single databases using the vCore purchasing model [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Max concurrent workers|200|400|600|800|1000|1200|1400|
|Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|30,000|30,000|
|Secondary replicas|0-4|0-4|0-4|0-4|0-4|0-4|0-4|
-|Multi-AZ|N/A|N/A|N/A|N/A|N/A|N/A|N/A|
+|Multi-AZ|[Available in preview](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview)|
|Read Scale-out|Yes|Yes|Yes|Yes|Yes|Yes|Yes|
|Backup storage retention|7 days|7 days|7 days|7 days|7 days|7 days|7 days|
|||
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Max concurrent workers|1600|1800|2000|2400|3200|4000|8000|
|Max concurrent sessions|30,000|30,000|30,000|30,000|30,000|30,000|30,000|
|Secondary replicas|0-4|0-4|0-4|0-4|0-4|0-4|0-4|
-|Multi-AZ|N/A|N/A|N/A|N/A|N/A|N/A|N/A|
+|Multi-AZ|[Available in preview](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview)|
|Read Scale-out|Yes|Yes|Yes|Yes|Yes|Yes|Yes|
|Backup storage retention|7 days|7 days|7 days|7 days|7 days|7 days|7 days|
|||
azure-sql Service Tier Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/service-tier-hyperscale.md
Previously updated : 2/2/2022 Last updated : 03/02/2022 # Hyperscale service tier
vCore resource limits are listed in the following articles, please be sure to up
The vCore-based service tiers are differentiated based on database availability and storage type, performance, and maximum storage size, as described in the following table:
-|| **General Purpose** | **Hyperscale** | **Business Critical** |
+|| **General Purpose** | **Hyperscale** | **Business Critical** |
|::|::|::|::|
-| **Best for** |Offers budget oriented balanced compute and storage options.|Most business workloads. Autoscaling storage size up to 100 TB,fast vertical and horizontal compute scaling, fast database restore.|OLTP applications with high transaction rate and low IO latency. Offers highest resilience to failures and fast failovers using multiple synchronously updated replicas.|
-| **Resource type** |SQL Database / SQL Managed Instance | Single database | SQL Database / SQL Managed Instance |
-| **Compute size** | 1 to 80 vCores | 1 to 80 vCores<sup>1</sup>| 1 to 80 vCores |
-| **Storage type** | Premium remote storage (per instance) | De-coupled storage with local SSD cache (per instance) | Super-fast local SSDstorage (per instance) |
+| **Best for** | Offers budget oriented balanced compute and storage options. | Most business workloads. Autoscaling storage size up to 100 TB, fast vertical and horizontal compute scaling, fast database restore. | OLTP applications with high transaction rate and low IO latency. Offers highest resilience to failures and fast failovers using multiple synchronously updated replicas. |
+| **Resource type** | SQL Database / SQL Managed Instance | Single database | SQL Database / SQL Managed Instance |
+| **Compute size** | 1 to 80 vCores | 1 to 80 vCores<sup>1</sup> | 1 to 80 vCores |
+| **Storage type** | Premium remote storage (per instance) | De-coupled storage with local SSD cache (per instance) | Super-fast local SSD storage (per instance)|
| **Storage size**<sup>1</sup> | 5 GB – 4 TB | Up to 100 TB | 5 GB – 4 TB |
-| **IOPS** | 500 IOPS per vCore with 7000 maximum IOPS | Hyperscale is a multi-tiered architecture with caching at multiplelevels. Effective IOPS will depend on the workload. | 5000 IOPS with 200,000 maximum IOPS|
-|**Availability**| 1 replica, no Read Scale-out, zone-redundant HA (preview), no local cache | Multiple replicas, up to 4 Read Scale-out, partiallocal cache | 3 replicas, 1 Read Scale-out, zone-redundant HA, full local storage |
-|**Backups** | A choice of geo-redundant, zone-redundant, or locally-redundant backup storage, 1-35 day retention (default 7 days) | A choice of geo-redundant, zone-redundant, or locally-redundant backup storage, 7 day retention. | A choice of geo-redundant,zone-redundant, or locally-redundant backup storage, 1-35 day retention (default 7 days) |
+| **IOPS** | 500 IOPS per vCore with 7000 maximum IOPS | Hyperscale is a multi-tiered architecture with caching at multiple levels. Effective IOPS will depend on the workload. | 5000 IOPS with 200,000 maximum IOPS |
+|**Availability** | 1 replica, no Read Scale-out, zone-redundant HA (preview), no local cache | Multiple replicas, up to 4 Read Scale-out, zone-redundant HA (preview), partial local cache | 3 replicas, 1 Read Scale-out, zone-redundant HA, full local storage |
+| **Backups** | A choice of geo-redundant, zone-redundant, or locally-redundant backup storage, 1-35 day retention (default 7 days) | A choice of geo-redundant, zone-redundant, or locally-redundant backup storage, 7 day retention. | A choice of geo-redundant, zone-redundant, or locally-redundant backup storage, 1-35 day retention (default 7 days) |
+|||||
<sup>1</sup> Elastic pools are not supported in the Hyperscale service tier.

## Distributed functions architecture

Unlike traditional database engines that have centralized all of the data management functions in one location/process (even so-called distributed databases in production today have multiple copies of a monolithic data engine), a Hyperscale database separates the query processing engine, where the semantics of various data engines diverge, from the components that provide long-term storage and durability for the data. In this way, the storage capacity can be smoothly scaled out as far as needed (initial target is 100 TB). High-availability and named replicas share the same storage components, so no data copy is required to spin up a new replica.
## Database high availability in Hyperscale
-As in all other service tiers, Hyperscale guarantees data durability for committed transactions regardless of compute replica availability. The extent of downtime due to the primary replica becoming unavailable depends on the type of failover (planned vs. unplanned), and on the presence of at least one high-availability replica. In a planned failover (i.e. a maintenance event), the system either creates the new primary replica before initiating a failover, or uses an existing high-availability replica as the failover target. In an unplanned failover (i.e. a hardware failure on the primary replica), the system uses a high-availability replica as a failover target if one exists, or creates a new primary replica from the pool of available compute capacity. In the latter case, downtime duration is longer due to extra steps required to create the new primary replica.
+As in all other service tiers, Hyperscale guarantees data durability for committed transactions regardless of compute replica availability. The extent of downtime due to the primary replica becoming unavailable depends on the type of failover (planned vs. unplanned), [whether zone redundancy is configured](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview), and on the presence of at least one high-availability replica. In a planned failover (i.e. a maintenance event), the system either creates the new primary replica before initiating a failover, or uses an existing high-availability replica as the failover target. In an unplanned failover (i.e. a hardware failure on the primary replica), the system uses a high-availability replica as a failover target if one exists, or creates a new primary replica from the pool of available compute capacity. In the latter case, downtime duration is longer due to extra steps required to create the new primary replica.
For Hyperscale SLA, see [SLA for Azure SQL Database](https://azure.microsoft.com/support/legal/sla/azure-sql-database).
azure-sql Service Tiers Sql Database Vcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/service-tiers-sql-database-vcore.md
Previously updated : 02/02/2022 Last updated : 03/02/2022

# vCore purchasing model - Azure SQL Database
For greater details, review resource limits for [logical server](resource-limits
|**Use case**|**General Purpose**|**Business Critical**|**Hyperscale**|
|||||
|**Best for**|Most business workloads. Offers budget-oriented, balanced, and scalable compute and storage options. |Offers business applications the highest resilience to failures by using several isolated replicas, and provides the highest I/O performance per database replica.|Most business workloads with highly scalable storage and read-scale requirements. Offers higher resilience to failures by allowing configuration of more than one isolated database replica. |
-|**Availability**|1 replica, no read-scale replicas, <br/>zone-redundant high availability (HA) (preview)|3 replicas, 1 [read-scale replica](read-scale-out.md),<br/>zone-redundant high availability (HA)|
+|**Availability**|1 replica, no read-scale replicas, <br/>zone-redundant high availability (HA) (preview)|3 replicas, 1 [read-scale replica](read-scale-out.md),<br/>zone-redundant high availability (HA)|zone-redundant high availability (HA) (preview)|
|**Pricing/billing** | [vCore, reserved storage, and backup storage](https://azure.microsoft.com/pricing/details/sql-database/single/) are charged. <br/>IOPS is not charged. |[vCore, reserved storage, and backup storage](https://azure.microsoft.com/pricing/details/sql-database/single/) are charged. <br/>IOPS is not charged. | [vCore for each replica and used storage](https://azure.microsoft.com/pricing/details/sql-database/single/) are charged. <br/>IOPS not yet charged. |
|**Discount models**| [Reserved instances](reserved-capacity-overview.md)<br/>[Azure Hybrid Benefit](../azure-hybrid-benefit.md) (not available on dev/test subscriptions)<br/>[Enterprise](https://azure.microsoft.com/offers/ms-azr-0148p/) and [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0023p/) Dev/Test subscriptions|[Reserved instances](reserved-capacity-overview.md)<br/>[Azure Hybrid Benefit](../azure-hybrid-benefit.md) (not available on dev/test subscriptions)<br/>[Enterprise](https://azure.microsoft.com/offers/ms-azr-0148p/) and [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0023p/) Dev/Test subscriptions | [Azure Hybrid Benefit](../azure-hybrid-benefit.md) (not available on dev/test subscriptions)<br/>[Enterprise](https://azure.microsoft.com/offers/ms-azr-0148p/) and [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0023p/) Dev/Test subscriptions|
| | | |
azure-sql Data Virtualization Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/data-virtualization-overview.md
+
+ Title: Data virtualization
+
+description: Learn about data virtualization capabilities of Azure SQL Managed Instance
+ms.devlang:
+ Last updated : 03/02/2022
+# Data virtualization with Azure SQL Managed Instance (Preview)
+
+Azure SQL Managed Instance enables you to execute T-SQL queries that read data from files stored in Azure Data Lake Storage Gen2 or Azure Blob Storage, and to combine it with locally stored relational data via joins. This way you can transparently access external data while allowing it to stay in its original format and location, using the concept of data virtualization.
+
+## Overview
+
+There are two ways of querying external files, intended for different scenarios:
+
+- OPENROWSET syntax – optimized for ad-hoc querying of files. Typically used to quickly explore the content and the structure of a new set of files.
+- External tables – optimized for repetitive querying of files using identical syntax as if data were stored locally in the database. External tables require a few more preparation steps than the first option, but they allow more control over data access. They're typically used in analytical workloads and for reporting.
+
+The directly supported file formats are parquet and delimited text (CSV). The JSON file format is supported indirectly: specify the CSV file format, and queries return every document as a separate row. Rows can be further parsed using JSON_VALUE and OPENJSON.
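+
+As a sketch of that indirect JSON approach, each line of a JSON-document-per-line file can be read as a single column and then parsed with OPENJSON. The data source name DemoJsonDataSource, the file documents.json, and the `$.country_name` property are hypothetical, and this assumes the CSV field terminator and quote options are available in your environment:
+
+```sql
+--Sketch only: DemoJsonDataSource and documents.json are hypothetical.
+--FIELDTERMINATOR and FIELDQUOTE are set to a character that doesn't occur
+--in the data, so each line is returned as one NVARCHAR(MAX) column.
+SELECT
+    JSON_VALUE(filerows.doc, '$.country_name') AS country_name,
+    j.[key], j.[value]
+FROM OPENROWSET(
+    BULK 'documents.json',
+    DATA_SOURCE = 'DemoJsonDataSource',
+    FORMAT = 'CSV',
+    FIELDTERMINATOR = '0x0b',
+    FIELDQUOTE = '0x0b'
+) WITH (doc NVARCHAR(MAX)) AS filerows
+CROSS APPLY OPENJSON(filerows.doc) AS j;
+```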
+
+The location of the file(s) to be queried needs to be provided in a specific format, with the location prefix corresponding to the type of the external source and the endpoint/protocol used:
+
+```sql
+--Blob Storage endpoint
+abs://<container>@<storage_account>.blob.core.windows.net/<path>/<file_name>.parquet
+
+--Data Lake endpoint
+adls://<container>@<storage_account>.dfs.core.windows.net/<path>/<file_name>.parquet
+```
+
+> [!IMPORTANT]
+> Usage of the generic https:// prefix is discouraged and will be disabled in the future. Make sure you use endpoint-specific prefixes to avoid interruptions.
+
+The feature needs to be explicitly enabled before using it. Run the following commands to enable the data virtualization capabilities:
+
+```sql
+exec sp_configure 'polybase_enabled', 1;
+go
+reconfigure;
+go
+```
+
+If you're new to data virtualization and want to quickly test functionality, start by querying publicly available data sets in [Azure Open Datasets](https://docs.microsoft.com/azure/open-datasets/dataset-catalog), like the [Bing COVID-19 dataset](https://docs.microsoft.com/azure/open-datasets/dataset-bing-covid-19?tabs=azure-storage), which allows anonymous access:
+
+- Bing COVID-19 dataset - parquet: abs://public@pandemicdatalake.blob.core.windows.net/curated/covid-19/bing_covid-19_data/latest/bing_covid-19_data.parquet
+- Bing COVID-19 dataset - CSV: abs://public@pandemicdatalake.blob.core.windows.net/curated/covid-19/bing_covid-19_data/latest/bing_covid-19_data.csv
+
+Once your first queries execute successfully, you may want to switch to private data sets that require configuring specific access rights or firewall rules.
+
+To access a private location, you need to authenticate to the storage account using a Shared Access Signature (SAS) key with proper access permissions and validity period. The SAS key isn't provided directly in each query. Instead, it's used to create a database-scoped credential, which is in turn provided as a parameter of an external data source.
+
+All the concepts outlined so far are described in detail in the following sections.
+
+## External data source
+
+An external data source is an abstraction intended for easier management of file locations across multiple queries, and for referencing authentication parameters encapsulated in a database-scoped credential.
+
+Public locations are described in an external data source by providing the file location path:
+
+```sql
+CREATE EXTERNAL DATA SOURCE DemoPublicExternalDataSource
+WITH (
+ LOCATION = 'abs://public@pandemicdatalake.blob.core.windows.net/curated/covid-19/bing_covid-19_data/latest'
+-- LOCATION = 'abs://<container>@<storage_account>.blob.core.windows.net/<path>'
+)
+```
+
+For private locations, in addition to the path, you also need to provide a reference to a credential:
+
+```sql
+-- Step0 (optional): Create master key if it doesn't exist in the database:
+-- CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<Put Some Very Strong Password Here>'
+-- GO
+
+--Step1: Create database-scoped credential (requires database master key to exist):
+CREATE DATABASE SCOPED CREDENTIAL [DemoCredential]
+WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
+SECRET = '<your SAS key without leading "?" mark>';
+GO
+
+--Step2: Create external data source pointing to the file path, and referencing database-scoped credential:
+CREATE EXTERNAL DATA SOURCE DemoPrivateExternalDataSource
+WITH (
+ LOCATION = 'abs://<container>@<storage_account>.blob.core.windows.net/<path>',
+ CREDENTIAL = [DemoCredential]
+)
+```
+
+## Query data sources using OPENROWSET
+[OPENROWSET](https://docs.microsoft.com/sql/t-sql/functions/openrowset-transact-sql) syntax enables instant, ad-hoc querying with a minimal number of required database objects. The DATA_SOURCE parameter value is automatically prepended to the BULK parameter to form the full path to the file. The format of the file also needs to be provided:
+
+```sql
+SELECT TOP 10 *
+FROM OPENROWSET(
+ BULK 'bing_covid-19_data.parquet',
+ DATA_SOURCE = 'DemoPublicExternalDataSource',
+ FORMAT = 'parquet'
+) AS filerows
+```
+
+### Querying multiple files and folders
+While the OPENROWSET command in the previous example queried a single file, it can also query multiple files or folders by using wildcards in the BULK path.
+Here's an example using [NYC yellow taxi trip records open data set](https://docs.microsoft.com/azure/open-datasets/dataset-taxi-yellow):
+
+```sql
+--Query all files with .parquet extension in folders matching name pattern:
+SELECT TOP 10 *
+FROM OPENROWSET(
+ BULK 'taxi/year=*/month=*/*.parquet',
+ DATA_SOURCE = 'NYCTaxiDemoDataSource',--You need to create the data source first
+ FORMAT = 'parquet'
+) AS filerows
+```
+When you're querying multiple files or folders, all files accessed with a single OPENROWSET must have the same structure (the same number of columns and the same data types). Folders can't be traversed recursively.
+
+### Schema inference
+The automatic schema inference helps you quickly write queries and explore data without knowing file schemas, as seen in previous sample scripts.
+
+The cost of the convenience is that inferred data types may be larger than the actual data types, affecting the performance of queries. This happens when there isn't enough information in the source files to make sure the appropriate data type is used. For example, parquet files don't contain metadata about maximum character column length, so the instance infers it as varchar(8000).
+
+> [!NOTE]
+> Schema inference works only with files in the parquet format.
+
+You can use the sp_describe_first_result_set stored procedure to check the resulting data types of your query:
+```sql
+EXEC sp_describe_first_result_set N'
+ SELECT
+ vendor_id, pickup_datetime, passenger_count
+ FROM
+ OPENROWSET(
+ BULK ''taxi/*/*/*'',
+ DATA_SOURCE = ''NYCTaxiDemoDataSource'',
+ FORMAT=''parquet''
+ ) AS nyc';
+```
+
+Once you know the data types, you can specify them using the WITH clause to improve performance:
+```sql
+SELECT TOP 100
+ vendor_id, pickup_datetime, passenger_count
+FROM
+OPENROWSET(
+ BULK 'taxi/*/*/*',
+ DATA_SOURCE = 'NYCTaxiDemoDataSource',
+ FORMAT='PARQUET'
+ )
+WITH (
+vendor_id varchar(4), -- we're using a length of 4 instead of the inferred 8000
+pickup_datetime datetime2,
+passenger_count int
+) AS nyc;
+```
+
+For CSV files, the schema can't be automatically determined, so you need to explicitly specify columns using the WITH clause:
+
+```sql
+SELECT TOP 10 *
+FROM OPENROWSET(
+ BULK 'population/population.csv',
+ DATA_SOURCE = 'PopulationDemoDataSourceCSV',
+ FORMAT = 'CSV')
+WITH (
+ [country_code] VARCHAR (5) COLLATE Latin1_General_BIN2,
+ [country_name] VARCHAR (100) COLLATE Latin1_General_BIN2,
+ [year] smallint,
+ [population] bigint
+) AS filerows
+```
+
+### File metadata functions
+When querying multiple files or folders, you can use the filepath and filename functions to read file metadata and get the part of the path, or the full path and name, of the file that a row in the result set originates from:
+```sql
+--Query all files and project file path and file name information for each row:
+SELECT TOP 10 filerows.filepath(1) as [Year_Folder], filerows.filepath(2) as [Month_Folder],
+filerows.filename() as [File_name], filerows.filepath() as [Full_Path], *
+FROM OPENROWSET(
+ BULK 'taxi/year=*/month=*/*.parquet',
+ DATA_SOURCE = 'NYCTaxiDemoDataSource',
+ FORMAT = 'parquet') AS filerows
+--List all paths:
+SELECT DISTINCT filerows.filepath(1) as [Year_Folder], filerows.filepath(2) as [Month_Folder]
+FROM OPENROWSET(
+ BULK 'taxi/year=*/month=*/*.parquet',
+ DATA_SOURCE = 'NYCTaxiDemoDataSource',
+ FORMAT = 'parquet') AS filerows
+```
+
+When called without a parameter, the filepath function returns the file path that the row originates from. When DATA_SOURCE is used in OPENROWSET, it returns the path relative to the DATA_SOURCE; otherwise, it returns the full file path.
+
+When called with a parameter, it returns the part of the path that matches the wildcard at the position specified in the parameter. For example, a parameter value of 1 returns the part of the path that matches the first wildcard.
+
+The filepath function can also be used for filtering and aggregating rows:
+```sql
+SELECT
+ r.filepath() AS filepath
+ ,r.filepath(1) AS [year]
+ ,r.filepath(2) AS [month]
+ ,COUNT_BIG(*) AS [rows]
+FROM OPENROWSET(
+ BULK 'taxi/year=*/month=*/*.parquet',
+DATA_SOURCE = 'NYCTaxiDemoDataSource',
+FORMAT = 'parquet'
+ ) AS r
+WHERE
+ r.filepath(1) IN ('2017')
+ AND r.filepath(2) IN ('10', '11', '12')
+GROUP BY
+ r.filepath()
+ ,r.filepath(1)
+ ,r.filepath(2)
+ORDER BY
+ filepath;
+```
+
+### Creating view on top of OPENROWSET
+You can create and use views to wrap OPENROWSET so that the underlying query is easy to reuse:
+```sql
+CREATE VIEW TaxiRides AS
+SELECT *
+FROM OPENROWSET(
+ BULK 'taxi/year=*/month=*/*.parquet',
+ DATA_SOURCE = 'NYCTaxiDemoDataSource',
+ FORMAT = 'parquet'
+) AS filerows
+```
+
+It's also convenient to add columns with file location data to a view, using the filepath function, for easier and more performant filtering. This can reduce the number of files and the amount of data a query on top of the view needs to read and process when it filters by any of those columns:
+```sql
+CREATE VIEW TaxiRides AS
+SELECT *
+ ,filerows.filepath(1) AS [year]
+ ,filerows.filepath(2) AS [month]
+FROM OPENROWSET(
+ BULK 'taxi/year=*/month=*/*.parquet',
+ DATA_SOURCE = 'NYCTaxiDemoDataSource',
+ FORMAT = 'parquet'
+) AS filerows
+```
+
+Views also enable reporting and analytic tools like Power BI to consume results of OPENROWSET.
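+
+For example, a query over the TaxiRides view from the previous snippet can filter on the filepath-derived columns, so only files from the matching folders need to be read (a sketch, assuming the view with the [year] and [month] columns defined above):
+
+```sql
+--Filter on folder-derived columns so only matching files are read:
+SELECT [month], COUNT_BIG(*) AS rides
+FROM TaxiRides
+WHERE [year] = '2017'
+  AND [month] IN ('10', '11', '12')
+GROUP BY [month]
+ORDER BY [month];
+```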
+
+## External tables
+External tables encapsulate access to the files, making the querying experience almost identical to querying local relational data stored in user tables. Creating an external table requires external data source and external file format objects to exist:
+
+```sql
+--Create external file format
+CREATE EXTERNAL FILE FORMAT DemoFileFormat
+WITH (
+ FORMAT_TYPE=PARQUET
+)
+GO
+
+--Create external table:
+CREATE EXTERNAL TABLE tbl_TaxiRides(
+ vendor_id VARCHAR(100) COLLATE Latin1_General_BIN2,
+ pickup_datetime DATETIME2,
+ dropoff_datetime DATETIME2,
+ passenger_count INT,
+ trip_distance FLOAT,
+ fare_amount FLOAT,
+ extra FLOAT,
+ mta_tax FLOAT,
+ tip_amount FLOAT,
+ tolls_amount FLOAT,
+ improvement_surcharge FLOAT,
+ total_amount FLOAT
+)
+WITH (
+ LOCATION = 'taxi/year=*/month=*/*.parquet',
+ DATA_SOURCE = DemoDataSource,
+ FILE_FORMAT = DemoFileFormat
+);
+GO
+```
+
+Once the external table is created, you can query it just like any other table:
+```sql
+SELECT TOP 10 *
+FROM tbl_TaxiRides
+```
+
+Just like OPENROWSET, external tables allow querying multiple files and folders by using wildcards. Schema inference and filepath/filename functions aren't supported with external tables.
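+
+Because an external table behaves like a local table, ordinary T-SQL constructs such as filtering, aggregation, and joins with local tables work against it. A minimal sketch, using the tbl_TaxiRides table created above:
+
+```sql
+--Aggregate external data just like a local table:
+SELECT passenger_count, AVG(total_amount) AS avg_total_amount
+FROM tbl_TaxiRides
+WHERE trip_distance > 0
+GROUP BY passenger_count
+ORDER BY passenger_count;
+```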
+
+## Performance considerations
+There's no hard limit on the number of files or the amount of data that can be queried, but query performance will depend on the amount of data, the data format, and the complexity of queries and joins.
+
+Collecting statistics on your external data is one of the most important things you can do for query optimization. The more the instance knows about your data, the faster it can execute queries.
+
+### OPENROWSET statistics
+Single-column statistics for an OPENROWSET path can be created using the sp_create_openrowset_statistics stored procedure, by passing a SELECT query with a single column as a parameter:
+```sql
+EXEC sys.sp_create_openrowset_statistics N'
+SELECT pickup_datetime
+FROM OPENROWSET(
+ BULK ''abs://public@pandemicdatalake.blob.core.windows.net/curated/covid-19/bing_covid-19_data/latest/*.parquet'',
+ FORMAT = ''parquet'') AS filerows
+'
+```
+
+By default, the instance uses 100% of the data provided in the dataset to create statistics. You can optionally specify a sample size as a percentage using the TABLESAMPLE option. To create single-column statistics for multiple columns, execute the stored procedure for each of the columns. You can't create multi-column statistics for an OPENROWSET path.
+
+To update existing statistics, drop them first using the sp_drop_openrowset_statistics stored procedure, and then recreate them:
+```sql
+EXEC sys.sp_drop_openrowset_statistics N'
+SELECT pickup_datetime
+FROM OPENROWSET(
+ BULK ''abs://public@pandemicdatalake.blob.core.windows.net/curated/covid-19/bing_covid-19_data/latest/*.parquet'',
+ FORMAT = ''parquet'') AS filerows
+'
+```
+
+### External table statistics
+Syntax for creating stats on external tables resembles the one used for ordinary user tables. To create statistics on a column, provide a name for the statistics object and the name of the column:
+```sql
+CREATE STATISTICS sVendor
+ON tbl_TaxiRides (vendor_id)
+WITH FULLSCAN, NORECOMPUTE
+```
+
+The WITH options are mandatory. For the sample size, the allowed options are FULLSCAN and SAMPLE n percent.
+To create single-column statistics for multiple columns, run the CREATE STATISTICS statement for each of the columns. You can't create multi-column statistics.
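+
+For example, a sampled statistics object on a larger column might look like this (a sketch, using the tbl_TaxiRides external table from the previous section):
+
+```sql
+--Sampled statistics instead of a full scan:
+CREATE STATISTICS sPickupDatetime
+ON tbl_TaxiRides (pickup_datetime)
+WITH SAMPLE 5 PERCENT, NORECOMPUTE
+```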
+
+## Next steps
+
+- To learn more about syntax options available with OPENROWSET, see [OPENROWSET T-SQL](https://docs.microsoft.com/sql/t-sql/functions/openrowset-transact-sql).
+- For more information about creating external table in SQL Managed Instance, see [CREATE EXTERNAL TABLE](https://docs.microsoft.com/sql/t-sql/statements/create-external-table-transact-sql).
+- To learn more about creating external file formats, see [CREATE EXTERNAL FILE FORMAT](https://docs.microsoft.com/sql/t-sql/statements/create-external-file-format-transact-sql).
azure-sql Sql Assessment For Sql Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/sql-assessment-for-sql-vm.md
Title: SQL Assessment
-description: "Identify performance issues and assess that your SQL Server VM is configured to follow best practices by using the SQL Assessments feature in the Azure portal."
+ Title: SQL best practices assessment
+description: "Identify performance issues and assess that your SQL Server VM is configured to follow best practices by using the SQL best practices assessment feature in the Azure portal."
-# SQL Assessment for SQL Server on Azure VMs (Preview)
+# SQL best practices assessment for SQL Server on Azure VMs
[!INCLUDE[appliesto-sqlvm](../../includes/appliesto-sqlvm.md)]
-The SQL Assessment feature of the Azure portal identifies possible performance issues and evaluates that your SQL Server on Azure Virtual Machines (VMs) is configured to follow best practices using the [rich ruleset](https://github.com/microsoft/sql-server-samples/blob/master/samples/manage/sql-assessment-api/DefaultRuleset.csv) provided by the [SQL Assessment API](/sql/sql-assessment-api/sql-assessment-api-overview).
+The SQL best practices assessment feature of the Azure portal identifies possible performance issues and evaluates that your SQL Server on Azure Virtual Machines (VMs) is configured to follow best practices using the [rich ruleset](https://github.com/microsoft/sql-server-samples/blob/master/samples/manage/sql-assessment-api/DefaultRuleset.csv) provided by the [SQL Assessment API](/sql/sql-assessment-api/sql-assessment-api-overview).
-The SQL Assessment feature is currently in preview.
-To learn more, watch this video on [SQL Assessment](/shows/Data-Exposed/?WT.mc_id=dataexposed-c9-niner):
+To learn more, watch this video on [SQL best practices assessment](/shows/Data-Exposed/?WT.mc_id=dataexposed-c9-niner):
<iframe src="https://aka.ms/docs/player?id=13b2bf63-485c-4ec2-ab14-a1217734ad9f" width="640" height="370" style="border: 0; max-width: 100%; min-width: 100%;"></iframe>

## Overview
-Once the SQL Assessment feature is enabled, your SQL Server instance and databases are scanned to provide recommendations for things like indexes, deprecated features, enabled or missing trace flags, statistics, etc. Recommendations are surfaced to the [SQL VM management page](manage-sql-vm-portal.md) of the [Azure portal](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.SqlVirtualMachine%2FSqlVirtualMachines).
+Once the SQL best practices assessment feature is enabled, your SQL Server instance and databases are scanned to provide recommendations for things like indexes, deprecated features, enabled or missing trace flags, statistics, etc. Recommendations are surfaced to the [SQL VM management page](manage-sql-vm-portal.md) of the [Azure portal](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.SqlVirtualMachine%2FSqlVirtualMachines).
-Assessment results are uploaded to your [Log Analytics workspace](../../../azure-monitor/logs/quick-create-workspace.md) using [Microsoft Monitoring Agent (MMA)](../../../azure-monitor/agents/log-analytics-agent.md). If your VM is already configured to use Log Analytics, the SQL Assessment feature uses the existing connection. Otherwise, the MMA extension is installed to the SQL Server VM and connected to the specified Log Analytics workspace.
+Assessment results are uploaded to your [Log Analytics workspace](../../../azure-monitor/logs/quick-create-workspace.md) using [Microsoft Monitoring Agent (MMA)](../../../azure-monitor/agents/log-analytics-agent.md). If your VM is already configured to use Log Analytics, the SQL best practices assessment feature uses the existing connection. Otherwise, the MMA extension is installed to the SQL Server VM and connected to the specified Log Analytics workspace.
-Assessment run time depends on your environment (number of databases, objects, and so on), with a duration from a few minutes, up to an hour. Similarly, the size of the assessment result also depends on your environment. Assessment runs against your instance and all databases on that instance.
+Assessment run time depends on your environment (number of databases, objects, and so on), with a duration from a few minutes up to an hour. Similarly, the size of the assessment result also depends on your environment. The assessment runs against your instance and all databases on that instance. In our testing, we observed that an assessment run can have up to a 5-10% CPU impact on the machine. In these tests, the assessment was done while a TPC-C-like application was running against the SQL Server.
## Prerequisites
-To use the SQL Assessment feature, you must have the following prerequisites:
+To use the SQL best practices assessment feature, you must have the following prerequisites:
- Your SQL Server VM must be registered with the [SQL Server IaaS extension in full mode](sql-agent-extension-manually-register-single-vm.md#full-mode).
- A [Log Analytics workspace](../../../azure-monitor/logs/quick-create-workspace.md) in the same subscription as your SQL Server VM to upload assessment results to.
To use the SQL Assessment feature, you must have the following prerequisites:
## Enable
-To enable SQL Assessments, follow these steps:
+To enable SQL best practices assessments, follow these steps:
1. Sign into the [Azure portal](https://portal.azure.com) and go to your [SQL Server VM resource](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.SqlVirtualMachine%2FSqlVirtualMachines).
-1. Select **SQL Assessments** under **Settings**.
-1. Select **Enable SQL Assessments** or **Configuration** to navigate to the **Configuration** page.
-1. Check the **Enable SQL Assessments** box and provide the following:
+1. Select **SQL best practices assessments** under **Settings**.
+1. Select **Enable SQL best practices assessments** or **Configuration** to navigate to the **Configuration** page.
+1. Check the **Enable SQL best practices assessments** box and provide the following:
    1. The [Log Analytics workspace](../../../azure-monitor/logs/quick-create-workspace.md) that assessments will be uploaded to. If the SQL Server VM has not been associated with a workspace previously, then choose an existing workspace in the subscription from the drop-down. Otherwise, the previously-associated workspace is already populated.
    1. The **Run schedule**. You can choose to run assessments on demand, or automatically on a schedule. If you choose a schedule, then provide the frequency (weekly or monthly), day of week, recurrence (every 1-6 weeks), and the time of day your assessments should start (local to VM time).
-1. Select **Apply** to save your changes and deploy the Microsoft Monitoring Agent to your SQL Server VM if it's not deployed already. An Azure portal notification will tell you once the SQL Assessment feature is ready for your SQL Server VM.
+1. Select **Apply** to save your changes and deploy the Microsoft Monitoring Agent to your SQL Server VM if it's not deployed already. An Azure portal notification will tell you once the SQL best practices assessment feature is ready for your SQL Server VM.
## Assess SQL Server VM
If you set a schedule in the configuration blade, an assessment runs automatical
### Run on demand assessment
-After the SQL Assessment feature is enabled for your SQL Server VM, it's possible to run an assessment on demand. To do so, select **Run assessment** from the SQL Assessment blade of the [Azure portal SQL Server VM resource](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.SqlVirtualMachine%2FSqlVirtualMachines) page.
+After the SQL best practices assessment feature is enabled for your SQL Server VM, it's possible to run an assessment on demand. To do so, select **Run assessment** from the SQL best practices assessment blade of the [Azure portal SQL Server VM resource](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.SqlVirtualMachine%2FSqlVirtualMachines) page.
## View results
-The **Assessments results** section of the **SQL Assessments** page shows a list of the most recent assessment runs. Each row displays the start time of a run and the status - scheduled, running, uploading results, completed, or failed. Each assessment run has two parts: evaluates your instance, and uploads the results to your Log Analytics workspace. The status field covers both parts. Assessment results are shown in Azure workbooks.
+The **Assessment results** section of the **SQL best practices assessments** page shows a list of the most recent assessment runs. Each row displays the start time of a run and its status: scheduled, running, uploading results, completed, or failed. Each assessment run has two parts: it evaluates your instance, and it uploads the results to your Log Analytics workspace. The status field covers both parts. Assessment results are shown in Azure workbooks.
Access the assessment results Azure workbook in three ways:
-- Select the **View latest successful assessment button** on the **SQL Assessments** page.
-- Choose a completed run from the **Assessment results** section of the **SQL Assessments** page.
+- Select the **View latest successful assessment button** on the **SQL best practices assessments** page.
+- Choose a completed run from the **Assessment results** section of the **SQL best practices assessments** page.
- Select **View assessment results** from the **Top 10 recommendations** surfaced on the **Overview** page of your SQL VM resource page.

Once you have the workbook open, you can use the drop-down to select previous runs. You can view the results of a single run using the **Results** page or review historical trends using the **Trends** page.
Once you have the workbook open, you can use the drop-down to select previous ru
The **Results** page organizes the recommendations using tabs for *All, new, resolved*. Use these tabs to view all recommendations from the current run, all the new recommendations (the delta from previous runs), or resolved recommendations from previous runs. Tabs help you track progress between runs. The *Insights* tab identifies the most recurring issues and the databases with the most issues. Use these to decide where to concentrate your efforts.
-The graph groups assessment results in different categories of severity - high, medium, and low. Select each category to see the list of recommendations, or search for key phrases in the search box. It's best to start with the most severe recommendations and go down the list.
+The graph groups assessment results in different categories of severity - high, medium, low, and information. Select each category to see the list of recommendations, or search for key phrases in the search box. It's best to start with the most severe recommendations and go down the list.
-Sort by **Name** in the table to view recommendations grouped by a particular database or instance. Use the search box to view certain types of recommendations based on the tag value or key phrase, such as performance. Use the down arrow at the top-right of the table to expert results to an excel file.
+The first grid shows each recommendation and the number of instances in your environment that hit that issue. When you select a row in the first grid, the second grid lists all the instances for that particular recommendation. If there's no selection in the first grid, the second grid shows all recommendations, which can potentially be a long list. You can use the drop-downs above the grid (**Name, Severity, Tags, Check Id**) to filter the results. You can also use the **Export to Excel** and **Open the last run query in the Logs view** options by selecting the small icons at the top-right corner of each grid.
The **passed** section of the graph identifies recommendations your system already follows.
If there are multiple runs in a single day, only the latest run is included in t
## Known Issues
-You may encounter some of the following known issues when using SQL assessments.
+You may encounter some of the following known issues when using SQL best practices assessments.
-### Configuration error for Enable Assessment
+### Configuration error for Enable SQL best practices assessment
-If your virtual machine is already associated with a Log Analytics workspace that you don't have access to or that is in another subscription, you will see an error in the configuration blade. For the former, you can either obtain permissions for that workspace or switch your VM to a different Log Analytics workspace by following [these instructions](../../../azure-monitor/agents/agent-manage.md) to remove Microsoft Monitoring Agent. We are working on enabling the scenario where the Log Analytics workspace is in another subscription.
+If your virtual machine is already associated with a Log Analytics workspace that you don't have access to or that is in another subscription, you will see an error in the configuration blade. For the former, you can either obtain permissions for that workspace or switch your VM to a different Log Analytics workspace by following [these instructions](../../../azure-monitor/agents/agent-manage.md) to remove Microsoft Monitoring Agent.
### Deployment failure for Enable or Run Assessment
azure-video-analyzer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-for-media-docs/release-notes.md
Title: Azure Video Analyzer for Media (formerly Video Indexer) release notes | M
description: To stay up-to-date with the most recent developments, this article provides you with the latest updates on Azure Video Analyzer for Media (formerly Video Indexer). Previously updated : 02/18/2022 Last updated : 03/01/2022
To stay up-to-date with the most recent Azure Video Analyzer for Media (former V
## February 2022
+### Public preview of Video Analyzer for Media account management based on ARM in Government cloud
+
+Video Analyzer for Media website is now supporting account management based on ARM in public preview (see, [November 2021 release note](#november-2021)).
+### Leverage open-source code to create ARM-based account

Added new code samples, including HTTP calls to use the Video Analyzer for Media create, read, update and delete (CRUD) ARM API, for solution developers. See [this sample](https://github.com/Azure-Samples/media-services-video-indexer/tree/master/ARM-Samples/Create-Account).

## January 2022

### Improved audio effects detection
bastion Bastion Create Host Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-create-host-powershell.md
Title: 'Create a Bastion host using Azure PowerShell | Microsoft Docs'
-description: Learn how to create an Azure Bastion host using PowerShell.
-
 + Title: 'Deploy Bastion: PowerShell'
+
+description: Learn how to deploy Azure Bastion using PowerShell.
Previously updated : 09/22/2021 Last updated : 03/01/2022
-# Customer intent: As someone with a networking background, I want to create an Azure Bastion host.
+# Customer intent: As someone with a networking background, I want to deploy Bastion and connect to a VM.
-# Create an Azure Bastion host using Azure PowerShell
+# Deploy Bastion using Azure PowerShell
-This article shows you how to create an Azure Bastion host using PowerShell. Once you provision the Azure Bastion service in your virtual network, the seamless RDP/SSH experience is available to all of the VMs in the same virtual network. Azure Bastion deployment is per virtual network, not per subscription/account or virtual machine.
+This article shows you how to deploy Azure Bastion using PowerShell. Azure Bastion is a PaaS service that's maintained for you, not a bastion host that you install on your VM and maintain yourself. An Azure Bastion deployment is per virtual network, not per subscription/account or virtual machine. For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md)
+
+Once you deploy Bastion to your virtual network, you can connect to your VMs via private IP address. This seamless RDP/SSH experience is available to all the VMs in the same virtual network. If your VM has a public IP address that you don't need for anything else, you can remove it.
+
+You can also deploy Bastion by using the following other methods:
-Optionally, you can create an Azure Bastion host by using the following methods:
* [Azure portal](./tutorial-create-host-portal.md)
* [Azure CLI](create-host-cli.md)
-
+* [Quickstart - deploy with default settings](quickstart-host-portal.md)
## Prerequisites
+### Azure subscription
+ Verify that you have an Azure subscription. If you don't already have an Azure subscription, you can activate your [MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details) or sign up for a [free account](https://azure.microsoft.com/pricing/free-trial).
+### Azure PowerShell
+ [!INCLUDE [PowerShell](../../includes/vpn-gateway-cloud-shell-powershell-about.md)]
- > [!NOTE]
- > The use of Azure Bastion with Azure Private DNS Zones is not supported at this time. Before you begin, please make sure that the virtual network where you plan to deploy your Bastion resource is not linked to a private DNS zone.
- >
+> [!NOTE]
+> The use of Azure Bastion with Azure Private DNS Zones is not supported at this time. Before you begin, please make sure that the virtual network where you plan to deploy your Bastion resource is not linked to a private DNS zone.
+>
-## <a name="createhost"></a>Create a bastion host
+## <a name="createhost"></a>Deploy Bastion
-This section helps you create a new Azure Bastion resource using Azure PowerShell.
+This section helps you deploy Azure Bastion using Azure PowerShell.
1. Create a virtual network and an Azure Bastion subnet. You must create the Azure Bastion subnet using the name value **AzureBastionSubnet**. This value lets Azure know which subnet to deploy the Bastion resources to. This is different than a VPN gateway subnet.
This section helps you create a new Azure Bastion resource using Azure PowerShel
$vnet = New-AzVirtualNetwork -Name "myVnet" -ResourceGroupName "myBastionRG" -Location "westeurope" -AddressPrefix 10.0.0.0/16 -Subnet $subnet
```
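The `$subnet` variable referenced above isn't shown in this excerpt. As a sketch (the /26 address prefix is an assumption that meets the current minimum subnet size), it could be created with:

```azurepowershell-interactive
# The subnet name must be exactly "AzureBastionSubnet"
$subnet = New-AzVirtualNetworkSubnetConfig -Name "AzureBastionSubnet" -AddressPrefix 10.0.1.0/26
```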
-2. Create a public IP address for Azure Bastion. The public IP is the public IP address the Bastion resource on which RDP/SSH will be accessed (over port 443). The public IP address must be in the same region as the Bastion resource you are creating.
+1. Create a public IP address for Azure Bastion. This is the public IP address of the Bastion resource, on which RDP/SSH will be accessed (over port 443). The public IP address must be in the same region as the Bastion resource you're creating.
+
+ The following example uses the **Standard SKU**. The Standard SKU lets you configure more Bastion features and connect to VMs using more connection types. For more information, see [Bastion SKUs](configuration-settings.md#skus).
```azurepowershell-interactive
$publicip = New-AzPublicIpAddress -ResourceGroupName "myBastionRG" -name "myPublicIP" -location "westeurope" -AllocationMethod Static -Sku Standard
```
-3. Create a new Azure Bastion resource in the AzureBastionSubnet of your virtual network. It takes about 5 minutes for the Bastion resource to create and deploy.
+1. Create a new Azure Bastion resource in the AzureBastionSubnet of your virtual network. It takes about 10 minutes for the Bastion resource to create and deploy.
```azurepowershell-interactive
$bastion = New-AzBastion -ResourceGroupName "myBastionRG" -Name "myBastion" -PublicIpAddress $publicip -VirtualNetwork $vnet
```
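After deployment completes, you can confirm the resource was provisioned (a sketch; the names match the example values above):

```azurepowershell-interactive
# Retrieve the Bastion resource to verify the deployment succeeded
Get-AzBastion -ResourceGroupName "myBastionRG" -Name "myBastion"
```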
-## Disassociate the VM public IP address
-Azure Bastion does not use the public IP address to connect to the client VM. If you do not need the public IP address for your VM, you can disassociate the public IP address by using the steps in this article: [Dissociate a public IP address from an Azure VM](../virtual-network/ip-services/remove-public-ip-address-vm.md).
+## <a name="connect"></a>Connect to a VM
+
+You can use any of the following articles to connect to a VM that's located in the virtual network to which you deployed Bastion. You can also use the [Connection steps](#steps) in the section below. Some connection types require the [Standard SKU](configuration-settings.md#skus).
++
+### <a name="steps"></a>Connection steps
++
+## <a name="ip"></a>Remove VM public IP address
+
+Azure Bastion doesn't use the public IP address to connect to the client VM. If you don't need the public IP address for your VM, you can disassociate the public IP address. See [Dissociate a public IP address from an Azure VM](../virtual-network/ip-services/remove-public-ip-address-vm.md).
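With PowerShell, dissociating the public IP could be sketched as follows (the NIC and resource group names are assumptions):

```azurepowershell-interactive
# Clear the public IP reference on the VM's NIC, then apply the change
$nic = Get-AzNetworkInterface -Name "myVmNic" -ResourceGroupName "myBastionRG"
$nic.IpConfigurations[0].PublicIpAddress = $null
Set-AzNetworkInterface -NetworkInterface $nic
```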
## Next steps
-* Read the [Bastion FAQ](bastion-faq.md) for additional information.
* To use Network Security Groups with the Azure Bastion subnet, see [Work with NSGs](bastion-nsg.md).
+* To understand VNet peering, see [VNet peering and Azure Bastion](vnet-peering.md).
bastion Bastion Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-faq.md
Title: Azure Bastion FAQ | Microsoft Docs
+ Title: 'Azure Bastion FAQ'
description: Learn about frequently asked questions for Azure Bastion.
Previously updated : 09/07/2021 Last updated : 03/02/2022

# Azure Bastion FAQ
-## FAQs
+## <a name="host"></a>Bastion
-### <a name="publicip"></a>Do I need a public IP on my virtual machine to connect via Azure Bastion?
+### <a name="browsers"></a>Which browsers are supported?
-No. When you connect to a VM using Azure Bastion, you don't need a public IP on the Azure virtual machine that you are connecting to. The Bastion service will open the RDP/SSH session/connection to your virtual machine over the private IP of your virtual machine, within your virtual network.
+The browser must support HTML 5. Use the Microsoft Edge browser or Google Chrome on Windows. For Apple Mac, use the Google Chrome browser. Microsoft Edge Chromium is also supported on both Windows and Mac.
+
+### <a name="pricingpage"></a>What is the pricing?
+
+For more information, see the [pricing page](https://aka.ms/BastionHostPricing).
### Is IPv6 supported?
-At this time, IPv6 is not supported. Azure Bastion supports IPv4 only. This means that you can only assign an IPv4 public IP address to your Bastion resource, and that you can use your Bastion to connect to IPv4 target VMs. You can also use your Bastion to connect to dual-stack target VMs, but you will only be able to send and receive IPv4 traffic via Azure Bastion.
+At this time, IPv6 isn't supported. Azure Bastion supports IPv4 only. This means that you can only assign an IPv4 public IP address to your Bastion resource, and that you can use your Bastion to connect to IPv4 target VMs. You can also use your Bastion to connect to dual-stack target VMs, but you'll only be able to send and receive IPv4 traffic via Azure Bastion.
+
+### <a name="data"></a>Where does Azure Bastion store customer data?
+
+Azure Bastion doesn't move or store customer data out of the region it's deployed in.
### Can I use Azure Bastion with Azure Private DNS Zones?
-Azure Bastion needs to be able to communicate with certain internal endpoints to successfully connect to target resources. Therefore, you *can* use Azure Bastion with Azure Private DNS Zones as long as the zone name you select does not overlap with the naming of these internal endpoints. Before you deploy your Azure Bastion resource, please make sure that the host virtual network is not linked to a private DNS zone with the following in the name:
+Azure Bastion needs to be able to communicate with certain internal endpoints to successfully connect to target resources. Therefore, you *can* use Azure Bastion with Azure Private DNS Zones as long as the zone name you select doesn't overlap with the naming of these internal endpoints. Before you deploy your Azure Bastion resource, please make sure that the host virtual network isn't linked to a private DNS zone with the following in the name:
+
* core.windows.net
* azure.com
-* vault.azure.net
+* vault.azure.net
-Note that if you are using a Private endpoint integrated Azure Private DNS Zone, the [recommended DNS zone name](../private-link/private-endpoint-dns.md#azure-services-dns-zone-configuration) for several Azure services overlap with the names listed above. The use of Azure Bastion is *not* supported with these setups.
+If you're using a private endpoint integrated Azure Private DNS Zone, the [recommended DNS zone names](../private-link/private-endpoint-dns.md#azure-services-dns-zone-configuration) for several Azure services overlap with the names listed above. The use of Azure Bastion *is not* supported with these setups.
The use of Azure Bastion is also not supported with Azure Private DNS Zones in national clouds.
-### <a name="rdpssh"></a>Do I need an RDP or SSH client?
+### <a name="subnet"></a>Can I have an Azure Bastion subnet of size /27 or smaller (/28, /29, etc.)?
-No. You don't need an RDP or SSH client to access the RDP/SSH to your Azure virtual machine in your Azure portal. Use the [Azure portal](https://portal.azure.com) to let you get RDP/SSH access to your virtual machine directly in the browser.
+For Azure Bastion resources deployed on or after November 2, 2021, the minimum AzureBastionSubnet size is /26 or larger (/25, /24, etc.). All Azure Bastion resources deployed in subnets of size /27 prior to this date are unaffected by this change and will continue to work. However, we highly recommend increasing the size of any existing AzureBastionSubnet to /26 in case you choose to take advantage of [host scaling](./configure-host-scaling.md) in the future.
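To check the prefix of an existing AzureBastionSubnet, a sketch like the following could be used (the VNet and resource group names are assumptions):

```azurepowershell-interactive
# Inspect the AzureBastionSubnet address prefix; /26 or larger is recommended
$vnet = Get-AzVirtualNetwork -Name "myVnet" -ResourceGroupName "myBastionRG"
(Get-AzVirtualNetworkSubnetConfig -Name "AzureBastionSubnet" -VirtualNetwork $vnet).AddressPrefix
```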
-### <a name="agent"></a>Do I need an agent running in the Azure virtual machine?
+### <a name="subnet"></a> Can I deploy multiple Azure resources in my Azure Bastion subnet?
-No. You don't need to install an agent or any software on your browser or your Azure virtual machine. The Bastion service is agentless and doesn't require any additional software for RDP/SSH.
+No. The Azure Bastion subnet (*AzureBastionSubnet*) is reserved only for the deployment of your Azure Bastion resource.
-### <a name="rdpfeaturesupport"></a>What features are supported in an RDP session?
+### <a name="udr"></a>Is user-defined routing (UDR) supported on an Azure Bastion subnet?
-At this time, only text copy/paste is supported. Feel free to share your feedback about new features on the [Azure Bastion Feedback page](https://feedback.azure.com/d365community/forum/8ae9bf04-8326-ec11-b6e6-000d3a4f0789?c=c109f019-8326-ec11-b6e6-000d3a4f0789).
+No. UDR isn't supported on an Azure Bastion subnet.
-### Does Azure Bastion support file transfer?
-Azure Bastion offers support for file transfer between your target VM and local computer using Bastion and a native RDP or SSH client. Note that file transfer is supported using the native client only. At this time, you cannot upload or download files using PowerShell or via the Azure portal. To learn more, see [Upload and download files using the native client](vm-upload-download-native.md).
+For scenarios that include both Azure Bastion and Azure Firewall/Network Virtual Appliance (NVA) in the same virtual network, you don't need to force traffic from an Azure Bastion subnet to Azure Firewall because the communication between Azure Bastion and your VMs is private. For more information, see [Accessing VMs behind Azure Firewall with Bastion](https://azure.microsoft.com/blog/accessing-virtual-machines-behind-azure-firewall-with-azure-bastion/).
-### <a name="aadj"></a>Does Bastion hardening work with AADJ VM extension-joined VMs?
+### <a name="upgradesku"></a> Can I upgrade from a Basic SKU to a Standard SKU?
-This feature doesn't work with AADJ VM extension-joined machines using Azure AD users. For more information, see [Windows Azure VMs and Azure AD](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md#requirements).
+Yes. For steps, see [Upgrade a SKU](upgrade-sku.md). For more information about SKUs, see the [Configuration settings](configuration-settings.md#skus) article.
-### <a name="browsers"></a>Which browsers are supported?
+### <a name="downgradesku"></a> Can I downgrade from a Standard SKU to a Basic SKU?
-The browser must support HTML 5. Use the Microsoft Edge browser or Google Chrome on Windows. For Apple Mac, use Google Chrome browser. Microsoft Edge Chromium is also supported on both Windows and Mac, respectively.
+No. Downgrading from a Standard SKU to a Basic SKU isn't supported. For more information about SKUs, see the [Configuration settings](configuration-settings.md#skus) article.
-### <a name="pricingpage"></a>What is the pricing?
+### <a name="virtual-desktop"></a>Does Bastion support connectivity to Azure Virtual Desktop?
-For more information, see the [pricing page](https://aka.ms/BastionHostPricing).
+No, Bastion connectivity to Azure Virtual Desktop isn't supported.
-### <a name="data"></a>Where does Azure Bastion store customer data?
+### <a name="session"></a>Why do I get "Your session has expired" error message before the Bastion session starts?
-Azure Bastion doesn't move or store customer data out of the region it is deployed in.
+A session should be initiated only from the Azure portal. Sign in to the Azure portal and begin your session again. If you go to the URL directly from another browser session or tab, this error is expected. It helps ensure that your session is more secure and that the session can be accessed only through the Azure portal.
+
+### <a name="udr"></a>How do I handle deployment failures?
+
+Review any error messages and [raise a support request in the Azure portal](../azure-portal/supportability/how-to-create-azure-support-request.md) as needed. Deployment failures may result from [Azure subscription limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md). Specifically, customers may encounter a limit on the number of public IP addresses allowed per subscription that causes the Azure Bastion deployment to fail.
+
+### <a name="dr"></a>How do I incorporate Azure Bastion in my Disaster Recovery plan?
+
+Azure Bastion is deployed within VNets or peered VNets, and is associated to an Azure region. You're responsible for deploying Azure Bastion to a Disaster Recovery (DR) site VNet. In the event of an Azure region failure, perform a failover operation for your VMs to the DR region. Then, use the Azure Bastion host that's deployed in the DR region to connect to the VMs that are now deployed there.
+
+## <a name="vm"></a>VMs and connections
### <a name="roles"></a>Are any roles required to access a virtual machine?
In order to make a connection, the following roles are required:
* Reader role on the virtual machine. * Reader role on the NIC with private IP of the virtual machine. * Reader role on the Azure Bastion resource.
-* Reader Role on the Virtual Network of the target virtual machine (in the case that the Bastion is in a peered Virtual Network).
+* Reader Role on the virtual network of the target virtual machine (if the Bastion deployment is in a peered virtual network).
+
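Granting these roles could be sketched as follows (the user, VM name, and scope values are assumptions):

```azurepowershell-interactive
# Assign Reader on the target VM; repeat with the NIC, Bastion resource, and virtual network scopes as needed
New-AzRoleAssignment -SignInName "user@contoso.com" -RoleDefinitionName "Reader" `
  -Scope "/subscriptions/<subscription-id>/resourceGroups/myBastionRG/providers/Microsoft.Compute/virtualMachines/myVm"
```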
+### <a name="publicip"></a>Do I need a public IP on my virtual machine to connect via Azure Bastion?
+
+No. When you connect to a VM using Azure Bastion, you don't need a public IP on the Azure virtual machine that you're connecting to. The Bastion service will open the RDP/SSH session/connection to your virtual machine over the private IP of your virtual machine, within your virtual network.
+
+### <a name="rdpssh"></a>Do I need an RDP or SSH client?
+
+No. You don't need an RDP or SSH client to access the RDP/SSH to your Azure virtual machine in your Azure portal. Use the [Azure portal](https://portal.azure.com) to let you get RDP/SSH access to your virtual machine directly in the browser.
+
+### <a name="agent"></a>Do I need an agent running in the Azure virtual machine?
+
+No. You don't need to install an agent or any software on your browser or your Azure virtual machine. The Bastion service is agentless and doesn't require any additional software for RDP/SSH.
+
+### <a name="rdpfeaturesupport"></a>What features are supported in an RDP session?
+
+At this time, only text copy/paste is supported. Feel free to share your feedback about new features on the [Azure Bastion Feedback page](https://feedback.azure.com/d365community/forum/8ae9bf04-8326-ec11-b6e6-000d3a4f0789?c=c109f019-8326-ec11-b6e6-000d3a4f0789).
+
+### Does Azure Bastion support file transfer?
+
+Azure Bastion offers support for file transfer between your target VM and local computer using Bastion and a native RDP or SSH client. File transfer is supported using the native client only. At this time, you can't upload or download files using PowerShell or via the Azure portal. To learn more, see [Upload and download files using the native client](vm-upload-download-native.md).
+
+### <a name="aadj"></a>Does Bastion hardening work with AADJ VM extension-joined VMs?
+
+This feature doesn't work with AADJ VM extension-joined machines using Azure AD users. For more information, see [Windows Azure VMs and Azure AD](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md#requirements).
### <a name="rdscal"></a>Does Azure Bastion require an RDS CAL for administrative purposes on Azure-hosted VMs?
-No, access to Windows Server VMs by Azure Bastion does not require an [RDS CAL](https://www.microsoft.com/p/windows-server-remote-desktop-services-cal/dg7gmgf0dvsv?activetab=pivot:overviewtab) when used solely for administrative purposes.
+No, access to Windows Server VMs by Azure Bastion doesn't require an [RDS CAL](https://www.microsoft.com/p/windows-server-remote-desktop-services-cal/dg7gmgf0dvsv?activetab=pivot:overviewtab) when used solely for administrative purposes.
### <a name="keyboard"></a>Which keyboard layouts are supported during the Bastion remote session?

Azure Bastion currently supports the following keyboard layouts inside the VM:
+
* en-us-qwerty
* en-gb-qwerty
* de-ch-qwertz
Azure Bastion currently supports the following keyboard layouts inside the VM:
To establish the correct key mappings for your target language, you must set either your language on your local computer or your language inside the target VM to English (United States). That is, your local computer language must be set to English (United States) while your target VM language is set to your target language, or vice versa. You can add English (United States) language to your machine in your computer settings.
-### <a name="timezone"></a>Does Azure Bastion support timezone configuration or timezone redirection for target VMs?
-
-Azure Bastion currently does not support timezone redirection and is not timezone configurable.
+### <a name="res"></a>What is the maximum screen resolution supported via Bastion?
-### <a name="subnet"></a>Can I have an Azure Bastion subnet of size /27 or smaller (/28, /29, etc.)?
-
-For Azure Bastion resources deployed on or after November 2, 2021, the minimum AzureBastionSubnet size is /26 or larger (/25, /24, etc.). All Azure Bastion resources deployed in subnets of size /27 prior to this date are unaffected by this change and will continue to work, but we highly recommend increasing the size of any existing AzureBastionSubnet to /26 in case you choose to take advantage of [host scaling](./configure-host-scaling.md) in the future.
-
-### <a name="udr"></a>Is user-defined routing (UDR) supported on an Azure Bastion subnet?
-
-No. UDR is not supported on an Azure Bastion subnet.
+Currently, 1920x1080 (1080p) is the maximum supported resolution.
-For scenarios that include both Azure Bastion and Azure Firewall/Network Virtual Appliance (NVA) in the same virtual network, you don't need to force traffic from an Azure Bastion subnet to Azure Firewall because the communication between Azure Bastion and your VMs is private. For more information, see [Accessing VMs behind Azure Firewall with Bastion](https://azure.microsoft.com/blog/accessing-virtual-machines-behind-azure-firewall-with-azure-bastion/).
-
-### <a name="upgradesku"></a> Can I upgrade from a Basic SKU to a Standard SKU?
-
-Yes. For steps, see [Upgrade a SKU](upgrade-sku.md). For more information about SKUs, see the [Configuration settings](configuration-settings.md#skus) article.
-
-### <a name="downgradesku"></a> Can I downgrade from a Standard SKU to a Basic SKU?
-
-No. Downgrading from a Standard SKU to a Basic SKU is not supported. For more information about SKUs, see the [Configuration settings](configuration-settings.md#skus) article.
-
-### <a name="subnet"></a> Can I deploy multiple Azure resources in my Azure Bastion subnet?
-
-No. The Azure Bastion subnet (*AzureBastionSubnet*) is reserved only for the deployment of your Azure Bastion resource.
-
-### <a name="session"></a>Why do I get "Your session has expired" error message before the Bastion session starts?
-
-A session should be initiated only from the Azure portal. Sign in to the Azure portal and begin your session again. If you go to the URL directly from another browser session or tab, this error is expected. It helps ensure that your session is more secure and that the session can be accessed only through the Azure portal.
-
-### <a name="udr"></a>How do I handle deployment failures?
-
-Review any error messages and [raise a support request in the Azure portal](../azure-portal/supportability/how-to-create-azure-support-request.md) as needed. Deployment failures may result from [Azure subscription limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md). Specifically, customers may encounter a limit on the number of public IP addresses allowed per subscription that causes the Azure Bastion deployment to fail.
-
-### <a name="dr"></a>How do I incorporate Azure Bastion in my Disaster Recovery plan?
+### <a name="timezone"></a>Does Azure Bastion support timezone configuration or timezone redirection for target VMs?
-Azure Bastion is deployed within VNets or peered VNets, and is associated to an Azure region. You are responsible for deploying Azure Bastion to a Disaster Recovery (DR) site VNet. In the event of an Azure region failure, perform a failover operation for your VMs to the DR region. Then, use the Azure Bastion host that's deployed in the DR region to connect to the VMs that are now deployed there.
+Azure Bastion currently doesn't support timezone redirection and isn't timezone configurable.
## <a name="peering"></a>VNet peering
Yes. By default, a user sees the Bastion host that is deployed in the same virtu
### If my peered VNets are deployed in different subscriptions, will connectivity via Bastion work?
-Yes, connectivity via Bastion will continue to work for peered VNets across different subscription for a single Tenant. Subscriptions across two different Tenants are not supported. To see Bastion in the **Connect** drop down menu, the user must select the subs they have access to in **Subscription > global subscription**.
+Yes, connectivity via Bastion will continue to work for peered VNets across different subscriptions within a single tenant. Subscriptions across two different tenants aren't supported. To see Bastion in the **Connect** drop-down menu, the user must select the subscriptions they have access to in **Subscription > global subscription**.
:::image type="content" source="./media/bastion-faq/global-subscriptions.png" alt-text="Global subscriptions filter." lightbox="./media/bastion-faq/global-subscriptions.png":::
-### Does Bastion support connectivity to Azure Virtual Desktop?
-No, Bastion connectivity to Azure Virtual Desktop is not supported.
- ### I have access to the peered VNet, but I can't see the VM deployed there. Make sure the user has **read** access to both the VM, and the peered VNet. Additionally, check under IAM that the user has **read** access to following resources:
Make sure the user has **read** access to both the VM, and the peered VNet. Addi
* Reader role on the virtual machine. * Reader role on the NIC with private IP of the virtual machine. * Reader role on the Azure Bastion resource.
-* Reader Role on the Virtual Network (Not needed if there is no peered virtual network).
+* Reader Role on the virtual network (Not needed if there isn't a peered virtual network).
|Permissions|Description|Permission type|
|---|---|---|
|Microsoft.Network/bastionHosts/read |Gets a Bastion Host|Action|
-|Microsoft.Network/virtualNetworks/BastionHosts/action |Gets Bastion Host references in a Virtual Network.|Action|
-|Microsoft.Network/virtualNetworks/bastionHosts/default/action|Gets Bastion Host references in a Virtual Network.|Action|
+|Microsoft.Network/virtualNetworks/BastionHosts/action |Gets Bastion Host references in a virtual network.|Action|
+|Microsoft.Network/virtualNetworks/bastionHosts/default/action|Gets Bastion Host references in a virtual network.|Action|
|Microsoft.Network/networkInterfaces/read|Gets a network interface definition.|Action|
|Microsoft.Network/networkInterfaces/ipconfigurations/read|Gets a network interface IP configuration definition.|Action|
|Microsoft.Network/virtualNetworks/read|Gets the virtual network definition.|Action|
|Microsoft.Network/virtualNetworks/subnets/virtualMachines/read|Gets references to all the virtual machines in a virtual network subnet.|Action|
|Microsoft.Network/virtualNetworks/virtualMachines/read|Gets references to all the virtual machines in a virtual network.|Action|
-
-### What is the maximum screen resolution supported via Bastion?
-Currently, 1920x1080 (1080p) is the maximum supported resolution.
bastion Create Host Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/create-host-cli.md
Title: 'Create a Bastion host using Azure CLI | Azure Bastion'
-description: Learn how to create and delete a bastion host using Azure CLI.
-
+ Title: 'Deploy Bastion:CLI'
+
+description: Learn how to deploy Azure Bastion using CLI
Previously updated : 09/22/2021 Last updated : 03/02/2022
-# Customer intent: As someone with a networking background, I want to create an Azure Bastion host.
+# Customer intent: As someone with a networking background, I want to deploy Bastion and connect to a VM.
ms.devlang: azurecli
-# Create an Azure Bastion host using Azure CLI
+# Deploy Bastion using Azure CLI
-This article shows you how to create an Azure Bastion host using Azure CLI. Once you provision the Azure Bastion service in your virtual network, the seamless RDP/SSH experience is available to all of the VMs in the same virtual network. Azure Bastion deployment is per virtual network, not per subscription/account or virtual machine.
+This article shows you how to deploy Azure Bastion using CLI. Azure Bastion is a PaaS service that's maintained for you, not a bastion host that you install on your VM and maintain yourself. An Azure Bastion deployment is per virtual network, not per subscription/account or virtual machine. For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md)
+
+Once you deploy Bastion to your virtual network, you can connect to your VMs via private IP address. This seamless RDP/SSH experience is available to all the VMs in the same virtual network. If your VM has a public IP address that you don't need for anything else, you can remove it.
+
+You can also deploy Bastion by using the following other methods:
-Optionally, you can create an Azure Bastion host by using the following methods:
* [Azure portal](./tutorial-create-host-portal.md)
* [Azure PowerShell](bastion-create-host-powershell.md)
+* [Quickstart - deploy with default settings](quickstart-host-portal.md)
## Prerequisites
+### Azure subscription
+ Verify that you have an Azure subscription. If you don't already have an Azure subscription, you can activate your [MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details) or sign up for a [free account](https://azure.microsoft.com/pricing/free-trial).
+### Azure CLI
+ [!INCLUDE [Cloud Shell CLI](../../includes/vpn-gateway-cloud-shell-cli.md)]
- > [!NOTE]
- > The use of Azure Bastion with Azure Private DNS Zones is not supported at this time. Before you begin, please make sure that the virtual network where you plan to deploy your Bastion resource is not linked to a private DNS zone.
- >
+> [!NOTE]
+> The use of Azure Bastion with Azure Private DNS Zones is not supported at this time. Before you begin, please make sure that the virtual network where you plan to deploy your Bastion resource is not linked to a private DNS zone.
+>
-## <a name="createhost"></a>Create a bastion host
+## <a name="createhost"></a>Deploy Bastion
-This section helps you create a new Azure Bastion resource using Azure CLI.
+This section helps you deploy Azure Bastion using Azure CLI.
> [!NOTE]
> As shown in the examples, use the `--location` parameter with `--resource-group` for every command to ensure that the resources are deployed together.
This section helps you create a new Azure Bastion resource using Azure CLI.
az network vnet create --resource-group MyResourceGroup --name MyVnet --address-prefix 10.0.0.0/16 --subnet-name AzureBastionSubnet --subnet-prefix 10.0.0.0/24 --location northeurope
```
-2. Create a public IP address for Azure Bastion. The public IP is the public IP address the Bastion resource on which RDP/SSH will be accessed (over port 443). The public IP address must be in the same region as the Bastion resource you are creating.
+1. Create a public IP address for Azure Bastion. The public IP is the public IP address of the Bastion resource on which RDP/SSH will be accessed (over port 443). The public IP address must be in the same region as the Bastion resource you're creating.
+
+ The following example uses the **Standard SKU**. The Standard SKU lets you configure more Bastion features and connect to VMs using more connection types. For more information, see [Bastion SKUs](configuration-settings.md#skus).
```azurecli-interactive
az network public-ip create --resource-group MyResourceGroup --name MyIp --sku Standard --location northeurope
```
-3. Create a new Azure Bastion resource in the AzureBastionSubnet of your virtual network. It takes about 5 minutes for the Bastion resource to create and deploy.
+1. Create a new Azure Bastion resource in the AzureBastionSubnet of your virtual network. It takes about 10 minutes for the Bastion resource to create and deploy.
```azurecli-interactive
az network bastion create --name MyBastion --public-ip-address MyIp --resource-group MyResourceGroup --vnet-name MyVnet --location northeurope
```
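Taken together, the steps above amount to a short script. This sketch reuses the same illustrative names from the examples (MyResourceGroup, MyVnet, MyIp, MyBastion) and assumes the resource group already exists in your subscription.

```azurecli-interactive
# Illustrative names; the resource group is assumed to exist already.
az network vnet create --resource-group MyResourceGroup --name MyVnet --address-prefix 10.0.0.0/16 --subnet-name AzureBastionSubnet --subnet-prefix 10.0.0.0/24 --location northeurope
az network public-ip create --resource-group MyResourceGroup --name MyIp --sku Standard --location northeurope
az network bastion create --name MyBastion --public-ip-address MyIp --resource-group MyResourceGroup --vnet-name MyVnet --location northeurope

# Optionally, confirm the deployment completed.
az network bastion show --name MyBastion --resource-group MyResourceGroup --query provisioningState
```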
-## Disassociate the VM public IP address
-Azure Bastion does not use the public IP address to connect to the client VM. If you do not need the public IP address for your VM, you can disassociate the public IP address by using the steps in this article: [Dissociate a public IP address from an Azure VM](../virtual-network/ip-services/remove-public-ip-address-vm.md).
+## <a name="connect"></a>Connect to a VM
+
+You can use any of the following articles to connect to a VM that's located in the virtual network to which you deployed Bastion. You can also use the [Connection steps](#steps) in the section below. Some connection types require the [Standard SKU](configuration-settings.md#skus).
+### <a name="steps"></a>Connection steps
+## <a name="ip"></a>Remove VM public IP address
+
+Azure Bastion doesn't use the public IP address to connect to the client VM. If you don't need the public IP address for your VM, you can disassociate the public IP address. See [Dissociate a public IP address from an Azure VM](../virtual-network/ip-services/remove-public-ip-address-vm.md).
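As a rough sketch, dissociating and deleting the VM's public IP with Azure CLI might look like the following. The NIC, IP configuration, and public IP names are hypothetical; the linked article is the authoritative procedure.

```azurecli-interactive
# Hypothetical resource names for illustration.
az network nic ip-config update --resource-group MyResourceGroup --nic-name MyVMNic --name ipconfig1 --remove publicIpAddress
az network public-ip delete --resource-group MyResourceGroup --name MyVMPublicIP
```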
## Next steps
-* Read the [Bastion FAQ](bastion-faq.md) for additional information.
* To use Network Security Groups with the Azure Bastion subnet, see [Work with NSGs](bastion-nsg.md).
+* To understand VNet peering, see [VNet peering and Azure Bastion](vnet-peering.md).
bastion Tutorial Create Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/tutorial-create-host-portal.md
-# Tutorial: Deploy Bastion using manual settings: Azure portal
+# Tutorial: Deploy Bastion using the Azure portal
This tutorial helps you deploy Azure Bastion from the Azure portal using manual settings. When you use manual settings, you can specify configuration values such as instance counts and the SKU at the time of deployment. After Bastion is deployed, you can connect (SSH/RDP) to virtual machines in the virtual network via Bastion using the private IP address of the VM. When you connect to a VM, it doesn't need a public IP address, client software, agent, or a special configuration.
cognitive-services Concept Brand Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-brand-detection.md
Title: Brand detection - Computer Vision
-description: This article discusses a specialized mode of object detection; brand and/or logo detection using the Computer Vision API.
+description: Learn about brand and logo detection, a specialized mode of object detection, using the Computer Vision API.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview.md
Previously updated : 09/28/2021 Last updated : 02/28/2022 keywords: computer vision, computer vision applications, computer vision service
cognitive-services Spatial Analysis Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/spatial-analysis-operations.md
The Spatial Analysis container implements the following operations:
All of the above operations are also available in the `.debug` version of the service (for example, `cognitiveservices.vision.spatialanalysis-personcount.debug`). Debug has the capability to visualize video frames as they're being processed. You'll need to run `xhost +` on the host computer to enable the visualization of video frames and events.
-Spatial Analysis can also be run with [Live Video Analytics](../../azure-video-analyzer/video-analyzer-docs/overview.md) as the Video AI module. Append `.livevideoanalytics` to the operation (for example, `cognitiveservices.vision.spatialanalysis-personcount.livevideoanalytics`).
-
-<!--more details on the setup can be found in the [LVA Setup page](LVA-Setup.md). Below is the list of the operations supported with Live Video Analytics. -->
-Live Video Analytics operations are also available in the `.debug` version (for example, you can use `cognitiveservices.vision.spatialanalysis-personcount.livevideoanalytics.debug`).
> [!IMPORTANT]
> The Computer Vision AI models detect and locate human presence in video footage and output a bounding box around the human body. The AI models do not attempt to discover the identities or demographics of individuals.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/overview.md
Previously updated : 08/25/2021 Last updated : 02/28/2022 keywords: image recognition, image identifier, image recognition app, custom vision
keywords: image recognition, image identifier, image recognition app, custom vis
# What is Custom Vision?
-Azure Custom Vision is an image recognition service that lets you build, deploy, and improve your own image identifier models. An image identifier applies labels (which represent classifications or objects) to images, according to their detected visual characteristics. Unlike the [Computer Vision](../computer-vision/overview.md) service, Custom Vision allows you to specify your own labels and train custom models to detect them.
+Azure Custom Vision is an image recognition service that lets you build, deploy, and improve your own image identifier models. An image identifier applies labels to images, according to their detected visual characteristics. Each label represents a classification or object. Unlike the [Computer Vision](../computer-vision/overview.md) service, Custom Vision allows you to specify your own labels and train custom models to detect them.
This documentation contains the following types of articles:

* The [quickstarts](./getting-started-build-a-classifier.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
This documentation contains the following types of articles:
## What it does
-The Custom Vision service uses a machine learning algorithm to analyze images. You, the developer, submit groups of images that feature and lack the characteristics in question. You label the images yourself at the time of submission. Then, the algorithm trains to this data and calculates its own accuracy by testing itself on those same images. Once you've trained the algorithm, you can test, retrain, and eventually use it in your image recognition app to [classify images](getting-started-build-a-classifier.md). You can also [export the model](export-your-model.md) itself for offline use.
+The Custom Vision service uses a machine learning algorithm to analyze images. You, the developer, submit groups of images that have and don't have the characteristics in question. You label the images yourself at the time of submission. Then the algorithm trains to this data and calculates its own accuracy by testing itself on those same images. Once you've trained the algorithm, you can test, retrain, and eventually use it in your image recognition app to [classify images](getting-started-build-a-classifier.md). You can also [export the model](export-your-model.md) itself for offline use.
### Classification and object detection
-Custom Vision functionality can be divided into two features. **[Image classification](getting-started-build-a-classifier.md)** applies one or more labels to an image. **[Object detection](get-started-build-detector.md)** is similar, but it also returns the coordinates in the image where the applied label(s) can be found.
+Custom Vision functionality can be divided into two features. **[Image classification](getting-started-build-a-classifier.md)** applies one or more labels to an entire image. **[Object detection](get-started-build-detector.md)** is similar, but it returns the coordinates in the image where the applied label(s) can be found.
### Optimization
Additionally, you can choose from several variations of the Custom Vision algori
## What it includes
-The Custom Vision Service is available as a set of native SDKs as well as through a web-based interface on the [Custom Vision website](https://customvision.ai/). You can create, test, and train a model through either interface or use both together.
+The Custom Vision Service is available as a set of native SDKs as well as through a web-based interface on the [Custom Vision portal](https://customvision.ai/). You can create, test, and train a model through either interface or use both together.
### Supported browsers for Custom Vision web portal
-The Custom Vision web interface can be used by the following web browsers:
+The Custom Vision portal can be used by the following web browsers:
- Microsoft Edge (latest version)
- Google Chrome (latest version)
As with all of the Cognitive Services, developers using the Custom Vision servic
## Next steps
-Follow the [Build a classifier](getting-started-build-a-classifier.md) guide to get started using Custom Vision on the web portal, or complete a [quickstart](quickstarts/image-classification.md) to implement the basic scenarios in code.
+Follow the [Build a classifier](getting-started-build-a-classifier.md) quickstart to get started using Custom Vision on the web portal, or complete an [SDK quickstart](quickstarts/image-classification.md) to implement the basic scenarios in code.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Face/Overview.md
Previously updated : 09/27/2021 Last updated : 02/28/2022 keywords: facial recognition, facial recognition software, facial analysis, face matching, face recognition app, face search by image, facial recognition search
This documentation contains the following types of articles:
## Example use cases
-Identity verification: Verify someone's identity against a government-issued ID card like a passport or driver's license or other enrollment image. You can use this verification to grant access to digital or physical services or recover an account. Specific access scenarios include opening a new account, verifying a worker, or administering an online assessment. Identity verification can be done once when a person is onboarded, and repeated when they access a digital or physical service.
+**Identity verification**: Verify someone's identity against a government-issued ID card like a passport or driver's license or other enrollment image. You can use this verification to grant access to digital or physical services or recover an account. Specific access scenarios include opening a new account, verifying a worker, or administering an online assessment. Identity verification can be done once when a person is onboarded, and repeated when they access a digital or physical service.
-Touchless access control: Compared to today's methods like cards or tickets, opt-in face identification enables an enhanced access control experience while reducing the hygiene and security risks from card sharing, loss, or theft. Facial recognition assists the check-in process with a human in the loop for check-ins in airports, stadiums, theme parks, buildings, reception kiosks at offices, hospitals, gyms, clubs, or schools.
+**Touchless access control**: Compared to today's methods like cards or tickets, opt-in face identification enables an enhanced access control experience while reducing the hygiene and security risks from card sharing, loss, or theft. Facial recognition assists the check-in process with a human in the loop for check-ins in airports, stadiums, theme parks, buildings, reception kiosks at offices, hospitals, gyms, clubs, or schools.
-Face redaction: Redact or blur detected faces of people recorded in a video to protect their privacy.
+**Face redaction**: Redact or blur detected faces of people recorded in a video to protect their privacy.
## Face detection and analysis
The Group operation divides a set of unknown faces into several smaller groups b
All of the faces in a returned group are likely to belong to the same person, but there can be several different groups for a single person. Those groups are differentiated by another factor, such as expression, for example. For more information, see the [Facial recognition](concepts/face-recognition.md) concepts guide or the [Group API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395238) reference documentation.
-## Sample apps
-
-The following sample applications show a few ways to use the Face service:
-- [FamilyNotes UWP app](https://github.com/Microsoft/Windows-appsample-familynotes) is a Universal Windows Platform (UWP) app that uses face identification along with speech, Cortana, ink, and camera in a family note-sharing scenario.

## Data privacy and security

As with all of the Cognitive Services resources, developers who use the Face service must be aware of Microsoft's policies on customer data. For more information, see the [Cognitive Services page](https://www.microsoft.com/trustcenter/cloudservices/cognitiveservices) on the Microsoft Trust Center.
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Face/ReleaseNotes.md
The Azure Face service is updated on an ongoing basis. Use this article to stay up to date with new features, enhancements, fixes, and documentation updates.
+## February 2022
+
+### New Quality Attribute in Detection_01 and Detection_03
+* To help system builders and their customers capture the high-quality images that are necessary for high-quality outputs from the Face API, we're introducing a new quality attribute, **QualityForRecognition**, to help decide whether an image is of sufficient quality to attempt face recognition. The value is an informal rating of low, medium, or high. The new attribute is only available when using any combination of detection models `detection_01` or `detection_03` and recognition models `recognition_03` or `recognition_04`. Only "high" quality images are recommended for person enrollment, and quality above "medium" is recommended for identification scenarios. To learn more about the new quality attribute, see [Face detection and attributes](concepts/face-detection.md), and see how to use it with the [quickstart](https://docs.microsoft.com/azure/cognitive-services/face/quickstarts/client-libraries?tabs=visual-studio&pivots=programming-language-csharp).
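As a rough illustration, requesting the new attribute over the Face REST API detect operation might look like the call below. The endpoint, key, and image URL are placeholders, not real values.

```shell
# Placeholders: <your-endpoint>, <your-key>, and the image URL.
curl -s -X POST "https://<your-endpoint>/face/v1.0/detect?detectionModel=detection_03&recognitionModel=recognition_04&returnFaceAttributes=qualityForRecognition" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com/photo.jpg"}'
# Each detected face then carries a qualityForRecognition value of low, medium, or high.
```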
## July 2021

### New HeadPose and Landmarks improvements for Detection_03
cognitive-services Cognitive Services Apis Create Account Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-apis-create-account-cli.md
Title: Create a Cognitive Services resource via the Azure CLI
+ Title: Create a Cognitive Services resource using the Azure CLI
-description: Get started with Azure Cognitive Services by creating and subscribing to a resource via the Azure CLI.
+description: Get started with Azure Cognitive Services by using Azure CLI commands to create and subscribe to a resource.
keywords: cognitive services, cognitive intelligence, cognitive solutions, ai services Previously updated : 06/04/2021 Last updated : 03/02/2022 ms.devlang: azurecli
-# Quickstart: Create a Cognitive Services resource via the Azure CLI
+# Quickstart: Create a Cognitive Services resource using the Azure CLI
-Use this quickstart to get started with Azure Cognitive Services via the [Azure CLI](/cli/azure/install-azure-cli).
+Use this quickstart to get started with Azure Cognitive Services using [Azure Command-Line Interface (CLI)](/cli/azure/install-azure-cli) commands.
-Azure Cognitive Services are cloud-base services with REST APIs, and client library SDKs available to help developers build cognitive intelligence into applications without having direct artificial intelligence (AI) or data science skills or knowledge. Azure Cognitive Services enables developers to easily add cognitive features into their applications with cognitive solutions that can see, hear, speak, understand, and even begin to reason.
+Azure Cognitive Services are cloud-based services with REST APIs, and client library SDKs available to help developers build cognitive intelligence into applications without having direct artificial intelligence (AI) or data science skills or knowledge. Azure Cognitive Services enables developers to easily add cognitive features into their applications with cognitive solutions that can see, hear, speak, understand, and even begin to reason.
Cognitive Services are represented by Azure [resources](../azure-resource-manager/management/manage-resources-portal.md) that you create in your Azure subscription. After creating the resource, use the keys and endpoint generated for you to authenticate your applications.
cognitive-services What Are Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/what-are-cognitive-services.md
Title: What are Azure Cognitive Services?
description: Cognitive Services makes AI accessible to every developer without requiring machine-learning and data-science expertise. You need to make an API call from your application to add the ability to see (advanced image search and recognition), hear, speak, search, and make decisions in your apps.
keywords: cognitive services, cognitive intelligence, cognitive solutions, ai services, cognitive understanding, cognitive features
Previously updated : 01/31/2022 Last updated : 02/28/2022

# What are Azure Cognitive Services?
-Azure Cognitive Services are cloud-based services with REST APIs, client library SDKs, and user interfaces available to help you build cognitive intelligence into your applications. You can add cognitive features to your applications without having artificial intelligence (AI) or data science skills. Cognitive Services comprises various AI services that enable you to build cognitive solutions that can see, hear, speak, understand, and even make decisions.
+Azure Cognitive Services are cloud-based artificial intelligence (AI) services that help you build cognitive intelligence into your applications. They are available as REST APIs, client library SDKs, and user interfaces. You can add cognitive features to your applications without having AI or data science skills. Cognitive Services enable you to build cognitive solutions that can see, hear, speak, understand, and even make decisions.
## Categories of Cognitive Services
Cognitive Services can be categorized into four main pillars:
* Language
* Decision
+See the tables below to learn about the services offered within those categories.
+ ## Vision APIs
-|Service Name|Service Description|
-|:--|:|
-|[Computer Vision](./computer-vision/index.yml "Computer Vision")|The Computer Vision service provides you with access to advanced cognitive algorithms for processing images and returning information. See [Computer Vision quickstart](./computer-vision/quickstarts-sdk/client-library.md) to get started with the service.|
-|[Custom Vision](./custom-vision-service/index.yml "Custom Vision Service")|The Custom Vision Service lets you build, deploy, and improve your own image classifiers. An image classifier is an AI service that applies labels to images, based on their visual characteristics. |
-|[Face](./face/index.yml "Face")| The Face service provides access to advanced face algorithms, enabling face attribute detection and recognition. See [Face quickstart](./face/quickstarts/client-libraries.md) to get started with the service.|
+|Service Name|Service Description|Quickstart
+|:--|:|--|
+|[Computer Vision](./computer-vision/index.yml "Computer Vision")|The Computer Vision service provides you with access to advanced cognitive algorithms for processing images and returning information.| [Computer Vision quickstart](./computer-vision/quickstarts-sdk/client-library.md)|
+|[Custom Vision](./custom-vision-service/index.yml "Custom Vision Service")|The Custom Vision Service lets you build, deploy, and improve your own image classifiers. An image classifier is an AI service that applies labels to images, based on their visual characteristics. | [Custom Vision quickstart](./custom-vision-service/getting-started-build-a-classifier.md)|
+|[Face](./face/index.yml "Face")| The Face service provides access to advanced face algorithms, enabling face attribute detection and recognition.| [Face quickstart](./face/quickstarts/client-libraries.md)|
## Speech APIs
-|Service Name|Service Description|
-|:--|:|
-|[Speech service](./speech-service/index.yml "Speech service")|Speech service adds speech-enabled features to applications. Speech service includes various capabilities like speech-to-text, text-to-speech, speech translation, and many more.|
+|Service Name|Service Description| Quickstart|
+|:--|:|--|
+|[Speech service](./speech-service/index.yml "Speech service")|Speech service adds speech-enabled features to applications. Speech service includes various capabilities like speech-to-text, text-to-speech, speech translation, and many more.| Go to the [Speech documentation](./speech-service/index.yml) to choose a subservice quickstart.|
<!-- |[Speaker Recognition API](./speech-service/speaker-recognition-overview.md "Speaker Recognition API") (Preview)|The Speaker Recognition API provides algorithms for speaker identification and verification.| |[Bing Speech](./speech-service/how-to-migrate-from-bing-speech.md "Bing Speech") (Retiring)|The Bing Speech API provides you with an easy way to create speech-enabled features in your applications.|
Cognitive Services can be categorized into four main pillars:
## Language APIs
-|Service Name|Service Description|
-|:--|:|
-|[Azure Cognitive Service for language](./language-service/index.yml "Language service")| Azure Cognitive Service for Language provides several Natural Language Processing (NLP) features to understand and analyze text.|
-|[Translator](./translator/index.yml "Translator")|Translator provides machine-based text translation in near real time.|
-|[Language Understanding LUIS](./luis/index.yml "Language Understanding")|Language Understanding (LUIS) is a cloud-based conversational AI service that applies custom machine-learning intelligence to a user's conversational or natural language text to predict overall meaning and pull out relevant information. [See LUIS quickstart](./luis/luis-get-started-create-app.md) to get started with the service.|
-|[QnA Maker](./qnamaker/index.yml "QnA Maker")|QnA Maker allows you to build a question and answer service from your semi-structured content. [See QnA Maker quickstart](./qnamaker/quickstarts/create-publish-knowledge-base.md) to get started with the service.|
+|Service Name|Service Description| Quickstart|
+|:--|:|--|
+|[Language service](./language-service/index.yml "Language service")| Azure Language service provides several Natural Language Processing (NLP) features to understand and analyze text.| Go to the [Language documentation](./language-service/index.yml) to choose a subservice quickstart.|
+|[Translator](./translator/index.yml "Translator")|Translator provides machine-based text translation in near real time.| [Translator quickstart](./translator/quickstart-translator.md)|
+|[Language Understanding LUIS](./luis/index.yml "Language Understanding")|Language Understanding (LUIS) is a cloud-based conversational AI service that applies custom machine-learning intelligence to a user's conversational or natural language text to predict overall meaning and pull out relevant information. |[LUIS quickstart](./luis/luis-get-started-create-app.md)|
+|[QnA Maker](./qnamaker/index.yml "QnA Maker")|QnA Maker allows you to build a question and answer service from your semi-structured content.| [QnA Maker quickstart](./qnamaker/quickstarts/create-publish-knowledge-base.md) |
## Decision APIs
-|Service Name|Service Description|
-|:--|:|
-|[Anomaly Detector](./anomaly-detector/index.yml "Anomaly Detector") |Anomaly Detector allows you to monitor and detect abnormalities in your time series data. See [Anomaly Detector quickstart](./anomaly-detector/quickstarts/client-libraries.md) to get started with the service.|
-|[Content Moderator](./content-moderator/overview.md "Content Moderator")|Content Moderator provides monitoring for possible offensive, undesirable, and risky content. See [Content Moderator quickstart](./content-moderator/client-libraries.md) to get started with the service.|
-|[Personalizer](./personalizer/index.yml "Personalizer")|Personalizer allows you to choose the best experience to show to your users, learning from their real-time behavior. See [Personalizer quickstart](./personalizer/quickstart-personalizer-sdk.md) to get started with the service.|
+|Service Name|Service Description| Quickstart|
+|:--|:|--|
+|[Anomaly Detector](./anomaly-detector/index.yml "Anomaly Detector") |Anomaly Detector allows you to monitor and detect abnormalities in your time series data.| [Anomaly Detector quickstart](./anomaly-detector/quickstarts/client-libraries.md) |
+|[Content Moderator](./content-moderator/overview.md "Content Moderator")|Content Moderator provides monitoring for possible offensive, undesirable, and risky content. | [Content Moderator quickstart](./content-moderator/client-libraries.md)|
+|[Personalizer](./personalizer/index.yml "Personalizer")|Personalizer allows you to choose the best experience to show to your users, learning from their real-time behavior. |[Personalizer quickstart](./personalizer/quickstart-personalizer-sdk.md)|
-## Get started with Cognitive Services
+## Create a Cognitive Services resource
-Start by creating a Cognitive Services resource with hands-on quickstarts using the following methods:
+You can create a Cognitive Services resource with hands-on quickstarts using any of the following methods:
* [Azure portal](cognitive-services-apis-create-account.md?tabs=multiservice%2Cwindows "Azure portal")
* [Azure CLI](cognitive-services-apis-create-account-cli.md?tabs=windows "Azure CLI")
* [Azure SDK client libraries](cognitive-services-apis-create-account-cli.md?tabs=windows "cognitive-services-apis-create-account-client-library?pivots=programming-language-csharp")
* [Azure Resource Manager (ARM template)](./create-account-resource-manager-template.md?tabs=portal "Azure Resource Manager (ARM template)")
-## Using Cognitive Services in different development environments
+## Use Cognitive Services in different development environments
With Azure and Cognitive Services, you have access to several development options, such as:
With Azure and Cognitive Services, you have access to several development option
To learn more, see [Cognitive Services development options](./cognitive-services-development-options.md).
+### Containers for Cognitive Services
+
+Azure Cognitive Services also provides several Docker containers that let you use the same APIs that are available from Azure, on-premises. These containers give you the flexibility to bring Cognitive Services closer to your data for compliance, security, or other operational reasons. For more information, see [Cognitive Services Containers](cognitive-services-container-support.md "Cognitive Services Containers").
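For example, pulling and running one of these containers with Docker follows the usual pattern. The image name below (a Text Analytics sentiment container) and the billing endpoint and key are illustrative placeholders; check the container documentation for the exact image names, tags, and resource requirements of the container you need.

```shell
# Placeholder image tag, endpoint, and key; consult the container docs for real values.
docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment:latest
docker run --rm -p 5000:5000 --memory 8g --cpus 1 \
  mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment:latest \
  Eula=accept \
  Billing=https://<your-resource>.cognitiveservices.azure.com/ \
  ApiKey=<your-key>
```

The container exposes the same REST API locally on port 5000, while metering usage against the Azure resource named in `Billing`.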
+ <!-- ## Subscription management
Once you are signed in with your Microsoft Account, you can access [My subscript
All APIs have a free tier, which has usage and throughput limits. You can increase these limits by using a paid offering and selecting the appropriate pricing tier option when deploying the service in the Azure portal. [Learn more about the offerings and pricing](https://azure.microsoft.com/pricing/details/cognitive-services/ "offerings and pricing"). You'll need to set up an Azure subscriber account with a credit card and a phone number. If you have a special requirement or simply want to talk to sales, click "Contact us" button at the top the pricing page. -->
-## Using Cognitive Services securely
-
-Azure Cognitive Services provides a layered security model, including [authentication](authentication.md "Authentication") via Azure Active Directory credentials, a valid resource key, and [Azure Virtual Networks](cognitive-services-virtual-networks.md "Azure Virtual Networks").
-
-## Containers for Cognitive Services
-
- Azure Cognitive Services provides several Docker containers that let you use the same APIs that are available in Azure, on-premises. Using these containers gives you the flexibility to bring Cognitive Services closer to your data for compliance, security, or other operational reasons. For more information, see [Cognitive Services Containers](cognitive-services-container-support.md "Cognitive Services Containers").
## Regional availability
The APIs in Cognitive Services are hosted on a growing network of Microsoft-mana
Looking for a region we don't support yet? Let us know by filing a feature request on our [UserVoice forum](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858).
-## Supported cultural languages
+## Language support
Cognitive Services supports a wide range of cultural languages at the service level. You can find the language availability for each API in the [supported languages list](language-support.md "Supported languages list").
+## Security
+
+Azure Cognitive Services provides a layered security model, including [authentication](authentication.md "Authentication") with Azure Active Directory credentials, a valid resource key, and [Azure Virtual Networks](cognitive-services-virtual-networks.md "Azure Virtual Networks").
+ ## Certifications and compliance Cognitive Services has been awarded certifications such as CSA STAR Certification, FedRAMP Moderate, and HIPAA BAA. You can [download](https://gallery.technet.microsoft.com/Overview-of-Azure-c1be3942 "Download") certifications for your own audits and security reviews. To understand privacy and data management, go to the [Trust Center](https://servicetrust.microsoft.com/ "Trust Center").
-## Support
+## Help and support
-Cognitive Services provides several support options to help you move forward with creating intelligent applications. Cognitive Services also has a strong community of developers that can help answer your specific questions. For a full list of options available to you, see [Cognitive Services support and help options](cognitive-services-support-options.md "Cognitive Services support and help options").
+Cognitive Services provides several support options to help you move forward with creating intelligent applications. Cognitive Services also has a strong community of developers that can help answer your specific questions. For a full list of support options available to you, see [Cognitive Services support and help options](cognitive-services-support-options.md "Cognitive Services support and help options").
## Next steps
-* [Create a Cognitive Services account](cognitive-services-apis-create-account.md "Create a Cognitive Services account")
+* Select a service from the tables above and learn how it can help you meet your development goals.
+* [Create a Cognitive Services resource using the Azure portal](cognitive-services-apis-create-account.md "Create a Cognitive Services account")
* [Plan and manage costs for Cognitive Services](plan-manage-costs.md)
cosmos-db Create Graph Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/create-graph-python.md
In this quickstart, you create and manage an Azure Cosmos DB Gremlin (graph) API
You can also install the Python driver for Gremlin by using the `pip` command line: ```bash
- pip install gremlinpython
+ pip install gremlinpython==3.4.13
``` - [Git](https://git-scm.com/downloads).
In this quickstart, you create and manage an Azure Cosmos DB Gremlin (graph) API
> [!NOTE] > This quickstart requires a graph database account created after December 20, 2017. Existing accounts will support Python once they're migrated to general availability.
+> [!NOTE]
+> We currently recommend using gremlinpython==3.4.13 with Gremlin (Graph) API as we haven't fully tested all language-specific libraries of version 3.5.* for use with the service.
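As a rough sketch of how the pinned driver version might be used (the account, database, graph, and key names below are hypothetical placeholders, not values from this quickstart):

```python
# Minimal gremlinpython 3.4.13 connection sketch for the Cosmos DB Gremlin API.
# Account, database, graph, and key are hypothetical placeholders.


def gremlin_username(database: str, graph: str) -> str:
    # Cosmos DB expects the Gremlin username in /dbs/<db>/colls/<graph> form.
    return f"/dbs/{database}/colls/{graph}"


def make_gremlin_client(account: str, database: str, graph: str, key: str):
    # Import deferred so the helper above is usable without the driver installed.
    from gremlin_python.driver import client, serializer

    return client.Client(
        f"wss://{account}.gremlin.cosmos.azure.com:443/",
        "g",
        username=gremlin_username(database, graph),
        password=key,
        message_serializer=serializer.GraphSONSerializersV2d0(),
    )
```

The client returned by `make_gremlin_client` can then submit Gremlin queries with `client.submit(...)`, as shown in the quickstart's sample app.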
+ ## Create a database account Before you can create a graph database, you need to create a Gremlin (Graph) database account with Azure Cosmos DB.
cosmos-db How To Manage Consistency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-manage-consistency.md
client = cosmos_client.CosmosClient(self.account_endpoint, {
One of the consistency levels in Azure Cosmos DB is *Session* consistency. This is the default level applied to Cosmos accounts by default. When working with Session consistency, each new write request to Azure Cosmos DB is assigned a new SessionToken. The CosmosClient will use this token internally with each read/query request to ensure that the set consistency level is maintained.
-In some scenarios you need to manage this Session yourself. Consider a web application with multiple nodes, each node will have its own instance of CosmosClient. If you wanted these nodes to participate in the same session (to be able read your own writes consistently across web tiers) you would have to send the SessionToken from FeedResponse<T> of the write action to the end-user using a cookie or some other mechanism, and have that token flow back to the web tier and ultimately the CosmosClient for subsequent reads. If you are using a round-robin load balancer which does not maintain session affinity between requests, such as the Azure Load Balancer, the read could potentially land on a different node to the write request, where the session was created.
+In some scenarios you need to manage this session yourself. Consider a web application with multiple nodes; each node will have its own instance of CosmosClient. If you wanted these nodes to participate in the same session (to be able to read your own writes consistently across web tiers), you would have to send the SessionToken from FeedResponse\<T\> of the write action to the end user using a cookie or some other mechanism, and have that token flow back to the web tier and ultimately to the CosmosClient for subsequent reads. If you are using a round-robin load balancer that does not maintain session affinity between requests, such as Azure Load Balancer, the read could potentially land on a different node from the write request, where the session was created.
If you do not flow the Azure Cosmos DB SessionToken across as described above, you could end up with inconsistent read results for a period of time.
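As a rough sketch of that token flow (assuming the azure-cosmos v4 Python SDK; `container` stands in for a `ContainerProxy`, and the cookie plumbing between browser and web tier is elided):

```python
# Sketch: capture the session token after a write on one node, then pass it
# back on a read from another node so the client can read its own writes.

SESSION_TOKEN_HEADER = "x-ms-session-token"


def extract_session_token(response_headers: dict) -> str:
    """Pull the session token from the response headers of a write request."""
    return response_headers.get(SESSION_TOKEN_HEADER, "")


def read_own_write(container, item_id: str, partition_key: str, session_token: str):
    # The token would travel to the end user (for example, in a cookie) and
    # come back with the next request; supplying it on the read preserves
    # session consistency even without load-balancer session affinity.
    return container.read_item(
        item_id, partition_key=partition_key, session_token=session_token
    )
```

On the write side, the token can be read from the SDK's last response headers (for example, `container.client_connection.last_response_headers`) before being sent to the end user.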
cost-management-billing Programmatically Create Subscription Enterprise Agreement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-enterprise-agreement.md
resource subToMG 'Microsoft.Management/managementGroups/subscriptions@2020-05-01
## Limitations of Azure Enterprise subscription creation API - Only Azure Enterprise subscriptions are created using the API.-- There's a limit of 2000 subscriptions per enrollment account. After that, more subscriptions for the account can only be created in the Azure portal. To create more subscriptions through the API, create another enrollment account. Canceled, deleted, and transferred subscriptions count toward the 2000 limit.
+- There's a limit of 5000 subscriptions per enrollment account. After that, more subscriptions for the account can only be created in the Azure portal. To create more subscriptions through the API, create another enrollment account. Canceled, deleted, and transferred subscriptions count toward the 5000 limit.
- Users who aren't Account Owners, but were added to an enrollment account via Azure RBAC, can't create subscriptions in the Azure portal. - You can't select the tenant for the subscription to be created in. The subscription is always created in the home tenant of the Account Owner. To move the subscription to a different tenant, see [change subscription tenant](../../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md).
cost-management-billing Programmatically Create Subscription Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-preview.md
To see a full list of all parameters, see [az account create](/cli/azure/account
### Limitations of Azure Enterprise subscription creation API - Only Azure Enterprise subscriptions can be created using the API.-- There's a limit of 2000 subscriptions per enrollment account. After that, more subscriptions for the account can only be created in the Azure portal. If you want to create more subscriptions through the API, create another enrollment account. Canceled, deleted, and transferred subscriptions count toward the 2000 limit.
+- There's a limit of 5000 subscriptions per enrollment account. After that, more subscriptions for the account can only be created in the Azure portal. If you want to create more subscriptions through the API, create another enrollment account. Canceled, deleted, and transferred subscriptions count toward the 5000 limit.
- Users who aren't Account Owners, but were added to an enrollment account with Azure RBAC, can't create subscriptions in the Azure portal. - You can't select the tenant for the subscription to be created in. The subscription is always created in the home tenant of the Account Owner. To move the subscription to a different tenant, see [change subscription tenant](../../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md).
cost-management-billing Track Consumption Commitment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/track-consumption-commitment.md
tags: billing
Previously updated : 10/11/2021 Last updated : 03/02/2022
The Microsoft Azure Consumption Commitment (MACC) is a contractual commitment that your organization may have made to Microsoft Azure spend over time. If your organization has a MACC for a Microsoft Customer Agreement (MCA) billing account or an Enterprise Agreement (EA) billing account you can check important aspects of your commitment, including start and end dates, remaining commitment, and eligible spend in the Azure portal or through REST APIs.
+In the scenario that a MACC commitment has been transacted prior to the expiration or completion of a prior MACC (on the same enrollment/billing account), actual decrement of a commitment will begin upon completion or expiration of the prior commitment. In other words, if you have a new MACC following the expiration or completion of an older MACC on the same enrollment or billing account, use of the new commitment starts when the old commitment expires or is completed.
+ ## Track your MACC Commitment ### [Azure portal](#tab/portal)
cost-management-billing View All Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/view-all-accounts.md
Azure portal supports the following type of billing accounts:
- **Microsoft Online Services Program**: A billing account for a Microsoft Online Services Program is created when you sign up for Azure through the Azure website. For example, when you sign up for an [Azure Free Account](https://azure.microsoft.com/offers/ms-azr-0044p/), [account with pay-as-you-go rates](https://azure.microsoft.com/offers/ms-azr-0003p/) or as a [Visual studio subscriber](https://azure.microsoft.com/pricing/member-offers/credit-for-visual-studio-subscribers/). -- **Enterprise Agreement**: A billing account for an Enterprise Agreement is created when your organization signs an [Enterprise Agreement (EA)](https://azure.microsoft.com/pricing/enterprise-agreement/) to use Azure. You can have a maximum of 2000 subscriptions in an Enterprise Agreement. You can also have an unlimited number of enrollment accounts, effectively allowing an unlimited number of subscriptions.
+- **Enterprise Agreement**: A billing account for an Enterprise Agreement is created when your organization signs an [Enterprise Agreement (EA)](https://azure.microsoft.com/pricing/enterprise-agreement/) to use Azure. You can have a maximum of 5000 subscriptions in an Enterprise Agreement. You can also have an unlimited number of enrollment accounts, effectively allowing an unlimited number of subscriptions.
- **Microsoft Customer Agreement**: A billing account for a Microsoft Customer Agreement is created when your organization works with a Microsoft representative to sign a Microsoft Customer Agreement. Some customers in select regions, who sign up through the Azure website for an [account with pay-as-you-go rates](https://azure.microsoft.com/offers/ms-azr-0003p/) or an [Azure Free Account](https://azure.microsoft.com/offers/ms-azr-0044p/) may have a billing account for a Microsoft Customer Agreement as well. You can have a maximum of 20 subscriptions in a Microsoft Customer Agreement for an individual. A Microsoft Customer Agreement for an enterprise doesn't have a limit on the number of subscriptions. For more information, see [Get started with your billing account for Microsoft Customer Agreement](../understand/mca-overview.md).
data-factory Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-overview.md
Previously updated : 02/25/2022 Last updated : 03/02/2022
data-factory Data Flow Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-source.md
Previously updated : 02/25/2022 Last updated : 03/02/2022 # Source transformation in mapping data flow
Mapping data flow follows an extract, load, and transform (ELT) approach and wor
| [Dynamics 365](connector-dynamics-crm-office-365.md#mapping-data-flow-properties) | | Γ£ô/Γ£ô | | [Dynamics CRM](connector-dynamics-crm-office-365.md#mapping-data-flow-properties) | | Γ£ô/Γ£ô | | [Hive](connector-hive.md#mapping-data-flow-properties) | | -/Γ£ô |
+| [Quickbase (Preview)](connector-quickbase.md#mapping-data-flow-properties) | | -/Γ£ô |
+| [Smartsheet (Preview)](connector-smartsheet.md#mapping-data-flow-properties) | | -/Γ£ô |
| [Snowflake](connector-snowflake.md) | | Γ£ô/Γ£ô | | [SQL Server](connector-sql-server.md) | | Γ£ô/Γ£ô | | [REST](connector-rest.md#mapping-data-flow-properties) | | Γ£ô/Γ£ô | | [TeamDesk (Preview)](connector-teamdesk.md#mapping-data-flow-properties) | | -/Γ£ô |
+| [Zendesk (Preview)](connector-zendesk.md#mapping-data-flow-properties) | | -/Γ£ô |
Settings specific to these connectors are located on the **Source options** tab. Information and data flow script examples on these settings are located in the connector documentation.
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
Microsoft Defender for Containers provides security alerts on the cluster level
| **Command within a container running with high privileges (Preview)**<br>(K8S.NODE_PrivilegedExecutionInContainer) | Machine logs indicate that a privileged command was run in a Docker container. A privileged command has extended privileges on the host machine. | PrivilegeEscalation | Low | | **Container running in privileged mode (Preview)**<br>(K8S.NODE_PrivilegedContainerArtifacts) | Machine logs indicate that a privileged Docker container is running. A privileged container has full access to the host's resources. If compromised, an attacker can use the privileged container to gain access to the host machine. | PrivilegeEscalation, Execution | Low | | **Container with a sensitive volume mount detected**<br>(K8S_SensitiveMount) | Kubernetes audit log analysis detected a new container with a sensitive volume mount. The volume that was detected is a hostPath type which mounts a sensitive file or folder from the node to the container. If the container gets compromised, the attacker can use this mount for gaining access to the node. | Privilege Escalation | Medium |
-| **CoreDNS modification in Kubernetes detected**<br>(K8S_CoreDnsModification) | Kubernetes audit log analysis detected a modification of the CoreDNS configuration. The configuration of CoreDNS can be modified by overriding its configmap. While this activity can be legitimate, if attackers have permissions to modify the configmap, they can change the behavior of the cluster's DNS server and poison it. | Lateral Movement | Low |
+| **CoreDNS modification in Kubernetes detected**<br>(K8S_CoreDnsModification) <sup>[1](#footnote1)</sup> | Kubernetes audit log analysis detected a modification of the CoreDNS configuration. The configuration of CoreDNS can be modified by overriding its configmap. While this activity can be legitimate, if attackers have permissions to modify the configmap, they can change the behavior of the cluster's DNS server and poison it. | Lateral Movement | Low |
| **Creation of admission webhook configuration detected**<br>(K8S_AdmissionController) | Kubernetes audit log analysis detected a new admission webhook configuration. Kubernetes has two built-in generic admission controllers: MutatingAdmissionWebhook and ValidatingAdmissionWebhook. The behavior of these admission controllers is determined by an admission webhook that the user deploys to the cluster. The usage of such admission controllers can be legitimate, however attackers can use such webhooks for modifying the requests (in case of MutatingAdmissionWebhook) or inspecting the requests and gain sensitive information (in case of ValidatingAdmissionWebhook). | Credential Access, Persistence | Low | | **Detected file download from a known malicious source (Preview)**<br>(K8S.NODE_SuspectDownload) | Analysis of processes running within a container detected download of a file from a source frequently used to distribute malware. | PrivilegeEscalation, Execution, Exfiltration, Command And Control | Medium | | **Detected Persistence Attempt (Preview)**<br>(K8S.NODE_NewSingleUserModeStartupScript) | Analysis of processes running within a container detected installation of a startup script for single-user mode. It is extremely rare that any legitimate process needs to execute in that mode so it may indicate an attacker has added a malicious process to every run-level to guarantee persistence. | Persistence | Medium |
Microsoft Defender for Containers provides security alerts on the cluster level
| **Exposed Redis service in AKS detected**<br>(K8S_ExposedRedis) | The Kubernetes audit log analysis detected exposure of a Redis service by a load balancer. If the service doesn't require authentication, exposing it to the internet poses a security risk. | Initial Access | Low | | **Indicators associated with DDOS toolkit detected (Preview)**<br>(K8S.NODE_KnownLinuxDDoSToolkit) | Analysis of processes running within a container detected file names that are part of a toolkit associated with malware capable of launching DDoS attacks, opening ports and services, and taking full control over the infected system. This could also possibly be legitimate activity. | Persistence, LateralMovement, Execution, Exploitation | Medium | | **K8S API requests from proxy IP address detected**<br>(K8S_TI_Proxy) | Kubernetes audit log analysis detected API requests to your cluster from an IP address that is associated with proxy services, such as TOR. While this behavior can be legitimate, it's often seen in malicious activities, when attackers try to hide their source IP. | Execution | Low |
-| **Kubernetes events deleted**<br>(K8S_DeleteEvents) | Defender for Cloud detected that some Kubernetes events have been deleted. Kubernetes events are objects in Kubernetes which contain information about changes in the cluster. Attackers might delete those events for hiding their operations in the cluster. | Defense Evasion | Low |
+| **Kubernetes events deleted**<br>(K8S_DeleteEvents) <sup>[1](#footnote1)</sup> | Defender for Cloud detected that some Kubernetes events have been deleted. Kubernetes events are objects in Kubernetes which contain information about changes in the cluster. Attackers might delete those events for hiding their operations in the cluster. | Defense Evasion | Low |
| **Kubernetes penetration testing tool detected**<br>(K8S_PenTestToolsKubeHunter) | Kubernetes audit log analysis detected usage of Kubernetes penetration testing tool in the AKS cluster. While this behavior can be legitimate, attackers might use such public tools for malicious purposes. | Execution | Low | | **Local host reconnaissance detected (Preview)**<br>(K8S.NODE_LinuxReconnaissance) | Analysis of processes running within a container detected the execution of a command normally associated with common Linux bot reconnaissance. | Discovery | Medium | | **Manipulation of host firewall detected (Preview)**<br>(K8S.NODE_FirewallDisabled) | Analysis of processes running within a container detected possible manipulation of the on-host firewall. Attackers will often disable this to exfiltrate data. | DefenseEvasion, Exfiltration | Medium |
Microsoft Defender for Containers provides security alerts on the cluster level
| **A possible connection to malicious location has been detected. (Preview)**<br>(K8S.NODE_ThreatIntelCommandLineSuspectDomain) | Analysis of processes running within a container detected a connection to a location that has been reported to be malicious or unusual. This is an indicator that a compromise may have occurred. | InitialAccess | Medium | | | | | | -
+<sup><a name="footnote1"></a>1</sup>: **Limitations on GKE clusters**: GKE uses a Kubernetes audit policy that doesn't support all alert types. As a result, these security alerts, which are based on Kubernetes audit events, are not supported for GKE clusters.
+
## <a name="alerts-sql-db-and-warehouse"></a>Alerts for SQL Database and Azure Synapse Analytics [Further details and notes](defender-for-sql-introduction.md)
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
Title: Archive of what's new in Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud from six months ago and earlier. Previously updated : 02/27/2022 Last updated : 03/02/2022 # Archive for what's new in Defender for Cloud?
This page provides you with information about:
- Bug fixes - Deprecated functionality
+## September 2021
+
+In September, the following update was released:
+
+### Two new recommendations to audit OS configurations for Azure security baseline compliance (in preview)
+
+The following two recommendations have been released to assess your machines' compliance with the [Windows security baseline](../governance/policy/samples/guest-configuration-baseline-windows.md) and the [Linux security baseline](../governance/policy/samples/guest-configuration-baseline-linux.md):
+
+- For Windows machines, [Vulnerabilities in security configuration on your Windows machines should be remediated (powered by Guest Configuration)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1f655fb7-63ca-4980-91a3-56dbc2b715c6)
+- For Linux machines, [Vulnerabilities in security configuration on your Linux machines should be remediated (powered by Guest Configuration)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/8c3d9ad0-3639-4686-9cd2-2b2ab2609bda)
+
+These recommendations make use of the guest configuration feature of Azure Policy to compare the OS configuration of a machine with the baseline defined in the [Azure Security Benchmark](/security/benchmark/azure/overview).
+
+Learn more about using these recommendations in [Harden a machine's OS configuration using guest configuration](apply-security-baseline.md).
+ ## August 2021 Updates in August include:
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 03/01/2022 Last updated : 03/02/2022 # What's new in Microsoft Defender for Cloud?
To learn about *planned* changes that are coming soon to Defender for Cloud, see
> [!TIP] > If you're looking for items older than six months, you'll find them in the [Archive for What's new in Microsoft Defender for Cloud](release-notes-archive.md).
+## March 2022
+
+Updates in March include:
+
+- [Deprecated the recommendations to install the network traffic data collection agent](#deprecated-the-recommendations-to-install-the-network-traffic-data-collection-agent)
+
+### Deprecated the recommendations to install the network traffic data collection agent
+
+Changes in our roadmap and priorities have removed the need for the network traffic data collection agent. Consequently, the following two recommendations and their related policies were deprecated.
+
+|Recommendation |Description |Severity |
+||||
+|[Network traffic data collection agent should be installed on Linux virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/8c3e93d3-0276-4d06-b20a-9a9f3012742c) |Defender for Cloud uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. |Medium |
+|[Network traffic data collection agent should be installed on Windows virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/24d8af06-d441-40b4-a49c-311421aa9f58) |Defender for Cloud uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations, and specific network threats. |Medium |
+|||
+ ## February 2022 Updates in February include: -- [Kubernetes workload protection for Arc enabled K8s clusters](#kubernetes-workload-protection-for-arc-enabled-k8s-clusters)
+- [Kubernetes workload protection for Arc-enabled Kubernetes clusters](#kubernetes-workload-protection-for-arc-enabled-kubernetes-clusters)
- [Native CSPM for GCP and threat protection for GCP compute instances](#native-cspm-for-gcp-and-threat-protection-for-gcp-compute-instances) - [Microsoft Defender for Azure Cosmos DB plan released for preview](#microsoft-defender-for-azure-cosmos-db-plan-released-for-preview) - [Threat protection for Google Kubernetes Engine (GKE) clusters](#threat-protection-for-google-kubernetes-engine-gke-clusters)
-### Kubernetes workload protection for Arc enabled K8s clusters
+### Kubernetes workload protection for Arc-enabled Kubernetes clusters
-Defender for Containers for Kubernetes workloads previously only protected AKS. We've now extended the protective coverage to include Azure Arc enabled Kubernetes clusters.
+Defender for Containers previously only protected Kubernetes workloads running in Azure Kubernetes Service. We've now extended the protective coverage to include Azure Arc-enabled Kubernetes clusters.
Learn how to [set up your Kubernetes workload protection](kubernetes-workload-protections.md#set-up-your-workload-protection) for AKS and Azure Arc-enabled Kubernetes clusters.
These alerts are generated based on a new machine learning model and Kubernetes
||| For a full list of the Kubernetes alerts, see [Alerts for Kubernetes clusters](alerts-reference.md#alerts-k8scluster).-
-## September 2021
-
-In September, the following update was released:
-
-### Two new recommendations to audit OS configurations for Azure security baseline compliance (in preview)
-
-The following two recommendations have been released to assess your machines' compliance with the [Windows security baseline](../governance/policy/samples/guest-configuration-baseline-windows.md) and the [Linux security baseline](../governance/policy/samples/guest-configuration-baseline-linux.md):
--- For Windows machines, [Vulnerabilities in security configuration on your Windows machines should be remediated (powered by Guest Configuration)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1f655fb7-63ca-4980-91a3-56dbc2b715c6)-- For Linux machines, [Vulnerabilities in security configuration on your Linux machines should be remediated (powered by Guest Configuration)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/8c3d9ad0-3639-4686-9cd2-2b2ab2609bda)-
-These recommendations make use of the guest configuration feature of Azure Policy to compare the OS configuration of a machine with the baseline defined in the [Azure Security Benchmark](/security/benchmark/azure/overview).
-
-Learn more about using these recommendations in [Harden a machine's OS configuration using guest configuration](apply-security-baseline.md).
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
If you're looking for the latest release notes, you'll find them in the [What's
| [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | February 2022 | | [Deprecating the recommendation to use service principals to protect your subscriptions](#deprecating-the-recommendation-to-use-service-principals-to-protect-your-subscriptions) | February 2022 | | [Moving recommendation Vulnerabilities in container security configurations should be remediated from the secure score to best practices](#moving-recommendation-vulnerabilities-in-container-security-configurations-should-be-remediated-from-the-secure-score-to-best-practices) | February 2022 |
-| [Deprecating the recommendations to install the network traffic data collection agent](#deprecating-the-recommendations-to-install-the-network-traffic-data-collection-agent) | February 2022 |
| [Changes to recommendations for managing endpoint protection solutions](#changes-to-recommendations-for-managing-endpoint-protection-solutions) | March 2022 | | [AWS recommendations to GA](#aws-recommendations-to-ga) | March 2022 | | [Relocation of custom recommendations](#relocation-of-custom-recommendations) | March 2022 |
Learn more:
- [Overview of Azure Cloud Services (classic)](../cloud-services/cloud-services-choose-me.md) - [Workflow of Windows Azure classic VM Architecture - including RDFE workflow basics](../cloud-services/cloud-services-workflow-process.md) -
-### Deprecating the recommendations to install the network traffic data collection agent
-
-**Estimated date for change:** February 2022
-
-Changes in our roadmap and priorities have removed the need for the network traffic data collection agent. Consequently, we'll be deprecating the following two recommendations and their related policies.
-
-|Recommendation |Description |Severity |
-||||
-|[Network traffic data collection agent should be installed on Linux virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/8c3e93d3-0276-4d06-b20a-9a9f3012742c) |Defender for Cloud uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats.<br />(Related policy: [Network traffic data collection agent should be installed on Linux virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f04c4380f-3fae-46e8-96c9-30193528f602)) |Medium |
-|[Network traffic data collection agent should be installed on Windows virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/24d8af06-d441-40b4-a49c-311421aa9f58) |Defender for Cloud uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations, and specific network threats.<br />(Related policy: [Network traffic data collection agent should be installed on Windows virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f2f2ee1de-44aa-4762-b6bd-0893fc3f306d)) |Medium |
-|||
### Moving recommendation Vulnerabilities in container security configurations should be remediated from the secure score to best practices

**Estimated date for change:** February 2022
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/release-notes.md
Listed below are the support, breaking change policies for Defender for IoT, and
## February 2022
-**Version 4.1.1**:
+**Version 4.1.2**:
- **Micro agent for Edge is now in Public Preview**: The micro-agent supports IoT Edge devices, with an easy installation and identity provisioning process that uses an automatically provisioned module identity to authenticate Edge devices without the need to perform any manual authentication.
- **New directory structure**: Now aligned with the standard Linux installation directory structure.
- Due to this change, updates to version 4.1.1 require you to reauthenticate the micro agent and save your connection string in the new location. For more information, see [Upgrade the Microsoft Defender for IoT micro agent](upgrade-micro-agent.md).
+ Due to this change, updates to version 4.1.2 require you to reauthenticate the micro agent and save your connection string in the new location. For more information, see [Upgrade the Microsoft Defender for IoT micro agent](upgrade-micro-agent.md).
- **SBoM collector**: The SBoM collector now collects the packages installed on the device periodically. For more information, see [Micro agent event collection (Preview)](concept-event-aggregation.md).
defender-for-iot Upgrade Micro Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/upgrade-micro-agent.md
For more information, see our [release notes for device builders](release-notes.
## Upgrade a standalone micro agent from a legacy version
-This section is relevant specifically when upgrading a micro agent from version 3.13.1 or lower to version 4.1.1 or higher.
+This section is relevant specifically when upgrading a micro agent from version 3.13.1 or lower to version 4.1.2 or higher.
-In version 4.1.1, the standalone micro agent directory changed to align with standard Linux installation directory structures. This change requires customers to reauthenticate the micro agent and modify the connection string location.
+In version 4.1.2, the standalone micro agent directory changed to align with standard Linux installation directory structures. This change requires customers to reauthenticate the micro agent and modify the connection string location.
1. Upgrade your micro agent as described [above](#upgrade-a-standalone-micro-agent).
defender-for-iot How To Activate And Set Up Your On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-activate-and-set-up-your-on-premises-management-console.md
Activation and setup of the on-premises management console ensures that:

- Network devices that you're monitoring through connected sensors are registered with an Azure account.
- Sensors send information to the on-premises management console.
- The on-premises management console carries out management tasks on connected sensors.
-- You have installed an SSL certificate.
+- You've installed an SSL certificate.
## Sign in for the first time
-**To sign in to the on-premises management console:**
+To sign in to the on-premises management console:
-1. Navigate to the IP address you received for the on-premises management console during the system installation.
+1. Go to the IP address you received for the on-premises management console during the system installation.
1. Enter the username and password you received for the on-premises management console during the system installation.
-If you forgot your password, select the **Recover Password** option, and see [Password recovery](how-to-manage-the-on-premises-management-console.md#password-recovery) for instructions on how to recover your password.
+If you forgot your password, select the **Recover Password** option. See [Password recovery](how-to-manage-the-on-premises-management-console.md#password-recovery) for instructions on how to recover your password.
## Activate the on-premises management console
-After you sign in for the first time, you will need to activate the on-premises management console by getting, and uploading an activation file.
+After you sign in for the first time, you need to activate the on-premises management console by getting and uploading an activation file.
-**To activate the on-premises management console:**
+To activate the on-premises management console:
1. Sign in to the on-premises management console.
-1. In the alert notification at the top of the screen, select the **Take Action** link.
+1. In the alert notification at the top of the screen, select **Take Action**.
- :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/take-action.png" alt-text="Select the Take Action link from the alert on the top of the screen.":::
+ :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/take-action.png" alt-text="Screenshot that shows the Take Action link in the alert at the top of the screen.":::
-1. In the Activation popup screen, select the **Azure portal** link.
+1. In the **Activation** pop-up screen, select **Azure portal**.
- :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/azure-portal.png" alt-text="Select the Azure portal link from the popup message.":::
+ :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/azure-portal.png" alt-text="Screenshot that shows the Azure portal link in the pop-up message.":::
-1. Select a subscription to associate the on-premises management console to, and then select the **Download on-premises management console activation file** button. The activation file is downloaded.
+1. Select a subscription to associate the on-premises management console to. Then select **Download on-premises management console activation file**. The activation file downloads.
- The on-premises management console can be associated to one, or more subscriptions. The activation file will be associated with all of the selected subscriptions, and the number of committed devices at the time of download.
+ The on-premises management console can be associated to one or more subscriptions. The activation file is associated with all the selected subscriptions and the number of committed devices at the time of download.
- :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/multiple-subscriptions.png" alt-text="You can select multiple subscriptions to onboard your on-premises management console to." lightbox="media/how-to-manage-sensors-from-the-on-premises-management-console/multiple-subscriptions.png":::
+ :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/multiple-subscriptions.png" alt-text="Screenshot that shows selecting multiple subscriptions." lightbox="media/how-to-manage-sensors-from-the-on-premises-management-console/multiple-subscriptions.png":::
- If you have not already onboarded a subscription, then [Onboard a subscription](how-to-manage-subscriptions.md#onboard-a-subscription).
+ If you haven't already onboarded a subscription, see [Onboard a subscription](how-to-manage-subscriptions.md#onboard-a-subscription).
> [!Note]
- > If you delete a subscription, you will need to upload a new activation file to all on-premises management console that was affiliated with the deleted subscription.
+ > If you delete a subscription, you must upload a new activation file to the on-premises management console that was affiliated with the deleted subscription.
-1. Navigate back to the **Activation** popup screen and select **Choose File**.
+1. Go back to the **Activation** pop-up screen and select **CHOOSE FILE**.
1. Select the downloaded file.
-After initial activation, the number of monitored devices can exceed the number of committed devices defined during onboarding. This issue occurs if you connect more sensors to the management console. If there's a discrepancy between the number of monitored devices, and the number of committed devices, a warning will appear on the management console.
+After initial activation, the number of monitored devices might exceed the number of committed devices defined during onboarding. This issue occurs if you connect more sensors to the management console. If there's a discrepancy between the number of monitored devices and the number of committed devices, a warning appears on the management console.
If this warning appears, you need to upload a [new activation file](#activate-the-on-premises-management-console).

### Activate an expired license (versions under 10.0)
-For users with versions prior to 10.0, your license may expire, and the following alert will be displayed.
+For users with versions prior to 10.0, your license might expire and the following alert will appear:
-**To activate your license:**
+To activate your license:
1. Open a case with [support](https://portal.azure.com/?passwordRecovery=true&Microsoft_Azure_IoT_Defender=canary#create/Microsoft.Support).
-1. Supply support with your Activation ID number.
+1. Supply support with your **Activation ID** number.
1. Support will supply you with new license information in the form of a string of letters.
-1. Read the terms and conditions, and check the checkbox to approve.
+1. Read the terms and conditions, and select the checkbox to approve.
-1. Paste the string into space provided.
+1. Paste the string into the space provided.
- :::image type="content" source="media/how-to-activate-and-set-up-your-on-premises-management-console/add-license.png" alt-text="Paste the string into the provided field.":::
+ :::image type="content" source="media/how-to-activate-and-set-up-your-on-premises-management-console/add-license.png" alt-text="Screenshot that shows pasting the string into the box.":::
1. Select **Activate**.

## Set up a certificate
-After you install the management console, a local self-signed certificate is generated. This certificate is used to access the console. After an administrator signs in to the management console for the first time, that user is prompted to onboard an SSL/TLS certificate.
+After you install the management console, a local self-signed certificate is generated. This certificate is used to access the console. After an administrator signs in to the management console for the first time, that user is prompted to onboard an SSL/TLS certificate.
Two levels of security are available:

- Meet specific certificate and encryption requirements requested by your organization by uploading the CA-signed certificate.
-- Allow validation between the management console and connected sensors. Validation is evaluated against a certificate revocation list and the certificate expiration date. *If validation fails, communication between the management console and the sensor is halted and a validation error is presented in the console.* This option is enabled by default after installation.
+- Allow validation between the management console and connected sensors. Validation is evaluated against a certificate revocation list and the certificate expiration date. *If validation fails, communication between the management console and the sensor is halted and a validation error is presented in the console.* This option is enabled by default after installation.
The console supports the following types of certificates:

- Private and Enterprise Key Infrastructure (private PKI)
- Public Key Infrastructure (public PKI)
-- Locally generated on the appliance (locally self-signed)
+- Locally generated on the appliance (locally self-signed)
> [!IMPORTANT]
- > We recommend that you don't use a self-signed certificate. The certificate is not secure and should be used for test environments only. The owner of the certificate can't be validated, and the security of your system can't be maintained. Never use this option for production networks.
+ > We recommend that you don't use a self-signed certificate. The certificate isn't secure and should be used for test environments only. The owner of the certificate can't be validated, and the security of your system can't be maintained. Never use this option for production networks.
-**To upload a certificate:**
+To upload a certificate:
-1. When you're prompted after sign-in, define a certificate name.
+1. When you're prompted after you sign in, define a certificate name.
1. Upload the CRT and key files.
1. Enter a passphrase and upload a PEM file if necessary.
-You may need to refresh your screen after you upload the CA-signed certificate.
+You might need to refresh your screen after you upload the CA-signed certificate.
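Before you upload the CRT and key files, you can optionally confirm on a workstation that they belong together. The following is a minimal sketch, not an official verification step; it uses standard `openssl` commands, and the `demo.crt`/`demo.key` pair it generates is a placeholder so the check is runnable (with your real files, skip the generation step and point `CERT`/`KEY` at your CA-signed pair):

```bash
# Demo only: create a throwaway self-signed pair so the check below is runnable.
# Never use a self-signed certificate in production; see the note above.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo" \
  -keyout demo.key -out demo.crt 2>/dev/null

CERT=demo.crt   # placeholder: your CA-signed certificate (.crt)
KEY=demo.key    # placeholder: the matching private key (.key)

# A certificate and its private key share the same RSA modulus when they match.
CERT_MOD=$(openssl x509 -noout -modulus -in "$CERT" | openssl md5)
KEY_MOD=$(openssl rsa -noout -modulus -in "$KEY" | openssl md5)

if [ "$CERT_MOD" = "$KEY_MOD" ]; then
  echo "certificate and key match"
else
  echo "certificate and key do NOT match" >&2
fi
```

If the digests differ, the console rejects the pair, so checking locally first can save an upload round trip.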
-**To disable validation between the management console and connected sensors:**
+To disable validation between the management console and connected sensors:
1. Select **Next**.
For information about uploading a new certificate, supported certificate files,
## Connect sensors to the on-premises management console
-Ensure that sensors send information to the on-premises management console, and that the on-premises management console can perform backups, manage alerts, and carry out other activity on the sensors. To do that, use the following procedures to verify that you make an initial connection between sensors and the on-premises management console.
+Ensure that sensors send information to the on-premises management console. Make sure that the on-premises management console can perform backups, manage alerts, and carry out other activity on the sensors. Use the following procedures to verify that you make an initial connection between sensors and the on-premises management console.
Two options are available for connecting Microsoft Defender for IoT sensors to the on-premises management console:

-- Connect from the sensor console
-- Connect by using tunneling
+- Connect from the sensor console.
+- Connect by using tunneling.
After connecting, you must set up a site with these sensors.

### Connect sensors to the on-premises management console from the sensor console
-**To connect sensors to the on-premises management console from the sensor console:**
+To connect sensors to the on-premises management console from the sensor console:
1. On the on-premises management console, select **System Settings**.
-1. Copy the **Copy Connection String**.
+1. Copy the string in the **Copy Connection String** box.
- :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/connection-string.png" alt-text="Copy the connection string for the sensor.":::
+ :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/connection-string.png" alt-text="Screenshot that shows copying the connection string for the sensor.":::
-1. On the sensor, navigate to **System Settings** and select **Connection to Management Console** :::image type="icon" source="media/how-to-manage-sensors-from-the-on-premises-management-console/connection-to-management-console.png" border="false":::
+1. On the sensor, go to **System Settings** and select **Connection to Management Console** :::image type="icon" source="media/how-to-manage-sensors-from-the-on-premises-management-console/connection-to-management-console.png" border="false":::
-1. Paste the copied connection string from the on-premises management console into the **Connection string** field.
+1. Paste the copied connection string from the on-premises management console into the **Connection string** box.
- :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/paste-connection-string.png" alt-text="Paste the copied connection string into the connection string field.":::
+ :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/paste-connection-string.png" alt-text="Screenshot that shows pasting the copied connection string into the Connection string box.":::
1. Select **Connect**.

### Connect sensors by using tunneling
-Enable a secured tunneling connection between organizational sensors and the on-premises management console. This setup circumvents interaction with the organizational firewall, and as a result reduces the attack surface.
+Enable a secured tunneling connection between organizational sensors and the on-premises management console. This setup circumvents interaction with the organizational firewall. As a result, it reduces the attack surface.
Using tunneling allows you to connect to the on-premises management console from its IP address and a single port (9000 by default) to any sensor.
-**To set up tunneling at the on-premises management console:**
+To set up tunneling at the on-premises management console:
-- Sign in to the on-premises management console and run the following commands:
+1. Sign in to the on-premises management console and run the following command:
- ```bash
- cyberx-management-tunnel-enable
-
- ```
+ ```bash
+ cyberx-management-tunnel-enable
+
+ ```
-Allow a few minutes for the connection to initiate.
+1. Allow a few minutes for the connection to start.
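While you wait, you can check from the console's shell whether the tunnel listener has come up. This is an informal sketch, not an official verification step; it assumes the default tunnel port (9000) and that the `ss` utility from iproute2 is available:

```bash
# Assumes the default tunnel port; change PORT if you customized it.
PORT=9000

# ss -tln lists listening TCP sockets; look for the tunnel port among them.
if ss -tln 2>/dev/null | grep -q ":$PORT\b"; then
  echo "a listener is up on port $PORT"
else
  echo "no listener on port $PORT yet; wait a few minutes and retry" >&2
fi
```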
-You can also customize the port range to a number other than 9000 for example. 10000.
+You can also customize the port range to a number other than 9000. An example is 10000.
-**To use a new port:**
+To use a new port:
-- Sign in to the on-premises management console and run the following command:
+1. Sign in to the on-premises management console and run the following command:
- ```bash
- sudo cyberx-management-tunnel-enable --port 10000
-
- ```
+ ```bash
+ sudo cyberx-management-tunnel-enable --port 10000
+
+ ```
-Disable the connection, when required.
+1. Disable the connection, when required.
-**To disable:**
+To disable:
-- Sign in to the on-premises management console and run the following command:
+Sign in to the on-premises management console and run the following command:
```bash
cyberx-management-tunnel-disable
```
No configuration is needed on the sensor.
-**Log files**
+To view log files:
Review log information in the log files.
-**To access log files:**
+To access log files:
-1. Log into the On-premises management console and go to: /var/log/apache2.log
-1. Log into the the sensor and go to: /var/cyberx/logs/tunnel.log
+1. Sign in to the on-premises management console and go to */var/log/apache2.log*.
+1. Sign in to the sensor and go to */var/cyberx/logs/tunnel.log*.
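The two steps above can be combined into a quick sketch that checks each log path before printing its tail. The paths come from this article; each file exists only on its respective machine, and depending on your account you may need `sudo` to read them:

```bash
# Log paths named in this article; each exists only on its respective machine.
for f in /var/log/apache2.log /var/cyberx/logs/tunnel.log; do
  if [ -r "$f" ]; then
    echo "=== last 20 lines of $f ==="
    tail -n 20 "$f"
  else
    echo "note: $f is not present or not readable on this machine" >&2
  fi
done
```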
## Set up a site
The default enterprise map provides an overall view of your devices according to
The view of your devices might be required where the organizational structure and user permissions are complex. In these cases, site setup might be determined by a global organizational structure, in addition to the standard site or zone structure.
-To support this environment, you need to create a global business topology that's based on your organization's business units, regions, sites, and zones. You also need to define user access permissions around these entities by using access groups.
+To support this environment, you must create a global business topology based on your organization's business units, regions, sites, and zones. You also need to define user access permissions around these entities by using access groups.
Access groups enable better control over where users manage and analyze devices in the Defender for IoT platform. ### How it works
-You can define a business unit, and a region for each site in your organization. You can then add zones, which are logical entities that exist in your network.
+You can define a business unit and a region for each site in your organization. You can then add zones, which are logical entities that exist in your network.
Assign at least one sensor per zone. The five-level model provides the flexibility and granularity required to deliver the protection system that reflects the structure of your organization.
-Using the Enterprise View, you can edit your sites directly. When you select a site from the Enterprise View, the number of open alerts appears next to each zone.
+By using the **Enterprise View** screen, you can edit your sites directly. When you select a site on the **Enterprise View** screen, the number of open alerts appears next to each zone.
-**To set up a site:**
+To set up a site:
1. Add new business units to reflect your organization's logical structure.
- 1. From the Enterprise view, select **All Sites** > **Manage Business Units**.
+ 1. On the **Enterprise View** screen, select **All Sites** > **Manage Business Units**.
- :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/manage-business-unit.png" alt-text="Select manage business unit from the all sites drop down menu on the enterprise view screen.":::
+ :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/manage-business-unit.png" alt-text="Screenshot that shows selecting Manage Business Units from the All Sites dropdown menu on the Enterprise View screen.":::
1. Enter the new business unit name and select **ADD**.
1. Add new regions to reflect your organization's regions.
- 1. From the Enterprise View, select **All Regions** > **Manage Regions**.
+ 1. On the **Enterprise View** screen, select **All Regions** > **Manage Regions**.
- :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/manage-regions.png" alt-text="Select all regions and then manage regions to manage the regions in your enterprise.":::
+ :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/manage-regions.png" alt-text="Screenshot that shows selecting All Regions and then selecting Manage Regions to manage the regions in your enterprise.":::
1. Enter the new region name and select **ADD**.
1. Add a site.
- 1. From the Enterprise view, select :::image type="icon" source="media/how-to-activate-and-set-up-your-on-premises-management-console/new-site-icon.png" border="false"::: on the top bar. Your cursor appears as a plus sign (**+**).
+ 1. On the **Enterprise View** screen, select :::image type="icon" source="media/how-to-activate-and-set-up-your-on-premises-management-console/new-site-icon.png" border="false"::: on the top bar. Your cursor appears as a plus sign (**+**).
- 1. Position the **+** at the location of the new site and select it. The **Create New Site** dialog box opens.
+ 1. Position the **+** at the location of the new site and select it. The **Create New Site** dialog opens.
- :::image type="content" source="media/how-to-activate-and-set-up-your-on-premises-management-console/create-new-site-screen.png" alt-text="Screenshot of the Create New Site view.":::
+ :::image type="content" source="media/how-to-activate-and-set-up-your-on-premises-management-console/create-new-site-screen.png" alt-text="Screenshot that shows the Create New Site dialog.":::
1. Define the name and the physical address for the new site and select **SAVE**. The new site appears on the site map.
-4. [Add zones to a site](#create-enterprise-zones).
+1. [Add zones to a site](#create-enterprise-zones).
-5. [Connect the sensors](how-to-manage-individual-sensors.md#connect-a-sensor-to-the-management-console).
+1. [Connect the sensors](how-to-manage-individual-sensors.md#connect-a-sensor-to-the-management-console).
-6. [Assign sensor to site zones](#assign-sensors-to-zones).
+1. [Assign sensors to site zones](#assign-sensors-to-zones).
### Delete a site

If you no longer need a site, you can delete it from your on-premises management console.
-**To delete a site:**
+To delete a site:
-1. In the **Site Management** window, select :::image type="icon" source="media/how-to-activate-and-set-up-your-on-premises-management-console/expand-view-icon.png" border="false"::: from the bar that contains the site name, and then select **Delete Site**. The confirmation box appears, verifying that you want to delete the site.
+1. In the **Site Management** window, select :::image type="icon" source="media/how-to-activate-and-set-up-your-on-premises-management-console/expand-view-icon.png" border="false"::: from the bar that contains the site name. Then select **Delete Site**. A confirmation box appears where you can verify that you want to delete the site.
-2. In the confirmation box, select **CONFIRM**.
+1. In the confirmation box, select **CONFIRM**.
## Create enterprise zones
Zones are logical entities that enable you to divide devices within a site into
You configure zones as a part of the site configuration process. The following table describes the parameters in the **Site Management** window.
| IP | The sensor IP address. |
| Version | The sensor version. |
| Connectivity | The sensor connectivity status. The status can be **Connected** or **Disconnected**. |
-| Last Upgrade | The date of the last upgrade. |
-| Upgrade Progress | The progress bar shows the status of the upgrade process, as follows:<br />- Uploading package<br />- Preparing to install<br />- Stopping processes<br />- Backing up data<br />- Taking snapshot<br />- Updating configuration<br />- Updating dependencies<br />- Updating libraries<br />- Patching databases<br />- Starting processes<br />- Validating system sanity<br />- Validation succeeded<br />- Success<br />- Failure<br />- Upgrade started<br />- Starting installation<br /></br >For details about upgrading, refer to [Microsoft Support](https://support.microsoft.com/) for help. |
+| Last Update | The date of the last update. |
+| Update Progress | The progress bar shows the status of the update process, as follows:<br />- Uploading package<br />- Preparing to install<br />- Stopping processes<br />- Backing up data<br />- Taking snapshot<br />- Updating configuration<br />- Updating dependencies<br />- Updating libraries<br />- Patching databases<br />- Starting processes<br />- Validating system sanity<br />- Validation succeeded<br />- Success<br />- Failure<br />- Update started<br />- Starting installation<br /></br >For details about updating, see [Microsoft Support](https://support.microsoft.com/) for help. |
| Devices | The number of OT devices that the sensor monitors. |
| Alerts | The number of alerts on the sensor. |
| :::image type="icon" source="media/how-to-activate-and-set-up-your-on-premises-management-console/assign-icon.png" border="false"::: | Enables assigning a sensor to zones. |
| :::image type="icon" source="media/how-to-activate-and-set-up-your-on-premises-management-console/number-of-alerts-icon.png" border="false"::: | Indicates the number of alerts sent by sensors that are assigned to the zone. |
| :::image type="icon" source="media/how-to-activate-and-set-up-your-on-premises-management-console/unassign-sensor-icon.png" border="false"::: | Unassigns sensors from zones. |
-**To add a zone to a site:**
+To add a zone to a site:
-1. In the **Site Management** window, select :::image type="icon" source="media/how-to-activate-and-set-up-your-on-premises-management-console/expand-view-icon.png" border="false"::: from the bar that contains the site name, and then select **Add Zone**. The **Create New Zone** dialog box appears.
+1. In the **Site Management** window, select :::image type="icon" source="media/how-to-activate-and-set-up-your-on-premises-management-console/expand-view-icon.png" border="false"::: on the bar that contains the site name. Then select **Add Zone**. The **Create New Zone** dialog appears.
- :::image type="content" source="media/how-to-activate-and-set-up-your-on-premises-management-console/create-new-zone-screen.png" alt-text="Screenshot of the Create New Zone view.":::
+ :::image type="content" source="media/how-to-activate-and-set-up-your-on-premises-management-console/create-new-zone-screen.png" alt-text="Screenshot that shows the Create New Zone view.":::
1. Enter the zone name.
1. Select **SAVE**. The new zone appears in the **Site Management** window under the site that this zone belongs to.
-**To edit a zone:**
+To edit a zone:
-1. In the **Site Management** window, select :::image type="icon" source="media/how-to-activate-and-set-up-your-on-premises-management-console/expand-view-icon.png" border="false"::: from the bar that contains the zone name, and then select **Edit Zone**. The **Edit Zone** dialog box appears.
+1. In the **Site Management** window, select :::image type="icon" source="media/how-to-activate-and-set-up-your-on-premises-management-console/expand-view-icon.png" border="false"::: on the bar that contains the zone name. Then select **Edit Zone**. The **Edit Zone** dialog appears.
- :::image type="content" source="media/how-to-activate-and-set-up-your-on-premises-management-console/zone-edit-screen.png" alt-text="Screenshot that shows the Edit Zone dialog box.":::
+ :::image type="content" source="media/how-to-activate-and-set-up-your-on-premises-management-console/zone-edit-screen.png" alt-text="Screenshot that shows the Edit Zone dialog.":::
1. Edit the zone parameters and select **SAVE**.
-**To delete a zone:**
+To delete a zone:
-1. In the **Site Management** window, select :::image type="icon" source="media/how-to-activate-and-set-up-your-on-premises-management-console/expand-view-icon.png" border="false"::: from the bar that contains the zone name, and then select **Delete Zone**.
+1. In the **Site Management** window, select :::image type="icon" source="media/how-to-activate-and-set-up-your-on-premises-management-console/expand-view-icon.png" border="false"::: on the bar that contains the zone name. Then select **Delete Zone**.
1. In the confirmation box, select **YES**.
-**To filter according to the connectivity status:**
+To filter according to the connectivity status:
-- From the upper-left corner, select :::image type="icon" source="media/how-to-activate-and-set-up-your-on-premises-management-console/down-pointing-icon.png" border="false"::: next to **Connectivity**, and then select one of the following options:
+- In the upper-left corner, select :::image type="icon" source="media/how-to-activate-and-set-up-your-on-premises-management-console/down-pointing-icon.png" border="false"::: next to **Connectivity**. Then select one of the following options:
  - **All**: Presents all the sensors that report to this on-premises management console.
  - **Connected**: Presents only connected sensors.
  - **Disconnected**: Presents only disconnected sensors.
-**To filter according to the upgrade status:**
+To filter according to the upgrade status:
-- From the upper-left corner, select :::image type="icon" source="media/how-to-activate-and-set-up-your-on-premises-management-console/down-pointing-icon.png" border="false"::: next to **Upgrade Status** and select one of the following options:
+- In the upper-left corner, select :::image type="icon" source="media/how-to-activate-and-set-up-your-on-premises-management-console/down-pointing-icon.png" border="false"::: next to **Upgrade Status**. Select one of the following options:
  - **All**: Presents all the sensors that report to this on-premises management console.
  - **Valid**: Presents sensors with a valid upgrade status.
  - **In Progress**: Presents sensors that are in the process of upgrading.
  - **Failed**: Presents sensors whose upgrade process has failed.

## Assign sensors to zones

For each zone, you need to assign sensors that perform local traffic analysis and alerting. You can assign only the sensors that are connected to the on-premises management console.
-**To assign a sensor:**
+To assign a sensor:
-1. Select **Site Management**. The unassigned sensors appear in the upper-left corner of the dialog box.
+1. Select **Site Management**. The unassigned sensors appear in the upper-left corner of the dialog.
- :::image type="content" source="media/how-to-activate-and-set-up-your-on-premises-management-console/unassigned-sensors-view.png" alt-text="Screenshot of the Unassigned Sensors view.":::
+ :::image type="content" source="media/how-to-activate-and-set-up-your-on-premises-management-console/unassigned-sensors-view.png" alt-text="Screenshot that shows the Unassigned Sensors view.":::
-1. Verify that the **Connectivity** status is connected. If not, see [Connect sensors to the on-premises management console](#connect-sensors-to-the-on-premises-management-console) for details about connecting.
+1. Verify that the **Connectivity** status is **Connected**. If it's not, see [Connect sensors to the on-premises management console](#connect-sensors-to-the-on-premises-management-console) for more information about connecting.
1. Select :::image type="icon" source="media/how-to-activate-and-set-up-your-on-premises-management-console/assign-icon.png" border="false"::: for the sensor that you want to assign.
-1. In the **Assign Sensor** dialog box, select the business unit, region, site, and zone to assign.
+1. In the **Assign Sensor** dialog, select the business unit, region, site, and zone to assign.
- :::image type="content" source="media/how-to-activate-and-set-up-your-on-premises-management-console/assign-sensor-screen.png" alt-text="Screenshot of the Assign Sensor view.":::
+ :::image type="content" source="media/how-to-activate-and-set-up-your-on-premises-management-console/assign-sensor-screen.png" alt-text="Screenshot that shows the Assign Sensor view.":::
1. Select **ASSIGN**.
-**To unassign and delete a sensor:**
+To unassign and delete a sensor:
-1. Disconnect the sensor from the on-premises management console. See [Connect sensors to the on-premises management console](#connect-sensors-to-the-on-premises-management-console) for details.
+1. Disconnect the sensor from the on-premises management console. See [Connect sensors to the on-premises management console](#connect-sensors-to-the-on-premises-management-console) for more information.
1. In the **Site Management** window, select the sensor and select :::image type="icon" source="media/how-to-activate-and-set-up-your-on-premises-management-console/unassign-sensor-icon.png" border="false":::. The sensor appears in the list of unassigned sensors after a few moments.
digital-twins Reference Query Clause Join https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/reference-query-clause-join.md
Graph traversal depth is restricted to five `JOIN` levels per query.
#### Example
-The following query illustrates the maximum number of `JOIN` clauses that are possible in an Azure Digital Twins query. It gets all the LightBulbs in Buliding1.
+The following query illustrates the maximum number of `JOIN` clauses that are possible in an Azure Digital Twins query. It gets all the LightBulbs in Building1.
:::code language="sql" source="~/digital-twins-docs-samples/queries/reference.sql" id="MaxJoinExample":::
event-grid Event Schema Container Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-container-registry.md
The following example shows the schema of an image pushed event:
"id": "831e1650-001e-001b-66ab-eeb76e069631", "topic": "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.ContainerRegistry/registries/<name>", "subject": "aci-helloworld:v1",
- "eventType": "ImagePushed",
+ "eventType": "Microsoft.ContainerRegistry.ImagePushed",
"eventTime": "2018-04-25T21:39:47.6549614Z", "data": { "id": "31c51664-e5bd-416a-a5df-e5206bc47ed0", "timestamp": "2018-04-25T21:39:47.276585742Z", "action": "push",
+ "location": "westus",
"target": { "mediaType": "application/vnd.docker.distribution.manifest.v2+json", "size": 3023,
The following example shows the schema of an image pushed event:
"host": "demo.azurecr.io", "method": "PUT", "useragent": "docker/18.03.0-ce go/go1.9.4 git-commit/0520e24 os/windows arch/amd64 UpstreamClient(Docker-Client/18.03.0-ce \\\\(windows\\\\))"
- }
+ },
+ "connectedRegistry": {
+ "name": "edge1"
+ }
},
- "dataVersion": "1.0",
+ "dataVersion": "2.0",
"metadataVersion": "1" }] ```
The schema for an image deleted event is similar:
"id": "f06e3921-301f-42ec-b368-212f7d5354bd", "topic": "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.ContainerRegistry/registries/<name>", "subject": "aci-helloworld",
- "eventType": "ImageDeleted",
+ "eventType": "Microsoft.ContainerRegistry.ImageDeleted",
"eventTime": "2018-04-26T17:56:01.8211268Z", "data": { "id": "f06e3921-301f-42ec-b368-212f7d5354bd", "timestamp": "2018-04-26T17:56:00.996603117Z", "action": "delete",
+ "location": "westus",
"target": { "mediaType": "application/vnd.docker.distribution.manifest.v2+json", "digest": "sha256:213bbc182920ab41e18edc2001e06abcca6735d87782d9cef68abd83941cf0e5",
The schema for an image deleted event is similar:
"host": "demo.azurecr.io", "method": "DELETE", "useragent": "python-requests/2.18.4"
- }
+ },
+ "connectedRegistry": {
+ "name": "edge1"
+ }
},
- "dataVersion": "1.0",
+ "dataVersion": "2.0",
"metadataVersion": "1" }] ```
The schema for a chart pushed event is similar to the schema for an imaged pushe
"id":"ea3a9c28-5b17-40f6-a500-3f02b682927", "timestamp":"2019-03-12T22:16:31.0087496+00:00", "action":"chart_push",
+ "location": "westus",
"target":{ "mediaType":"application/vnd.acr.helm.chart", "size":25265,
The schema for a chart pushed event is similar to the schema for an imaged pushe
"tag":"mychart-1.0.0.tgz", "name":"mychart", "version":"1.0.0"
- }
+ },
+ "connectedRegistry": {
+ "name": "edge1"
+ }
},
- "dataVersion": "1.0",
+ "dataVersion": "2.0",
"metadataVersion": "1" }] ```
The schema for a chart deleted event is similar to the schema for an imaged dele
"id":"ea3a9c28-5b17-40f6-a500-3f02b682927", "timestamp":"2019-03-12T22:42:08.3783775+00:00", "action":"chart_delete",
+ "location": "westus",
"target":{ "mediaType":"application/vnd.acr.helm.chart", "size":25265,
The schema for a chart deleted event is similar to the schema for an imaged dele
"tag":"mychart-1.0.0.tgz", "name":"mychart", "version":"1.0.0"
- }
+ },
+ "connectedRegistry": {
+ "name": "edge1"
+ }
},
- "dataVersion": "1.0",
+ "dataVersion": "2.0",
"metadataVersion": "1" }] ```
The following example shows the schema of an image pushed event:
"id": "831e1650-001e-001b-66ab-eeb76e069631", "source": "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.ContainerRegistry/registries/<name>", "subject": "aci-helloworld:v1",
- "type": "ImagePushed",
+ "type": "Microsoft.ContainerRegistry.ImagePushed",
"time": "2018-04-25T21:39:47.6549614Z", "data": { "id": "31c51664-e5bd-416a-a5df-e5206bc47ed0",
The schema for an image deleted event is similar:
"id": "f06e3921-301f-42ec-b368-212f7d5354bd", "source": "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.ContainerRegistry/registries/<name>", "subject": "aci-helloworld",
- "type": "ImageDeleted",
+ "type": "Microsoft.ContainerRegistry.ImageDeleted",
"time": "2018-04-26T17:56:01.8211268Z", "data": { "id": "f06e3921-301f-42ec-b368-212f7d5354bd",
An event has the following top-level data:
| `eventType` | string | One of the registered event types for this event source. |
| `eventTime` | string | The time the event is generated based on the provider's UTC time. |
| `id` | string | Unique identifier for the event. |
+| `location` | string | The location of the event. |
+| `connectedRegistry` | object | The connected registry information if the event is generated by a connected registry. |
| `data` | object | Container registry event data. |
| `dataVersion` | string | The schema version of the data object. The publisher defines the schema version. |
| `metadataVersion` | string | The schema version of the event metadata. Event Grid defines the schema of the top-level properties. Event Grid provides this value. |
The request object has the following properties:
| `method` | string | The request method that generated the event. |
| `useragent` | string | The user agent header of the request. |
+The connectedRegistry object has the following properties:
+
+| Property | Type | Description |
+| -- | - | -- |
+| `name` | string | The name of the connected registry that generated this event. |
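As a quick illustration of consuming these fields, the sketch below parses a trimmed, hypothetical event payload (not an official SDK sample) and reads the connected-registry name defensively, since `connectedRegistry` is only present when a connected registry generated the event:

```python
import json

# Hypothetical event payload trimmed to the fields discussed above.
event_json = """
{
  "eventType": "Microsoft.ContainerRegistry.ImagePushed",
  "dataVersion": "2.0",
  "data": {
    "action": "push",
    "location": "westus",
    "connectedRegistry": {"name": "edge1"}
  }
}
"""

event = json.loads(event_json)
data = event.get("data", {})

# connectedRegistry may be absent (for example, in dataVersion 1.0 events),
# so read it with defaults rather than indexing directly.
connected_name = data.get("connectedRegistry", {}).get("name")
print(event["eventType"], connected_name)
```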
+
## Tutorials and how-tos

|Title |Description |
|||
frontdoor Front Door Routing Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-routing-limits.md
+
+ Title: Azure Front Door - routing limits | Microsoft Docs
+description: This article helps you understand the composite limits around routing for Azure Front Door.
+
+documentationcenter: ''
+ Last updated : 02/27/2022
+# Front Door routing limits
+
+Each Front Door profile has a *composite route limit*.
+
+Your Front Door profile's composite route metric is derived from the number of routes, together with the frontend domains, protocols, and paths associated with each route.
+
+The composite route metric for each Front Door profile can't exceed 5000.
+
+> [!TIP]
+> Most Front Door profiles don't approach the composite route limit. However, if you have a large Front Door profile, consider whether you could exceed the limit and plan accordingly.
+
+## Calculate your profile's composite route metric
+
+Follow these steps to calculate the composite route metric for your Front Door profile:
+
+1. Select a route from your profile.
+ 1. Multiply the number of HTTP domains by the number of HTTP paths.
+ 1. Multiply the number of HTTPS domains by the number of HTTPS paths.
+ 1. Add the results of steps 1a and 1b together to give the composite route metric for this individual route.
+1. Repeat these steps for each route in your profile.
+
+Add together all of the composite route metrics for each route. This is your profile's composite route metric.
+
+### Example
+
+Suppose you have two routes in your Front Door profile. The routes are named *Route 1* and *Route 2*. You plan to configure the routes as follows:
+* *Route 1* will have 50 domains associated to it, and requires HTTPS for all inbound requests. *Route 1* specifies 80 paths.
+* *Route 2* will have 25 domains associated to it. *Route 2* specifies 25 paths, and supports both the HTTP and HTTPS protocols.
+
+The following calculation illustrates how to determine the composite route metric for this scenario:
+
+```
+Profile composite route metric = Route 1 composite route metric + Route 2 composite route metric
+= Route 1 [HTTPS (50 Domains * 80 Paths)] + Route 2 [HTTP (25 Domains * 25 Paths) + HTTPS(25 Domains * 25 Paths)]
+= [50 * 80] + [(25 * 25) + (25 * 25)]
+= 5250
+```
+
+The calculated metric of 5250 exceeds the limit of 5000, so you can't configure a Front Door profile in this way.
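The calculation steps above can be sketched programmatically. This is a minimal illustration of the formula in this article (per route, domains times paths, counted once per enabled protocol), not an official tool; the route definitions mirror the hypothetical example.

```python
# Sketch of the composite route metric calculation described above.
# Per route: (HTTP domains x HTTP paths) + (HTTPS domains x HTTPS paths).

ROUTE_METRIC_LIMIT = 5000

def route_metric(domains: int, paths: int, protocols: set) -> int:
    """Composite metric for one route: domains * paths, counted once per protocol."""
    return len(protocols) * domains * paths

def profile_metric(routes) -> int:
    """Sum of the composite metrics of every route in the profile."""
    return sum(route_metric(r["domains"], r["paths"], r["protocols"]) for r in routes)

routes = [
    {"domains": 50, "paths": 80, "protocols": {"HTTPS"}},          # Route 1
    {"domains": 25, "paths": 25, "protocols": {"HTTP", "HTTPS"}},  # Route 2
]

total = profile_metric(routes)
print(total, "exceeds limit" if total > ROUTE_METRIC_LIMIT else "within limit")
```

Running this on the example routes reproduces the 5250 figure above, confirming that the profile exceeds the 5000 limit.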
+
+## Mitigation
+
+If your profile's composite route metric exceeds 5000, consider the following mitigation strategies:
+
+- Deploy multiple Front Door profiles, and spread your routes across them. The composite route limit applies within a single profile.
+- Use [wildcard domains](front-door-wildcard-domain.md) instead of specifying subdomains individually, which might help to reduce the number of domains in your profile.
+- Require HTTPS for inbound traffic, which reduces the number of HTTP routes in your profile and also improves your solution's security.
+
+## Next steps
+
+Learn how to [create a Front Door](quickstart-create-front-door.md).
frontdoor Concept Rule Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/concept-rule-set.md
For more quota limit, refer to [Azure subscription and service limits, quotas an
## ARM template support
-Rule Sets can be configured using Azure Resource Manager templates. [See an example template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-standard-premium-rule-set). You can customize the behavior by using the JSON or Bicep snippets included in the documentation examples for [match conditions](concept-rule-set-match-conditions.md) and [actions](concept-rule-set-actions.md).
+Rule Sets can be configured using Azure Resource Manager templates. [See an example template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.cdn/front-door-standard-premium-rule-set). You can customize the behavior by using the JSON or Bicep snippets included in the documentation examples for [match conditions](concept-rule-set-match-conditions.md) and [actions](concept-rule-set-actions.md).
## Next steps
iot-dps How To Control Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-control-access.md
The result, which would grant access to read all enrollment records, would be:
`SharedAccessSignature sr=mydps.azure-devices-provisioning.net&sig=JdyscqTpXdEJs49elIUCcohw2DlFDR3zfH5KqGJo4r4%3D&se=1456973447&skn=enrollmentread`
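For illustration, a token of this shape can be produced with the standard Azure IoT SAS construction (HMAC-SHA256 over the URL-encoded resource URI and expiry, signed with the base64-decoded policy key). The hostname, key, and expiry below are placeholders, not real credentials; in practice the SDKs generate these tokens for you.

```python
import base64
import hashlib
import hmac
import urllib.parse

def generate_sas_token(resource_uri: str, key_b64: str, policy_name: str, expiry: int) -> str:
    """Build a SharedAccessSignature like the one shown above (sketch)."""
    encoded_uri = urllib.parse.quote(resource_uri, safe="")
    string_to_sign = f"{encoded_uri}\n{expiry}"
    # Sign with the base64-decoded shared access key.
    signature = hmac.new(
        base64.b64decode(key_b64),
        string_to_sign.encode("utf-8"),
        hashlib.sha256,
    ).digest()
    sig = urllib.parse.quote(base64.b64encode(signature).decode("utf-8"), safe="")
    return f"SharedAccessSignature sr={encoded_uri}&sig={sig}&se={expiry}&skn={policy_name}"

token = generate_sas_token(
    "mydps.azure-devices-provisioning.net",        # hypothetical DPS hostname
    base64.b64encode(b"fake-demo-key").decode(),   # placeholder key
    "enrollmentread",
    1456973447,
)
print(token)
```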
-## Reference topics:
-
-The following reference topics provide you with more information about controlling access to your IoT Device Provisioning Service.
-
-### Device Provisioning Service permissions
+## Device Provisioning Service permissions
The following table lists the permissions you can use to control access to your IoT Device Provisioning Service.
iot-edge How To Configure Proxy Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-configure-proxy-support.md
Title: Configure devices for network proxies - Azure IoT Edge | Microsoft Docs
description: How to configure the Azure IoT Edge runtime and any internet-facing IoT Edge modules to communicate through a proxy server. Previously updated : 09/03/2020 Last updated : 02/28/2022
If the proxy you're attempting to use performs traffic inspection on TLS-secured
To use a proxy that performs traffic inspection, you must use either shared access signature authentication or have IoT Hub and the IoT Hub device provisioning service added to an allowlist to avoid inspection.
+## Fully qualified domain names (FQDNs) of destinations that IoT Edge communicates with
+
+If your proxy has a firewall that requires you to allow-list all FQDNs for internet connectivity, review the list from [Allow connections from IoT Edge devices](production-checklist.md#allow-connections-from-iot-edge-devices) to determine which FQDNs to add.
+ ## Next steps Learn more about the roles of the [IoT Edge runtime](iot-edge-runtime.md).
iot-edge How To Connect Downstream Iot Edge Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-connect-downstream-iot-edge-device.md
description: How to configure an IoT Edge device to connect to Azure IoT Edge ga
Previously updated : 01/09/2022 Last updated : 02/28/2022
Make sure that the user **iotedge** has read permissions for the directory holdi
1. *If this device is a child device*, find the **Parent hostname** section. Uncomment and update the `parent_hostname` parameter to be the FQDN or IP address of the parent device, matching whatever was provided as the hostname in the parent device's config file.
+ ```toml
+ parent_hostname = "my-parent-device"
+ ```
+ 1. Find the **Trust bundle cert** section. Uncomment and update the `trust_bundle_cert` parameter with the file URI to the root CA certificate on your device. 1. Verify your IoT Edge device will use the correct version of the IoT Edge agent when it starts up.
iot-edge How To Provision Devices At Scale Linux Symmetric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-devices-at-scale-linux-symmetric.md
Have the following information ready:
1. Update the values of `scope_id`, `registration_id`, and `symmetric_key` with your DPS and device information.
-1. Optionally, use the `always_reprovision_on_startup` or `dynamic_reprovisioning` lines to configure your device's reprovisioning behavior. If a device is set to reprovision on startup, it will always attempt to provision with DPS first and then fall back to the provisioning backup if that fails. If a device is set to dynamically reprovision itself, IoT Edge will restart and reprovision if a reprovisioning event is detected. For more information, see [IoT Hub device reprovisioning concepts](../iot-dps/concepts-device-reprovision.md).
+1. Optionally, use the `always_reprovision_on_startup` or `dynamic_reprovisioning` lines to configure your device's reprovisioning behavior. If a device is set to reprovision on startup, it will always attempt to provision with DPS first and then fall back to the provisioning backup if that fails. If a device is set to dynamically reprovision itself, IoT Edge (and all modules) will restart and reprovision if a reprovisioning event is detected, like if the device is moved from one IoT Hub to another. Specifically, IoT Edge checks for `bad_credential` or `device_disabled` errors from the SDK to detect the reprovision event. To trigger this event manually, disable the device in IoT Hub. For more information, see [IoT Hub device reprovisioning concepts](../iot-dps/concepts-device-reprovision.md).
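   For illustration, the relevant reprovisioning lines in the device configuration might look like the following sketch. Key names follow the `config.toml` format; exact placement and defaults vary by IoT Edge version, so check the comments in your own configuration file.

   ```toml
   [provisioning]
   source = "dps"
   # Reprovision on every startup, falling back to the provisioning backup
   # if DPS can't be reached.
   always_reprovision_on_startup = true
   # Restart and reprovision when a reprovisioning event is detected,
   # such as the device being moved between IoT hubs.
   dynamic_reprovisioning = false
   ```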
1. Restart the IoT Edge runtime so that it picks up all the configuration changes that you made on the device.
iot-edge How To Provision Devices At Scale Linux Tpm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-devices-at-scale-linux-tpm.md
After the runtime is installed on your device, configure the device with the inf
1. Update the values of `scope_id` and `registration_id` with your device provisioning service and device information. The `scope_id` value is the **ID Scope** from your device provisioning service instance's overview page.
-1. Optionally, use the `always_reprovision_on_startup` or `dynamic_reprovisioning` lines to configure your device's reprovisioning behavior. If a device is set to reprovision on startup, it will always attempt to provision with the device provisioning service first and then fall back to the provisioning backup if that fails. If a device is set to dynamically provision itself, IoT Edge will restart and reprovision if a reprovisioning event is detected. For more information, see [IoT Hub device reprovisioning concepts](../iot-dps/concepts-device-reprovision.md).
+1. Optionally, use the `always_reprovision_on_startup` or `dynamic_reprovisioning` lines to configure your device's reprovisioning behavior. If a device is set to reprovision on startup, it will always attempt to provision with DPS first and then fall back to the provisioning backup if that fails. If a device is set to dynamically reprovision itself, IoT Edge (and all modules) will restart and reprovision if a reprovisioning event is detected, like if the device is moved from one IoT Hub to another. Specifically, IoT Edge checks for `bad_credential` or `device_disabled` errors from the SDK to detect the reprovision event. To trigger this event manually, disable the device in IoT Hub. For more information, see [IoT Hub device reprovisioning concepts](../iot-dps/concepts-device-reprovision.md).
1. Save and close the file.
iot-edge How To Provision Devices At Scale Linux X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-devices-at-scale-linux-x509.md
Title: Create and provision IoT Edge devices at scale using X.509 certificates o
description: Use X.509 certificates to test provisioning devices at scale for Azure IoT Edge with device provisioning service Previously updated : 10/29/2021 Last updated : 02/28/2022
Have the following information ready:
1. Optionally, provide the `registration_id` for the device, which needs to match the common name (CN) of the identity certificate. If you leave that line commented out, the CN will automatically be applied.
-1. Optionally, use the `always_reprovision_on_startup` or `dynamic_reprovisioning` lines to configure your device's reprovisioning behavior. If a device is set to reprovision on startup, it will always attempt to provision with DPS first and then fall back to the provisioning backup if that fails. If a device is set to dynamically reprovision itself, IoT Edge will restart and reprovision if a reprovisioning event is detected. For more information, see [IoT Hub device reprovisioning concepts](../iot-dps/concepts-device-reprovision.md).
+1. Optionally, use the `always_reprovision_on_startup` or `dynamic_reprovisioning` lines to configure your device's reprovisioning behavior. If a device is set to reprovision on startup, it will always attempt to provision with DPS first and then fall back to the provisioning backup if that fails. If a device is set to dynamically reprovision itself, IoT Edge (and all modules) will restart and reprovision if a reprovisioning event is detected, like if the device is moved from one IoT Hub to another. Specifically, IoT Edge checks for `bad_credential` or `device_disabled` errors from the SDK to detect the reprovision event. To trigger this event manually, disable the device in IoT Hub. For more information, see [IoT Hub device reprovisioning concepts](../iot-dps/concepts-device-reprovision.md).
1. Save and close the config.yaml file.
iot-edge Production Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/production-checklist.md
If your networking setup requires that you explicitly permit connections made fr
* **IoT Edge hub** opens a single persistent AMQP connection or multiple MQTT connections to IoT Hub, possibly over WebSockets. * **IoT Edge service** makes intermittent HTTPS calls to IoT Hub.
-In all three cases, the DNS name would match the pattern \*.azure-devices.net.
+In all three cases, the fully qualified domain name (FQDN) would match the pattern `*.azure-devices.net`.
-Additionally, the **Container engine** makes calls to container registries over HTTPS. To retrieve the IoT Edge runtime container images, the DNS name is mcr.microsoft.com. The container engine connects to other registries as configured in the deployment.
+Additionally, the **Container engine** makes calls to container registries over HTTPS. To retrieve the IoT Edge runtime container images, the FQDN is `mcr.microsoft.com`. The container engine connects to other registries as configured in the deployment.
This checklist is a starting point for firewall rules:
- | URL (\* = wildcard) | Outbound TCP Ports | Usage |
+ | FQDN (\* = wildcard) | Outbound TCP Ports | Usage |
| -- | -- | -- |
- | mcr.microsoft.com | 443 | Microsoft Container Registry |
- | global.azure-devices-provisioning.net | 443 | DPS access (optional) |
- | \*.azurecr.io | 443 | Personal and third-party container registries |
- | \*.blob.core.windows.net | 443 | Download Azure Container Registry image deltas from blob storage |
- | \*.azure-devices.net | 5671, 8883, 443 | IoT Hub access |
- | \*.docker.io | 443 | Docker Hub access (optional) |
+ | `mcr.microsoft.com` | 443 | Microsoft Container Registry |
+ | `global.azure-devices-provisioning.net` | 443 | [Device Provisioning Service](../iot-dps/about-iot-dps.md) access (optional) |
+ | `*.azurecr.io` | 443 | Personal and third-party container registries |
+ | `*.blob.core.windows.net` | 443 | Download Azure Container Registry image deltas from blob storage |
+ | `*.azure-devices.net` | 5671, 8883, 443<sup>1</sup> | IoT Hub access |
+ | `*.docker.io` | 443 | Docker Hub access (optional) |
+
+<sup>1</sup>Open port 8883 for secure MQTT or port 5671 for secure AMQP. If you can only make connections via port 443, then either of these protocols can be run through a WebSocket tunnel.
+
+Since the IP address of an IoT hub can change without notice, always use the FQDN in your allow-list configuration. To learn more, see [Understanding the IP address of your IoT hub](../iot-hub/iot-hub-understand-ip-address.md).
Some of these firewall rules are inherited from Azure Container Registry. For more information, see [Configure rules to access an Azure container registry behind a firewall](../container-registry/container-registry-firewall-access-rules.md).
iot-edge Troubleshoot Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/troubleshoot-common-errors.md
description: Use this article to resolve common issues encountered when deployin
Previously updated : 03/01/2021 Last updated : 02/28/2022
IoT Edge devices behind a gateway get their module images from the parent IoT Ed
Make sure the parent IoT Edge device can receive incoming requests from the child IoT Edge device. Open network traffic on ports 443 and 6617 for requests coming from the child device.
+## IoT Edge behind a gateway cannot connect when migrating from one IoT hub to another
+
+**Observed behavior:**
+
+When attempting to migrate a hierarchy of IoT Edge devices from one IoT hub to another, the top level parent IoT Edge device can connect to IoT Hub, but downstream IoT Edge devices cannot. The logs report `Unable to authenticate client downstream-device/$edgeAgent with module credentials`.
+
+**Root cause:**
+
+The credentials for the downstream devices were not updated properly when the migration to the new IoT hub happened. Because of this, `edgeAgent` and `edgeHub` modules were set to have authentication type of `none` (default if not set explicitly). During connection, the modules on the downstream devices use old credentials, causing the authentication to fail.
+
+**Resolution:**
+
+When migrating to the new IoT hub (assuming you're not using DPS), follow these steps in order:
+1. Follow [this guide to export and then import device identities](../iot-hub/iot-hub-bulk-identity-mgmt.md) from the old IoT hub to the new one
+1. Reconfigure all IoT Edge deployments and configurations in the new IoT hub
+1. Reconfigure all parent-child device relationships in the new IoT hub
+1. Update each device to point to the new IoT hub hostname (`iothub_hostname` under `[provisioning]` in `config.toml`)
+1. If you chose to exclude authentication keys during the device export, reconfigure each device with the new keys given by the new IoT hub (`device_id_pk` under `[provisioning.authentication]` in `config.toml`)
+1. Restart the top-level parent IoT Edge device first and make sure it's up and running
+1. Restart each device in the hierarchy, level by level, from top to bottom
+ :::moniker-end <!-- end 1.2 -->
key-vault Private Link Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/private-link-service.md
Aliases: <your-key-vault-name>.vault.azure.net
## Limitations and Design Considerations > [!NOTE]
-> The number of key vaults with private endpoints enabled per subscription is an adjustable limit. The limit shown below is the default limit. If you would like to request a limit increase for your service, please send an email to akv-privatelink@microsoft.com. We will approve these requests on a case by case basis.
+> The number of key vaults with private endpoints enabled per subscription is an adjustable limit. The limit shown below is the default limit. If you would like to request a limit increase for your service, please send an email to azurekeyvault@microsoft.com. We will approve these requests on a case by case basis.
**Pricing**: For pricing information, see [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/).
load-balancer Load Balancer Ha Ports Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-ha-ports-overview.md
You can configure *one* public Standard Load Balancer resource for the backend r
- HA ports load-balancing rules are available only for internal Standard Load Balancer.
- The combining of an HA ports load-balancing rule and a non-HA ports load-balancing rule pointing to the same backend ipconfiguration(s) is **not** supported on a single Frontend IP configuration unless both have Floating IP enabled.
-- Existing IP fragments will be forwarded by HA Ports load-balancing rules to same destination as first packet. IP fragmenting a UDP or TCP packet is not supported.
+- IP fragmenting is not supported. If a packet is already fragmented, it will be forwarded based on the 2-tuple [distribution mode](distribution-mode-concepts.md) when enabled on HA ports load-balancing rules.
- Flow symmetry (primarily for NVA scenarios) is supported with backend instance and a single NIC (and single IP configuration) only when used as shown in the diagram above and using HA Ports load-balancing rules. It is not provided in any other scenario. This means that two or more Load Balancer resources and their respective rules make independent decisions and are never coordinated. See the description and diagram for [network virtual appliances](#nva). When you are using multiple NICs or sandwiching the NVA between a public and internal Load Balancer, flow symmetry is not available. You may be able to work around this by source NAT'ing the ingress flow to the IP of the appliance to allow replies to arrive on the same NVA. However, we strongly recommend using a single NIC and using the reference architecture shown in the diagram above.
load-balancer Load Balancer Outbound Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-outbound-connections.md
Previously updated : 11/11/2021 Last updated : 03/01/2022
-# Using Source Network Address Translation (SNAT) for outbound connections
+# Use Source Network Address Translation (SNAT) for outbound connections
-Certain scenarios require virtual machines or compute instances to have outbound connectivity to the internet. The frontend IPs of an Azure public load balancer can be used to provide outbound connectivity to the internet for backend instances. This configuration uses **source network address translation (SNAT)** to translate virtual machine's private IP into Load Balancer's public IP address. SNAT maps the IP address of the backend to the public IP address of your load balancer. SNAT prevents outside sources from having a direct address to the backend instances.
+Certain scenarios require virtual machines or compute instances to have outbound connectivity to the internet. The frontend IPs of a public load balancer can be used to provide outbound connectivity to the internet for backend instances. This configuration uses **source network address translation (SNAT)** to translate virtual machine's private IP into the load balancer's public IP address. SNAT maps the IP address of the backend to the public IP address of your load balancer. SNAT prevents outside sources from having a direct address to the backend instances.
## <a name="scenarios"></a>Azure's outbound connectivity methods
-Outbound connectivity to the internet can be enabled in the following ways in Azure:
+The following methods are used to enable outbound connectivity in Azure:
| # | Method | Type of port allocation | Production-grade? | Rating |
| - | ------ | ----------------------- | ----------------- | ------ |
-| 1 | Using the frontend IP address(es) of a Load Balancer for outbound via Outbound rules | Static, explicit | Yes, but not at scale | OK |
-| 2 | Associating a NAT gateway to the subnet | Dynamic, explicit | Yes | Best |
-| 3 | Assigning a Public IP to the Virtual Machine | Static, explicit | Yes | OK |
-| 4 | Using [default outbound access](../virtual-network/ip-services/default-outbound-access.md) | Implicit | No | Worst |
+| 1 | Use the frontend IP address(es) of a load balancer for outbound via outbound rules | Static, explicit | Yes, but not at scale | OK |
+| 2 | Associate a NAT gateway to the subnet | Dynamic, explicit | Yes | Best |
+| 3 | Assign a public IP to the virtual machine | Static, explicit | Yes | OK |
+| 4 | Use [default outbound access](../virtual-network/ip-services/default-outbound-access.md) | Implicit | No | Worst |
-## <a name="outboundrules"></a>1. Using the frontend IP address of a load balancer for outbound via outbound rules
-Outbound rules enable you to explicitly define SNAT (source network address translation) for a Standard Public Load Balancer. This configuration allows you to use the public IP or IPs of your load balancer for outbound connectivity of the backend instances.
+## <a name="outboundrules"></a>1. Use the frontend IP address of a load balancer for outbound via outbound rules
++
+Outbound rules enable you to explicitly define SNAT (source network address translation) for a standard SKU public load balancer. This configuration allows you to use the public IP or IPs of your load balancer for outbound connectivity of the backend instances.
This configuration enables:

- IP masquerading
- Simplifying your allowlists
- Reducing the number of public IP resources needed for deployment
-With outbound rules, you have full declarative control over outbound internet connectivity. Outbound rules allow you to scale and tune this ability to your specific needs via manual port allocation. Manually allocating SNAT port based on the backend pool size and number of frontendIPConfigurations can help avoid SNAT exhaustion.
+With outbound rules, you have full declarative control over outbound internet connectivity. Outbound rules allow you to scale and tune this ability to your specific needs via manual port allocation. Manually allocating SNAT ports based on the backend pool size and the number of **frontendIPConfigurations** can help avoid SNAT exhaustion.
-You can manually allocate SNAT ports either by "ports per instance" or "maximum number of backend instances". If you have Virtual Machines in the backend, it's recommended that you allocate ports by "ports per instance" to get maximum SNAT port usage.
+You can manually allocate SNAT ports either by "ports per instance" or "maximum number of backend instances". If you have virtual machines in the backend, it's recommended that you allocate ports by "ports per instance" to get maximum SNAT port usage.
Ports per instance should be calculated as below: **Number of frontend IPs * 64K / Number of backend instances**
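As a sketch, the formula above can be expressed in code (a minimal illustration, not an official tool; the function name is made up):

```python
def snat_ports_per_instance(frontend_ips: int, backend_instances: int) -> int:
    """Sketch of the guidance above: each frontend IP contributes
    64,000 SNAT-eligible ports, divided evenly across the backend pool."""
    SNAT_PORTS_PER_FRONTEND_IP = 64_000
    return (frontend_ips * SNAT_PORTS_PER_FRONTEND_IP) // backend_instances

# Example: 2 frontend IPs shared by 10 backend VMs.
print(snat_ports_per_instance(2, 10))  # 12800
```

Note that the result must also fit within the load balancer's per-rule limits, so treat this as a starting point for capacity planning rather than a guarantee.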
-If you have Virtual Machine Scale Sets in the backend, it's recommended to allocate ports by "maximum number of backend instances". If more VMs are added to the backend than remaining SNAT ports allowed, it's possible that virtual machine scale set scaling up could be blocked or that the new VMs will not receive sufficient SNAT ports.
+If you have Virtual Machine Scale Sets in the backend, it's recommended to allocate ports by "maximum number of backend instances". If more VMs are added to the backend than remaining SNAT ports allowed, it's possible that virtual machine scale set scaling up could be blocked or that the new VMs won't receive sufficient SNAT ports.
For more information about outbound rules, see [Outbound rules](outbound-rules.md).
-## 2. Associating a NAT gateway to the subnet
+## 2. Associate a NAT gateway to the subnet
+ Virtual Network NAT simplifies outbound-only Internet connectivity for virtual networks. When configured on a subnet, all outbound connectivity uses your specified static public IP addresses. Outbound connectivity is possible without load balancer or public IP addresses directly attached to virtual machines. NAT is fully managed and highly resilient.
Using a NAT gateway is the best method for outbound connectivity. A NAT gateway
For more information about Azure Virtual Network NAT, see [What is Azure Virtual Network NAT](../virtual-network/nat-gateway/nat-overview.md).
-## 3. Assigning a public IP to the virtual machine
+## 3. Assign a public IP to the virtual machine
| Associations | Method | IP protocols |
| ------------ | ------ | ------------ |
Azure uses the public IP assigned to the IP configuration of the instance's NIC
A public IP assigned to a VM is a 1:1 relationship (rather than 1: many) and implemented as a stateless 1:1 NAT.
+## 4. Default outbound access
-## 4. Default Outbound Access
> [!NOTE]
> This method is **NOT recommended** for production workloads because it adds the risk of exhausting ports. Avoid this method for production workloads to prevent potential connection failures.
-Any Azure resource that does not have a Public IP associated to it, does not have a Load Balancer with Outbound Rules in front of it, is not part of virtual machine scale sets flexible orchestration mode, or does not have a NAT gateway resource associated to its subnet is allocated a minimal number of ports for outbound. This access is known as Default Outbound Access and is the worst method to provide outbound connectivity for your applications.
+Any Azure resource that doesn't have a public IP associated to it, doesn't have a load balancer with outbound rules in front of it, isn't part of virtual machine scale sets flexible orchestration mode, or doesn't have a NAT gateway resource associated to its subnet is allocated a minimal number of ports for outbound. This access is known as default outbound access and is the worst method to provide outbound connectivity for your applications.
-Some other examples of Default Outbound Access are:
+Some other examples of default outbound access are:
-- when using Basic Load Balancer
-- a virtual machine in Azure (without the associations mentioned above). In this case outbound connectivity is provided by the Default Outbound Access IP. This IP is a dynamic IP assigned by Azure that you can't control. Default SNAT isn't recommended for production workloads and can cause connectivity failures.
-- VM in the backend pool of a Load Balancer without outbound rules. As a result, you use the frontend IP address of a load balancer for outbound and inbound and are more prone to connectivity failures from SNAT port exhaustion.
+- Use of a basic SKU load balancer
+- A virtual machine in Azure (without the associations mentioned above). In this case outbound connectivity is provided by the default outbound access IP. This IP is a dynamic IP assigned by Azure that you can't control. Default SNAT isn't recommended for production workloads and can cause connectivity failures.
+- A virtual machine in the backend pool of a load balancer without outbound rules. As a result, you use the frontend IP address of a load balancer for outbound and inbound and are more prone to connectivity failures from SNAT port exhaustion.
### What are SNAT ports?
Ports are used to generate unique identifiers used to maintain distinct flows. T
If a port is used for inbound connections, it has a **listener** for inbound connection requests on that port. That port can't be used for outbound connections. To establish an outbound connection, an **ephemeral port** is used to provide the destination with a port on which to communicate and maintain a distinct traffic flow. When these ephemeral ports are used for SNAT, they're called **SNAT ports**.
-By definition, every IP address has 65,535 ports. Each port can either be used for inbound or outbound connections for TCP (Transmission Control Protocol) and UDP (User Datagram Protocol). When a public IP address is added as a frontend IP to a load balancer, 64,000 ports are eligible for SNAT. While all public IPs that are added as frontend IPs can be allocated, frontend IPs are consumed one at a time. For example, if two backend instances are allocated 64,000 ports each, with access to 2 frontend IPs, both backend instances will consume ports from the first frontend IP until all 64,000 ports have been exhausted.
+By definition, every IP address has 65,535 ports. Each port can either be used for inbound or outbound connections for TCP (Transmission Control Protocol) and UDP (User Datagram Protocol). When a public IP address is added as a frontend IP to a load balancer, 64,000 ports are eligible for SNAT. While all public IPs that are added as frontend IPs can be allocated, frontend IPs are consumed one at a time. For example, if two backend instances are allocated 64,000 ports each, with access to two frontend IPs, both backend instances will consume ports from the first frontend IP until all 64,000 ports have been exhausted.
A port used for a load balancing or inbound NAT rule consumes eight ports from the 64,000 ports. This usage reduces the number of ports eligible for SNAT. If a load-balancing or inbound NAT rule is in the same range of eight as another, it doesn't use extra ports.
A port used for a load balancing or inbound NAT rule consumes eight ports from t
When a VM creates an outbound flow, Azure translates the source IP address to an ephemeral IP address. This translation is done via SNAT.
-If using SNAT without Outbound rules via a Public Load Balancer, SNAT ports are pre-allocated as described in the default SNAT ports allocation table below.
+If using SNAT without outbound rules via a public load balancer, SNAT ports are pre-allocated as described in the default SNAT ports allocation table below.
## <a name="preallocatedports"></a> Default port allocation table
The following <a name="snatporttable"></a>table shows the SNAT port preallocatio
| 401-800 | 64 |
| 801-1,000 | 32 |
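As an illustrative sketch, a lookup over the preallocation tiers might look like the following. Only the two tiers visible in this excerpt are included; the full table in the Azure documentation defines additional tiers for smaller backend pools.

```python
def default_snat_ports(pool_size: int) -> int:
    """Illustrative lookup over the default preallocation tiers shown
    above. Only the tiers visible in this excerpt are included."""
    if 401 <= pool_size <= 800:
        return 64
    if 801 <= pool_size <= 1000:
        return 32
    raise ValueError("pool size outside the tiers shown in this excerpt")

print(default_snat_ports(500))  # 64
print(default_snat_ports(900))  # 32
```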
-## Exhausting ports
+## Port exhaustion
Every connection to the same destination IP and destination port will use a SNAT port. This connection maintains a distinct **traffic flow** from the backend instance or **client** to a **server**. This process gives the server a distinct port on which to address traffic. Without this process, the client machine is unaware of which flow a packet is part of.

Imagine having multiple browsers going to https://www.microsoft.com, which is:

* Destination IP = 23.53.254.142
* Destination Port = 443
* Protocol = TCP

Without different destination ports for the return traffic (the SNAT port used to establish the connection), the client will have no way to separate one query result from another.
-Outbound connections can burst. A backend instance can be allocated insufficient ports. Use **connection reuse** functionality within your application. Without **connection reuse**, the risk of SNAT **port exhaustion** is increased. For more information about connection pooling with Azure App Service, see [Troubleshooting intermittent outbound connection errors in Azure App Service](../app-service/troubleshoot-intermittent-outbound-connection-errors.md#avoiding-the-problem)
+Outbound connections can burst. A backend instance can be allocated insufficient ports. Use **connection reuse** functionality within your application. Without **connection reuse**, the risk of SNAT **port exhaustion** is increased.
+
+For more information about connection pooling with Azure App Service, see [Troubleshooting intermittent outbound connection errors in Azure App Service](../app-service/troubleshoot-intermittent-outbound-connection-errors.md#avoiding-the-problem).
New outbound connections to a destination IP will fail when port exhaustion occurs. Connections will succeed when a port becomes available. This exhaustion occurs when the 64,000 ports from an IP address are spread thin across many backend instances. For guidance on mitigation of SNAT port exhaustion, see the [troubleshooting guide](./troubleshoot-outbound-connection.md).
For UDP connections, the load balancer uses a **port-restricted cone NAT** algor
A port is reused for an unlimited number of connections. The port is only reused if the destination IP or port is different.

## Constraints

* When a connection is idle with no new packets being sent, the ports are released after 4 to 120 minutes.
* This threshold can be configured via outbound rules.
* Each IP address provides 64,000 ports that can be used for SNAT.
* Each port can be used for both TCP and UDP connections to a destination IP address.
* A UDP SNAT port is needed whether the destination port is unique or not. For every UDP connection to a destination IP, one UDP SNAT port is used.
* A TCP SNAT port can be used for multiple connections to the same destination IP provided the destination ports are different.
* SNAT exhaustion occurs when a backend instance runs out of its allocated SNAT ports. A load balancer can still have unused SNAT ports. If a backend instance's used SNAT ports exceed its allocated SNAT ports, it can't establish new outbound connections.
* Fragmented packets will be dropped unless outbound is through an instance-level public IP on the VM's NIC.
-* Secondary IP configurations of a network interface do not outbound communication (unless a Public IP is associated to it) via Load Balancer.
+
+* Secondary IP configurations of a network interface don't provide outbound communication (unless a public IP is associated to it) via a load balancer.
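The TCP and UDP port-reuse rules in the constraints above can be sketched as follows. This is a simplified illustration of the counting rules only; the function and variable names are made up and are not an Azure API.

```python
from collections import Counter

def min_snat_ports(flows):
    """Simplified reading of the constraints above:
    - each UDP connection to a destination consumes its own SNAT port;
    - a TCP SNAT port is reusable across connections as long as the
      destination (IP, port) differs, so the minimum needed equals the
      largest number of TCP connections sharing one destination."""
    udp_ports = sum(1 for proto, _, _ in flows if proto == "udp")
    tcp_counts = Counter((ip, port) for proto, ip, port in flows if proto == "tcp")
    tcp_ports = max(tcp_counts.values(), default=0)
    return udp_ports + tcp_ports

flows = [
    ("tcp", "23.53.254.142", 443),
    ("tcp", "23.53.254.142", 80),   # different destination port: SNAT port is reused
    ("tcp", "23.53.254.142", 443),  # same destination: needs a second SNAT port
    ("udp", "8.8.8.8", 53),         # each UDP connection consumes a port
]
print(min_snat_ports(flows))  # 3
```

Real allocation also depends on the preallocation tiers and idle-timeout behavior described earlier, so actual usage can be higher than this minimum.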
## Next steps
logic-apps Create Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-managed-service-identity.md
In this scenario, check that the underlying connection resource definition inclu
### [Consumption](#tab/consumption)
-For example, here's the underlying connection resource definition for an Azure Automation action in a Consumption logic app resource that uses a managed identity where the definition includes the `parameterValueType` property, which includes the `name` property set to `managedIdentityAuth` and the `values` property set to an empty object. Also note that the `apiVersion` property is set to `2018-07-01-preview`:
+For example, here's the underlying connection resource definition for an Azure Automation action in a Consumption logic app resource that uses a managed identity where the definition includes the `parameterValueSet` object, which has the `name` property set to `managedIdentityAuth` and the `values` property set to an empty object. Also note that the `apiVersion` property is set to `2018-07-01-preview`:
```json {
logic-apps Create Monitoring Tracking Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-monitoring-tracking-queries.md
Last updated 01/30/2020
# View and create queries for monitoring and tracking in Azure Monitor logs for Azure Logic Apps
+> [!NOTE]
+> This article applies only to Consumption logic apps. For information about monitoring Standard logic apps, review
+> [Enable or open Application Insights after deployment for Standard logic apps](create-single-tenant-workflows-azure-portal.md#enable-open-application-insights).
+ You can view the underlying queries that produce the results from [Azure Monitor logs](../azure-monitor/logs/log-query-overview.md) and create queries that filter the results based on your specific criteria. For example, you can find messages based on a specific interchange control number. Queries use the [Kusto query language](/azure/data-explorer/kusto/query/), which you can edit if you want to view different results. For more information, see [Azure Monitor log queries](/azure/data-explorer/kusto/query/).

## Prerequisites
logic-apps Create Single Tenant Workflows Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-azure-portal.md
In this example, the workflow runs when the Request trigger receives an inbound
![Screenshot that shows Outlook email as described in the example](./media/create-single-tenant-workflows-azure-portal/workflow-app-result-email.png)
-<a name="view-run-history"></a>
+<a name="review-run-history"></a>
## Review run history
-For a stateful workflow, after each workflow run, you can view the run history, including the status for the overall run, for the trigger, and for each action along with their inputs and outputs. In the Azure portal, run history and trigger histories appear at the workflow level, not the logic app level. To review the trigger histories outside the run history context, see [Review trigger histories](#view-trigger-histories).
+For a stateful workflow, after each workflow run, you can view the run history, including the status for the overall run, for the trigger, and for each action along with their inputs and outputs. In the Azure portal, run history and trigger histories appear at the workflow level, not the logic app level. To review the trigger histories outside the run history context, see [Review trigger histories](#review-trigger-history).
1. In the Azure portal, on the workflow menu, select **Overview**.
For a stateful workflow, after each workflow run, you can view the run history,
1. To further review the raw inputs and outputs for that step, select **Show raw inputs** or **Show raw outputs**.
-<a name="view-trigger-histories"></a>
+<a name="review-trigger-history"></a>
-## Review trigger histories
+## Review trigger history
-For a stateful workflow, you can review the trigger history for each run, including the trigger status along with inputs and outputs, separately from the [run history context](#view-run-history). In the Azure portal, trigger history and run history appear at the workflow level, not the logic app level. To find this historical data, follow these steps:
+For a stateful workflow, you can review the trigger history for each run, including the trigger status along with inputs and outputs, separately from the [run history context](#review-run-history). In the Azure portal, trigger history and run history appear at the workflow level, not the logic app level. To find this historical data, follow these steps:
1. In the Azure portal, on the workflow menu, select **Overview**.
logic-apps Logic Apps Securing A Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-securing-a-logic-app.md
ms.suite: integration Previously updated : 02/01/2022 Last updated : 03/02/2022
This example shows a resource definition for a nested logic app that permits inb
## Access to logic app operations
-You can permit only specific users or groups to run specific tasks, such as managing, editing, and viewing logic apps. To control their permissions, use [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md). You can assign built-in or customized roles to members who have access to your Azure subscription. Azure Logic Apps has these specific roles:
+For the **Logic App (Consumption)** resource type only, you can set up permissions so that only specific users or groups can run specific tasks, such as managing, editing, and viewing logic apps. To control their permissions, use [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md). You can assign built-in or customized roles to members who have access to your Azure subscription. Azure Logic Apps has the following specific roles:
* [Logic App Contributor](../role-based-access-control/built-in-roles.md#logic-app-contributor): Lets you manage logic apps, but you can't change access to them.
logic-apps Monitor B2b Messages Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/monitor-b2b-messages-log-analytics.md
Last updated 01/30/2020
# Set up Azure Monitor logs and collect diagnostics data for B2B messages in Azure Logic Apps
+> [!NOTE]
+> This article applies only to Consumption logic apps. For information about monitoring Standard logic apps, review
+> [Enable or open Application Insights after deployment for Standard logic apps](create-single-tenant-workflows-azure-portal.md#enable-open-application-insights).
+ After you set up B2B communication between trading partners in your integration account, those partners can exchange messages by using protocols such as AS2, X12, and EDIFACT. To check that this communication works the way you expect, you can set up [Azure Monitor logs](../azure-monitor/logs/data-platform-logs.md) for your integration account. [Azure Monitor](../azure-monitor/overview.md) helps you monitor your cloud and on-premises environments so that you can more easily maintain their availability and performance. By using Azure Monitor logs, you can record and store runtime data and events, such as trigger events, run events, and action events, in a [Log Analytics workspace](../azure-monitor/essentials/resource-logs.md#send-to-log-analytics-workspace). For messages, logging also collects information such as:

* Message count and status
logic-apps Monitor Logic Apps Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/monitor-logic-apps-log-analytics.md
Last updated 09/24/2020
# Set up Azure Monitor logs and collect diagnostics data for Azure Logic Apps
+> [!NOTE]
+> This article applies only to Consumption logic apps. For information about monitoring Standard logic apps, review
+> [Enable or open Application Insights after deployment for Standard logic apps](create-single-tenant-workflows-azure-portal.md#enable-open-application-insights).
+ To get richer debugging information about your logic apps during runtime, you can set up and use [Azure Monitor logs](../azure-monitor/logs/data-platform-logs.md) to record and store information about runtime data and events, such as trigger events, run events, and action events in a [Log Analytics workspace](../azure-monitor/essentials/resource-logs.md#send-to-log-analytics-workspace). [Azure Monitor](../azure-monitor/overview.md) helps you monitor your cloud and on-premises environments so that you can more easily maintain their availability and performance. By using Azure Monitor logs, you can create [log queries](../azure-monitor/logs/log-query-overview.md) that help you collect and review this information. You can also [use this diagnostics data with other Azure services](#extend-data), such as Azure Storage and Azure Event Hubs. To set up logging for your logic app, you can [enable Log Analytics when you create your logic app](#logging-for-new-logic-apps), or you can [install the Logic Apps Management solution](#install-management-solution) in your Log Analytics workspace for existing logic apps. This solution provides aggregated information for your logic app runs and includes specific details such as status, execution time, resubmission status, and correlation IDs. Then, to enable logging and creating queries for this information, [set up Azure Monitor logs](#set-up-resource-logs).
Each diagnostic event has details about your logic app and that event, for examp
} ```
- This example show multiple tracked properties:
+ This example shows multiple tracked properties:
``` json "HTTP": {
logic-apps Monitor Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/monitor-logic-apps.md
Last updated 05/04/2020
# Monitor run status, review trigger history, and set up alerts for Azure Logic Apps
-After you [create and run a logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md), you can check that logic app's run status, [runs history](#review-runs-history), [trigger history](#review-trigger-history), and performance. To get notifications about failures or other possible problems, set up [alerts](#add-azure-alerts). For example, you can create an alert that detects "when more than five runs fail in an hour."
+> [!NOTE]
+> This article applies only to Consumption logic apps. For information about reviewing run status and monitoring for Standard logic apps,
+> review the following sections in [Create an integration workflow with single-tenant Azure Logic Apps](create-single-tenant-workflows-azure-portal.md):
+> [Review run history](create-single-tenant-workflows-azure-portal.md#review-run-history), [Review trigger history](create-single-tenant-workflows-azure-portal.md#review-trigger-history), and [Enable or open Application Insights after deployment](create-single-tenant-workflows-azure-portal.md#enable-open-application-insights).
+
+After you create and run a [Consumption logic app workflow](quickstart-create-first-logic-app-workflow.md), you can check that workflow's run status, [runs history](#review-runs-history), [trigger history](#review-trigger-history), and performance. To get notifications about failures or other possible problems, set up [alerts](#add-azure-alerts). For example, you can create an alert that detects "when more than five runs fail in an hour."
For real-time event monitoring and richer debugging, set up diagnostics logging for your logic app by using [Azure Monitor logs](../azure-monitor/overview.md). This Azure service helps you monitor your cloud and on-premises environments so that you can more easily maintain their availability and performance. You can then find and view events, such as trigger events, run events, and action events. By storing this information in [Azure Monitor logs](../azure-monitor/logs/data-platform-logs.md), you can create [log queries](../azure-monitor/logs/log-query-overview.md) that help you find and analyze this information. You can also use this diagnostic data with other Azure services, such as Azure Storage and Azure Event Hubs. For more information, see [Monitor logic apps by using Azure Monitor](../logic-apps/monitor-logic-apps-log-analytics.md).
For real-time event monitoring and richer debugging, set up diagnostics logging
Each time that the trigger fires for an item or event, the Logic Apps engine creates and runs a separate workflow instance for each item or event. By default, each workflow instance runs in parallel so that no workflow has to wait before starting a run. You can review what happened during that run, including the status for each step in the workflow plus the inputs and outputs for each step.
-1. In the [Azure portal](https://portal.azure.com), find and open your logic app in the Logic App Designer.
+1. In the [Azure portal](https://portal.azure.com), find and open your logic app workflow in the designer.
To find your logic app, in the main Azure search box, enter `logic apps`, and then select **Logic apps**.
Each time that the trigger fires for an item or event, the Logic Apps engine cre
Each logic app run starts with a trigger. The trigger history lists all the trigger attempts that your logic app made and information about the inputs and outputs for each trigger attempt.
-1. In the [Azure portal](https://portal.azure.com), find and open your logic app in the Logic App Designer.
+1. In the [Azure portal](https://portal.azure.com), find and open your logic app workflow in the designer.
- To find your logic app , in the main Azure search box, enter `logic apps`, and then select **Logic Apps**.
+ To find your logic app, in the main Azure search box, enter `logic apps`, and then select **Logic Apps**.
![Find and select "Logic Apps" service](./media/monitor-logic-apps/find-your-logic-app.png)
Each logic app run starts with a trigger. The trigger history lists all the trig
> [!TIP]
- > You can recheck the trigger without waiting for the next recurrence. On the overview toolbar, select **Run trigger**,
- > and select the trigger, which forces a check. Or, select **Run** on Logic Apps Designer toolbar.
+ > You can recheck the trigger without waiting for the next recurrence. On the overview toolbar, select **Run Trigger**,
+ > and select the trigger, which forces a check. Or, select **Run Trigger** on the designer toolbar.
1. To view information about a specific trigger attempt, on the trigger pane, select that trigger event. If the list shows many trigger attempts, and you can't find the entry that you want, try filtering the list. If you don't find the data that you expect, try selecting **Refresh** on the toolbar.
logic-apps Quickstart Create First Logic App Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/quickstart-create-first-logic-app-workflow.md
Title: Quickstart - Create automated workflows with Azure Logic Apps in the Azure portal
-description: Create your first automated integration workflow with multi-tenant Azure Logic Apps in the Azure portal.
+ Title: Quickstart - Create automated integration workflows in the Azure portal
+description: Create your first automated integration workflow using multi-tenant Azure Logic Apps in the Azure portal.
ms.suite: integration-+ Previously updated : 02/24/2022
-#Customer intent: As a developer, I want to create my first automated integration workflow by using Azure Logic Apps in the Azure portal
Last updated : 03/02/2022+
+#Customer intent: As a developer, I want to create my first automated integration workflow using Azure Logic Apps in the Azure portal.
# Quickstart: Create an integration workflow with multi-tenant Azure Logic Apps and the Azure portal
-This quickstart shows how to create an example automated workflow that integrates two services, an RSS feed for a website and an email account, when you use [*multi-tenant* Azure Logic Apps](logic-apps-overview.md). While this example is cloud-based, Azure Logic Apps supports workflows that connect apps, data, services, and systems across cloud, on premises, and hybrid environments. For more information about multi-tenant versus single-tenant Azure Logic Apps, review [Single-tenant versus multi-tenant and integration service environment](single-tenant-overview-compare.md).
+This quickstart shows how to create an example automated workflow that integrates two services, an RSS feed for a website and an email account, and that runs in [*multi-tenant* Azure Logic Apps](logic-apps-overview.md). While this example is cloud-based, Azure Logic Apps supports workflows that connect apps, data, services, and systems across cloud, on premises, and hybrid environments. For more information about multi-tenant versus single-tenant Azure Logic Apps, review [Single-tenant versus multi-tenant and integration service environment](single-tenant-overview-compare.md).
In this example, you create a logic app resource and workflow that uses the RSS connector and the Office 365 Outlook connector. The resource runs in multi-tenant Azure Logic Apps and is based on the [Consumption pricing model](logic-apps-pricing.md#consumption-pricing). The RSS connector has a trigger that checks an RSS feed, based on a schedule. The Office 365 Outlook connector has an action that sends an email for each new item. The connectors in this example are only two among the [hundreds of connectors](/connectors/connector-reference/connector-reference-logicapps-connectors) that you can use in a workflow.
The following screenshot shows the high-level example workflow:
As you progress through this quickstart, you'll learn these basic steps:
-* Create a logic app resource that runs in the multi-tenant Azure Logic Apps environment.
+* Create a Consumption logic app resource that runs in the multi-tenant Azure Logic Apps environment.
* Select the blank logic app template. * Add a trigger that specifies when to run the workflow. * Add an action that performs a task after the trigger fires.
To create and manage a logic app resource using other tools, review these other
## Prerequisites
-* Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* An email account from a service that works with Azure Logic Apps, such as Office 365 Outlook or Outlook.com. For other supported email providers, review [Connectors for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors).
To create and manage a logic app resource using other tools, review these other
|-|-|-|
| **Subscription** | <*Azure-subscription-name*> | The name of your Azure subscription. |
| **Resource Group** | <*Azure-resource-group-name*> | The [Azure resource group](../azure-resource-manager/management/overview.md#terminology) name, which must be unique across regions. This example uses "My-First-LA-RG". |
- | **Type** | **Consumption** | The logic app resource type and billing model to use for your resource: <p><p>- **Consumption**: This logic app resource type runs in global, multi-tenant Azure Logic Apps and uses the [Consumption billing model](logic-apps-pricing.md#consumption-pricing). This example uses this **Consumption** model. <p>- **Standard**: This logic app resource type runs in single-tenant Azure Logic Apps and uses the [Standard billing model](logic-apps-pricing.md#standard-pricing). |
- | **Logic App name** | <*logic-app-name*> | Your logic app resource name, which must be unique across regions. This example uses "My-First-Logic-App". <p><p>**Important**: This name can contain only letters, numbers, hyphens (`-`), underscores (`_`), parentheses (`(`, `)`), and periods (`.`). |
+ | **Type** | **Consumption** | The logic app resource type and billing model to use for your resource: <p><p>- **Consumption**: This logic app resource type runs in global, multi-tenant Azure Logic Apps and uses the [Consumption billing model](logic-apps-pricing.md#consumption-pricing). <p>- **Standard**: This logic app resource type runs in single-tenant Azure Logic Apps and uses the [Standard billing model](logic-apps-pricing.md#standard-pricing). <br><br>To continue following this quickstart, make sure that you select the **Consumption** option. |
+ | **Logic App name** | <*logic-app-name*> | Your logic app resource name, which must be unique across regions. This example uses **My-First-Logic-App**. <p><p>**Important**: This name can contain only letters, numbers, hyphens (`-`), underscores (`_`), parentheses (`(`, `)`), and periods (`.`). |
| **Publish** | **Workflow** | Available only when you select the [**Standard** logic app resource type](create-single-tenant-workflows-azure-portal.md). By default, **Workflow** is selected for deployment to [single-tenant Azure Logic Apps](single-tenant-overview-compare.md) and creates an empty logic app resource where you add your first workflow. <p><p>**Note**: Currently, the **Docker Container** option requires a [*custom location*](../azure-arc/kubernetes/conceptual-custom-locations.md) on an Azure Arc enabled Kubernetes cluster, which you can use with [Azure Arc enabled Logic Apps (Standard)](azure-arc-enabled-logic-apps-overview.md). The resource locations for your logic app, custom location, and cluster must all be the same. |
- | **Region** | <*Azure-region*> | The Azure datacenter region where to store your app's information. This example uses "West US". <p>**Note**: If your subscription is associated with an [integration service environment](connect-virtual-network-vnet-isolated-environment-overview.md), this list includes those environments. |
+ | **Region** | <*Azure-region*> | The Azure datacenter region where to store your app's information. This example selects the **West US** region. <p>**Note**: If your subscription is associated with an [integration service environment](connect-virtual-network-vnet-isolated-environment-overview.md), this list includes those environments. |
| **Enable log analytics** | **No** | Available only when you select the **Consumption** logic app resource type. <p><p>Change this option only when you want to enable diagnostic logging. For this example, leave this option unselected. |
||||

![Screenshot showing the Azure portal and logic app resource creation page with details for new logic app.](./media/quickstart-create-first-logic-app-workflow/create-logic-app-settings.png)
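The naming rule from the **Logic App name** row above can be expressed as a quick client-side check. This is only an illustrative sketch (the function name and regular expression are assumptions, not part of Azure Logic Apps); the portal performs the authoritative validation:

```python
import re

# Allowed characters per the table above: letters, numbers,
# hyphens (-), underscores (_), parentheses, and periods (.)
NAME_PATTERN = re.compile(r"^[A-Za-z0-9\-_().]+$")

def is_valid_logic_app_name(name: str) -> bool:
    """Return True when the name uses only the documented characters."""
    return bool(NAME_PATTERN.match(name))

print(is_valid_logic_app_name("My-First-Logic-App"))  # True
print(is_valid_logic_app_name("bad name!"))           # False
```

A check like this can catch an invalid name before you submit the creation form.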
-1. When you're ready, select **Review + Create**. On the validation page, confirm the details that you provided, and select **Create**.
+1. When you're ready, select **Review + Create**.
+
+1. On the validation page that appears, confirm all the information that you provided, and select **Create**.
## Select the blank template
To create and manage a logic app resource using other tools, review these other
![Screenshot showing the resource deployment page and selected button, "Go to resource".](./media/quickstart-create-first-logic-app-workflow/go-to-new-logic-app-resource.png)
- The workflow designer opens and shows a page with an introduction video and commonly used triggers.
+ The designer's template page opens to show an introduction video and commonly used triggers.
+
+1. Scroll down past the video and the section named **Start with a common trigger**.
1. Under **Templates**, select **Blank Logic App**.
- ![Screenshot showing the workflow designer, template gallery, and selected template, "Blank Logic App".](./media/quickstart-create-first-logic-app-workflow/choose-logic-app-template.png)
+ ![Screenshot showing the template gallery and selected template, "Blank Logic App".](./media/quickstart-create-first-logic-app-workflow/choose-logic-app-template.png)
- After you select the template, the designer now shows an empty workflow surface.
+ After you select the template, the designer shows an empty workflow.
<a name="add-rss-trigger"></a> ## Add the trigger
-A workflow always starts with a single [trigger](../logic-apps/logic-apps-overview.md#how-do-logic-apps-work), which specifies the condition to meet before running any actions in the workflow. Each time the trigger fires, Azure Logic Apps creates and runs a workflow instance. If the trigger doesn't fire, no instance is created nor run. You can start a workflow by choosing from many different triggers.
+A workflow always starts with a single [trigger](logic-apps-overview.md#how-do-logic-apps-work), which specifies the condition to meet before running any actions in the workflow. Each time the trigger fires, Azure Logic Apps creates and runs a workflow instance. If the trigger doesn't fire, no instance is created or run. You can start a workflow by choosing from many different triggers.
This example uses an RSS trigger that checks an RSS feed, based on a schedule. If a new item exists in the feed, the trigger fires, and a new workflow instance starts to run. If multiple new items exist between checks, the trigger fires for each item, and a separate new workflow instance runs for each item.
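Conceptually, a polling trigger like the RSS one compares the feed against what it saw on the previous check and fires once per new item. The following plain-Python sketch illustrates that idea only; it is not the Logic Apps runtime, and the sample feed and helper names are assumptions:

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical RSS payload for illustration.
SAMPLE_FEED = """<rss><channel>
  <item><title>New post</title></item>
  <item><title>Old post</title></item>
</channel></rss>"""

def new_items(feed_xml, seen_titles):
    """Return feed items not yet seen; each one would start a separate workflow instance."""
    root = ET.fromstring(feed_xml)
    return [item.findtext("title")
            for item in root.iter("item")
            if item.findtext("title") not in seen_titles]

print(new_items(SAMPLE_FEED, {"Old post"}))  # ['New post']
```

If two new items appear between checks, the list has two entries, mirroring how the trigger fires once per item.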
-1. In the workflow designer, under the search box, select **All**.
+1. Under the designer search box, select **All**.
-1. To find the RSS trigger, in the search box, enter `rss`. From the **Triggers** list, select the RSS trigger, **When a feed item is published**.
+1. In the designer search box, enter **rss**. From the **Triggers** list, select the RSS trigger, **When a feed item is published**.
![Screenshot showing the workflow designer with "rss" in the search box and the selected RSS trigger, "When a feed item is published".](./media/quickstart-create-first-logic-app-workflow/add-rss-trigger-new-feed-item.png)
This example uses an RSS trigger that checks an RSS feed, based on a schedule. I
![Screenshot that shows the collapsed trigger shape.](./media/quickstart-create-first-logic-app-workflow/collapse-trigger-shape.png)
-1. When you're done, save your logic app, which instantly goes live in the Azure portal. On the designer toolbar, select **Save**.
+1. On the designer toolbar, select **Save** to save your logic app, which instantly goes live in the Azure portal.
- The trigger won't do anything other than check the RSS feed. So, you need to add an action that defines what happens when the trigger fires.
+ The trigger doesn't do anything other than check the RSS feed. So, you need to add an action that defines what happens when the trigger fires.
<a name="add-email-action"></a> ## Add an action
-Following a trigger, an [action](../logic-apps/logic-apps-overview.md#logic-app-concepts) is a subsequent step that runs some operation in the workflow. Any action can use the outputs from the previous step, which can be the trigger or another action. You can choose from many different actions, add multiple actions up to the [limit per workflow](logic-apps-limits-and-config.md#definition-limits), and even create different action paths.
+Following a trigger, an [action](logic-apps-overview.md#logic-app-concepts) is a subsequent step that runs some operation in the workflow. Any action can use the outputs from the previous step, which can be the trigger or another action. You can choose from many different actions, add multiple actions up to the [limit per workflow](logic-apps-limits-and-config.md#definition-limits), and even create different action paths.
This example uses an Office 365 Outlook action that sends an email each time that the trigger fires for a new RSS feed item. If multiple new items exist between checks, you receive multiple emails.
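The per-item behavior described above can be sketched as a simple loop: one action run, and one email, per new feed item. This is a hedged illustration only; `send_email` is a stand-in for the Office 365 Outlook action, and the output field names (`feedTitle`, `primaryFeedLink`) are assumptions modeled on the RSS trigger's outputs:

```python
def send_email(subject, body):
    # Stand-in for the Office 365 Outlook "Send an email" action.
    return f"To inbox: {subject} | {body}"

def run_workflow(new_feed_items):
    # One workflow instance, and therefore one email, per new item found between checks.
    return [send_email(subject=f"New RSS item: {item['feedTitle']}",
                       body=item["primaryFeedLink"])
            for item in new_feed_items]

emails = run_workflow([
    {"feedTitle": "Post A", "primaryFeedLink": "https://example.com/a"},
    {"feedTitle": "Post B", "primaryFeedLink": "https://example.com/b"},
])
print(len(emails))  # 2
```

The action consumes the trigger's outputs for each item, which is why you receive multiple emails when multiple new items exist between checks.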
This example uses an Office 365 Outlook action that sends an email each time tha
![Screenshot showing the workflow trigger and the selected button, "New step".](./media/quickstart-create-first-logic-app-workflow/add-new-step-under-trigger.png)
-1. Under **Choose an operation** and the search box, select **All**.
+1. Under the **Choose an operation** search box, select **All**.
-1. In the search box, enter `send an email` so that you can find connectors that offer this action. To filter the **Actions** list to a specific app or service, select that app or service first.
+1. In the search box, enter **send an email**. To filter the **Actions** list to a specific app or service, select the icon for that app or service first.
For example, if you have a Microsoft work or school account and want to use Office 365 Outlook, select **Office 365 Outlook**. Or, if you have a personal Microsoft account, select **Outlook.com**. This example continues with Office 365 Outlook.
This example uses an Office 365 Outlook action that sends an email each time tha
> For example, if you use Azure Resource Manager templates for deployment, you can increase security on inputs
> that change often by parameterizing values such as connection details. For more information, review these topics:
>
- > * [Template parameters for deployment](../logic-apps/logic-apps-azure-resource-manager-templates-overview.md#template-parameters)
- > * [Authorize OAuth connections](../logic-apps/logic-apps-deploy-azure-resource-manager-templates.md#authorize-oauth-connections)
- > * [Authenticate access with managed identities](../logic-apps/create-managed-service-identity.md)
- > * [Authenticate connections for logic app deployment](../logic-apps/logic-apps-azure-resource-manager-templates-overview.md#authenticate-connections)
+ > * [Template parameters for deployment](logic-apps-azure-resource-manager-templates-overview.md#template-parameters)
+ > * [Authorize OAuth connections](logic-apps-deploy-azure-resource-manager-templates.md#authorize-oauth-connections)
+ > * [Authenticate access with managed identities](create-managed-service-identity.md)
+ > * [Authenticate connections for logic app deployment](logic-apps-azure-resource-manager-templates-overview.md#authenticate-connections)
1. In the **Send an email** action, specify the information to include in the email.
This example uses an Office 365 Outlook action that sends an email each time tha
## Run your workflow
-To check that the workflow runs correctly, you can wait for the trigger to check the RSS feed based on the set schedule. Or, you can manually run the workflow by selecting **Run** on the workflow designer toolbar, as shown in the following screenshot.
+To check that the workflow runs correctly, you can wait for the trigger to check the RSS feed based on the set schedule. Or, you can manually run the workflow by selecting **Run Trigger** on the designer toolbar, as shown in the following screenshot.
![Screenshot showing the workflow designer and the "Run" button selected on the designer toolbar.](./media/quickstart-create-first-logic-app-workflow/run-logic-app-test.png)
When you're done with this quickstart, delete the sample logic app resource and
In this quickstart, you created your first logic app workflow in the Azure portal to check an RSS feed and send an email for each new item. To learn more about advanced scheduled workflows, see the following tutorial:

> [!div class="nextstepaction"]
-> [Check traffic with a scheduled-based logic app](../logic-apps/tutorial-build-schedule-recurring-logic-app-workflow.md)
+> [Check traffic with a schedule-based logic app workflow](tutorial-build-schedule-recurring-logic-app-workflow.md)
machine-learning How To Search Cross Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-search-cross-workspace.md
- Title: Search for machine learning assets across workspaces-
-description: Learn about global search in Azure Machine Learning.
------ Previously updated : 2/16/2022--
-# Search for Azure Machine Learning assets across multiple workspaces (Public Preview)
-
-## Overview
-
-Users can now search for machine learning assets such as jobs, models, and components across all workspaces, resource groups, and subscriptions in their organization through a unified global view.
--
-## Get started
-
-### Global homepage
-
-From this centralized global view, select from recently visited workspaces or browse documentation and tutorial resources.
-
-![Screenshot showing the global view homepage.](./media/how-to-search-cross-workspace/global-home.png)
-
-### Search
-
-Type search text into the global search bar and hit enter to trigger a 'contains' search.
-The search will scan across all metadata fields for the given asset. Results are sorted by relevance as determined by the relevance weightings for the asset columns.
-
-![Screenshot showing the search bar experience.](./media/how-to-search-cross-workspace/search-bar.png)
-
-Use the asset quick links to navigate to search results for jobs, models, and components created by you.
-
-Change the scope of applicable subscriptions and workspaces by clicking the 'Change' link.
-
-![Screenshot showing how to change scope of workspaces and subscriptions reflected in results.](./media/how-to-search-cross-workspace/settings.png)
--
-### Structured search
-
-Click on any number of filters to create more specific search queries. The following filters are supported:
-* Job:
-* Model:
-* Component:
-* Tags:
-* SubmittedBy:
-
-If an asset filter (job, model, component) is present, results will be scoped to those tabs. Other filters will apply to all assets unless an asset filter is also present in the query. Similarly, free text search can be provided alongside filters but will be scoped to the tabs chosen by asset filters if present.
-
-> [!TIP]
-> * Filters search for exact matches of text. Use free text queries for a contains search.
-> * Quotations are required around values that include spaces or other special characters.
-> * If duplicate filters are provided, only the first will be recognized in search results.
-> * Input text of any language is supported but filter strings must match the provided options (ex. submittedBy:).
-> * The tags filter can accept multiple key:value pairs separated by a comma (ex. tags:"key1:value1, key2:value2").
--
-### Results
-
-Explore the Jobs, Models, and Components tabs to view all search results. Click on an asset to be directed to the details page in the context of the relevant workspace. Results from workspaces a user doesn't have access to won't be displayed, click on the 'details' button to view the list of workspaces.
-
-![Screenshot showing search results of query.](./media/how-to-search-cross-workspace/results.png)
-
-### Filters
-
-To add more specificity to the search results, use the column filters sidebar.
-
-### Custom views
-
-Customize the display of columns in the search results table. These views can be saved and shared as well.
-
-![Screenshot showing how to create custom column views on the search results page.](./media/how-to-search-cross-workspace/custom-views.jpg)
--
-### Known issues
-
-If you've previously used this feature, a search result error may occur. Reselect your preferred workspaces in the Directory + Subscription + Workspace tab.
machine-learning How To Secure Inferencing Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-inferencing-vnet.md
Previously updated : 11/05/2021 Last updated : 03/02/2022
aks_target.wait_for_completion(show_output = True)
Azure Container Instances are dynamically created when deploying a model. To enable Azure Machine Learning to create ACI inside the virtual network, you must enable __subnet delegation__ for the subnet used by the deployment. To use ACI in a virtual network to your workspace, use the following steps:
-1. To enable subnet delegation on your virtual network, use the information in the [Add or remove a subnet delegation](../virtual-network/manage-subnet-delegation.md) article. You can enable delegation when creating a virtual network, or add it to an existing network.
+1. Your user account must have permissions to the following actions in Azure role-based access control (Azure RBAC):
- > [!IMPORTANT]
- > When enabling delegation, use `Microsoft.ContainerInstance/containerGroups` as the __Delegate subnet to service__ value.
+ * `Microsoft.Network/virtualNetworks/*/read` on the virtual network resource. This permission isn't needed for Azure Resource Manager template deployments.
+ * `Microsoft.Network/virtualNetworks/subnet/join/action` on the subnet resource.
+
+1. In the [Azure portal](https://portal.azure.com), search for the name of the virtual network. When it appears in the search results, select it.
+1. Select **Subnets**, under **SETTINGS**, and then select the subnet.
+1. On the subnet page, for the **Subnet delegation** list, select `Microsoft.ContainerInstance/containerGroups`.
+1. Deploy the model using [AciWebservice.deploy_configuration()](/python/api/azureml-core/azureml.core.webservice.aci.aciwebservice#deploy-configuration-cpu-cores-none--memory-gb-none--tags-none--properties-none--description-none--location-none--auth-enabled-none--ssl-enabled-none--enable-app-insights-none--ssl-cert-pem-file-none--ssl-key-pem-file-none--ssl-cname-none--dns-name-label-none--primary-key-none--secondary-key-none--collect-model-data-none--cmk-vault-base-url-none--cmk-key-name-none--cmk-key-version-none--vnet-name-none--subnet-name-none-), use the `vnet_name` and `subnet_name` parameters. Set these parameters to the virtual network name and subnet where you enabled delegation.
-2. Deploy the model using [AciWebservice.deploy_configuration()](/python/api/azureml-core/azureml.core.webservice.aci.aciwebservice#deploy-configuration-cpu-cores-none--memory-gb-none--tags-none--properties-none--description-none--location-none--auth-enabled-none--ssl-enabled-none--enable-app-insights-none--ssl-cert-pem-file-none--ssl-key-pem-file-none--ssl-cname-none--dns-name-label-none--primary-key-none--secondary-key-none--collect-model-data-none--cmk-vault-base-url-none--cmk-key-name-none--cmk-key-version-none--vnet-name-none--subnet-name-none-), use the `vnet_name` and `subnet_name` parameters. Set these parameters to the virtual network name and subnet where you enabled delegation.
+For more information, see [Add or remove a subnet delegation](../virtual-network/manage-subnet-delegation.md).
## Limit outbound connectivity from the virtual network
machine-learning How To Train Pytorch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-pytorch.md
Previously updated : 01/14/2020 Last updated : 02/28/2022 #Customer intent: As a Python PyTorch developer, I need to combine open-source with a cloud platform to train, evaluate, and deploy my deep learning models at scale.
In this article, learn how to run your [PyTorch](https://pytorch.org/) training scripts at enterprise scale using Azure Machine Learning.
-The example scripts in this article are used to classify chicken and turkey images to build a deep learning neural network (DNN) based on [PyTorch's transfer learning tutorial](https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html). Transfer learning is a technique that applies knowledge gained from solving one problem to a different but related problem. Transfer learning shortens the training process by requiring less data, time, and compute resources than training from scratch. See the [deep learning vs machine learning](./concept-deep-learning-vs-machine-learning.md#what-is-transfer-learning) article to learn more about transfer learning.
+The example scripts in this article are used to classify chicken and turkey images to build a deep learning neural network (DNN) based on [PyTorch's transfer learning tutorial](https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html). Transfer learning is a technique that applies knowledge gained from solving one problem to a different but related problem. Transfer learning shortens the training process by requiring less data, time, and compute resources than training from scratch. To learn more about transfer learning, see the [deep learning vs machine learning](./concept-deep-learning-vs-machine-learning.md#what-is-transfer-learning) article.
Whether you're training a deep learning PyTorch model from the ground-up or you're bringing an existing model into the cloud, you can use Azure Machine Learning to scale out open-source training jobs using elastic cloud compute resources. You can build, deploy, version, and monitor production-grade models with Azure Machine Learning.
shutil.copy('pytorch_train.py', project_folder)
Create a compute target for your PyTorch job to run on. In this example, create a GPU-enabled Azure Machine Learning compute cluster.

```Python
+
+# Choose a name for your GPU cluster
cluster_name = "gpu-cluster"
+# Verify that cluster does not exist already
try:
    compute_target = ComputeTarget(workspace=ws, name=cluster_name)
    print('Found existing compute target')
except ComputeTargetException:
    compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6', max_nodes=4)
+    # Create the cluster with the specified name and configuration
    compute_target = ComputeTarget.create(ws, cluster_name, compute_config)
+    # Wait for cluster creation to complete; show the output log
    compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)
```
+If you instead want to create a CPU cluster, provide a different VM size to the `vm_size` parameter, such as `STANDARD_D2_V2`.
+
[!INCLUDE [low-pri-note](../../includes/machine-learning-low-pri-vm.md)]

For more information on compute targets, see the [what is a compute target](concept-compute-target.md) article.
To define the [Azure ML Environment](concept-environments.md) that encapsulates
#### Use a curated environment
-Azure ML provides prebuilt, curated environments if you don't want to define your own environment. Azure ML has several CPU and GPU curated environments for PyTorch corresponding to different versions of PyTorch. For more info, see [here](resource-curated-environments.md).
+Azure ML provides prebuilt, [curated environments](resource-curated-environments.md) if you don't want to define your own environment. There are several CPU and GPU curated environments for PyTorch corresponding to different versions of PyTorch.
If you want to use a curated environment, you can run the following command instead:
pytorch_env = Environment.get(workspace=ws, name=curated_env_name)
```

To see the packages included in the curated environment, you can write out the conda dependencies to disk:
+
```python
pytorch_env.save_to_directory(path=curated_env_name)
```

Make sure the curated environment includes all the dependencies required by your training script. If not, you'll have to modify the environment to include the missing dependencies. If the environment is modified, you'll have to give it a new name, as the 'AzureML' prefix is reserved for curated environments. If you modified the conda dependencies YAML file, you can create a new environment from it with a new name, for example:
+
```python
pytorch_env = Environment.from_conda_specification(name='pytorch-1.6-gpu', file_path='./conda_dependencies.yml')
```

If you had instead modified the curated environment object directly, you can clone that environment with a new name:
+
```python
pytorch_env = pytorch_env.clone(new_name='pytorch-1.6-gpu')
```
channels:
- conda-forge
dependencies:
- python=3.6.2
+- pip=21.3.1
- pip:
  - azureml-defaults
  - torch==1.6.0
For more information on creating and using environments, see [Create and use sof
### Create a ScriptRunConfig
-Create a [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig) object to specify the configuration details of your training job, including your training script, environment to use, and the compute target to run on. Any arguments to your training script will be passed via command line if specified in the `arguments` parameter.
+Create a [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig) object to specify the configuration details of your training job, including your training script, environment to use, and the compute target to run on. Any arguments to your training script will be passed via command line if specified in the `arguments` parameter. The following code will configure a single-node PyTorch job.
```python
from azureml.core import ScriptRunConfig
migrate Common Questions Discovery Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/common-questions-discovery-assessment.md
This article answers common questions about discovery, assessment, and dependenc
## What geographies are supported for discovery and assessment with Azure Migrate?
-Review the supported geographies for [public](migrate-support-matrix.md#supported-geographies-public-cloud) and [government clouds](migrate-support-matrix.md#supported-geographies-azure-government).
+Review the supported geographies for [public](migrate-support-matrix.md#public-cloud) and [government clouds](migrate-support-matrix.md#azure-government).
## How many servers can I discover with an appliance?
IOPS to be provisioned = (Throughput discovered) *1024/256
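The formula above can be checked with a quick worked calculation. The sketch below assumes the discovered throughput is in MBps and that the 1024/256 factor comes from converting MB to KB and dividing by a 256 KB I/O size (the function name is an assumption for illustration):

```python
def provisioned_iops(throughput_mbps: float) -> float:
    # IOPS to be provisioned = (throughput discovered, in MBps) * 1024 / 256
    return throughput_mbps * 1024 / 256

print(provisioned_iops(64))  # 256.0
```

For example, a disk with 64 MBps of discovered throughput maps to 256 provisioned IOPS.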
### Does the Ultra disk recommendation consider latency?
-No, currently only disk size, total throughput and total IOPS is used for sizing and costing.
+No, currently only disk size, total throughput, and total IOPS are used for sizing and costing.
### I can see M series supports Ultra disk, but in my assessment where Ultra disk was recommended, it says "No VM found for this location"?
migrate Common Questions Server Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/common-questions-server-migration.md
To learn more, review this [article](./server-migrate-overview.md) to compare mi
### What geographies are supported for migration with Azure Migrate?
-Review the supported geographies for [public](migrate-support-matrix.md#supported-geographies-public-cloud) and [government clouds](migrate-support-matrix.md#supported-geographies-azure-government).
+Review the supported geographies for [public](migrate-support-matrix.md#public-cloud) and [government clouds](migrate-support-matrix.md#azure-government).
### Can I use the same Azure Migrate project to migrate to multiple regions?
migrate Concepts Assessment Calculation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-assessment-calculation.md
This article provides an overview of assessments in the [Azure Migrate: Discover
An assessment with the Discovery and assessment tool measures the readiness and estimates the effect of migrating on-premises servers to Azure. > [!NOTE]
-> In Azure Government, review the [supported target](migrate-support-matrix.md#supported-geographies-azure-government) assessment locations. Note that VM size recommendations in assessments will use the VM series specifically for Government Cloud regions. [Learn more](https://azure.microsoft.com/global-infrastructure/services/?regions=usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-iowa,usgov-texas,usgov-virginia&products=virtual-machines) about VM types.
+> In Azure Government, review the [supported target](migrate-support-matrix.md#azure-government) assessment locations. Note that VM size recommendations in assessments will use the VM series specifically for Government Cloud regions. [Learn more](https://azure.microsoft.com/global-infrastructure/services/?regions=usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-iowa,usgov-texas,usgov-virginia&products=virtual-machines) about VM types.
## Types of assessments
migrate Create Manage Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/create-manage-projects.md
Set up a new project in an Azure subscription.
5. In **Create project**, select the Azure subscription and resource group. Create a resource group if you don't have one.
6. In **Project Details**, specify the project name and the geography in which you want to create the project.
    - The geography is only used to store the metadata gathered from on-premises servers. You can select any target region for migration.
- - Review supported geographies for [public](migrate-support-matrix.md#supported-geographies-public-cloud) and [government clouds](migrate-support-matrix.md#supported-geographies-azure-government).
+ - Review supported geographies for [public](migrate-support-matrix.md#public-cloud) and [government clouds](migrate-support-matrix.md#azure-government).
> [!Note]
migrate How To Create Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-create-assessment.md
Run an assessment as follows:
1. In **Assessment properties** > **Target Properties**:
    - In **Target location**, specify the Azure region to which you want to migrate.
    - Size and cost recommendations are based on the location that you specify. Once you change the target location from default, you will be prompted to specify **Reserved Instances** and **VM series**.
- - In Azure Government, you can target assessments in [these regions](migrate-support-matrix.md#supported-geographies-azure-government)
+ - In Azure Government, you can target assessments in [these regions](migrate-support-matrix.md#azure-government)
    - In **Storage type**,
        - If you want to use performance-based data in the assessment, select **Automatic** for Azure Migrate to recommend a storage type, based on disk IOPS and throughput.
        - Alternatively, select the storage type you want to use for VM when you migrate it.
migrate How To Use Azure Migrate With Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-use-azure-migrate-with-private-endpoints.md
Review the following required permissions and the supported scenarios and tools.
### Supported geographies
-The functionality is now in preview in supported [public cloud](./migrate-support-matrix.md#supported-geographies-public-cloud) and [government cloud geographies.](./migrate-support-matrix.md#supported-geographies-azure-government)
+The functionality is now in preview in supported [public cloud](./migrate-support-matrix.md#public-cloud) and [government cloud geographies.](./migrate-support-matrix.md#azure-government)
### Required permissions
migrate Migrate Appliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-appliance.md
The Azure Migrate appliance is used in the following scenarios.
**Scenario** | **Tool** | **Used to** | |
-**Discovery and assessment of servers running in VMware environment** | Azure Migrate: Discovery and assessment | Discover servers running in your VMware environment<br/><br/> Perform discovery of installed software inventory, ASP.NET web apps, SQL Server instances and databases, and agentless dependency analysis.<br/><br/> Collect server configuration and performance metadata for assessments.
-**Agentless migration of servers running in VMware environment** | Azure Migrate: Server Migration | Discover servers running in your VMware environment. <br/><br/> Replicate servers without installing any agents on them.
-**Discovery and assessment of servers running in Hyper-V environment** | Azure Migrate: Discovery and assessment | Discover servers running in your Hyper-V environment.<br/><br/> Collect server configuration and performance metadata for assessments.
-**Discovery and assessment of physical or virtualized servers on-premises** | Azure Migrate: Discovery and assessment | Discover physical or virtualized servers on-premises.<br/><br/> Collect server configuration and performance metadata for assessments.
+**Discovery and assessment of servers running in VMware environment** | Azure Migrate: Discovery and assessment | Discover servers running in your VMware environment<br><br> Perform discovery of installed software inventory, ASP.NET web apps, SQL Server instances and databases, and agentless dependency analysis.<br><br> Collect server configuration and performance metadata for assessments.
+**Agentless migration of servers running in VMware environment** | Azure Migrate: Server Migration | Discover servers running in your VMware environment. <br><br> Replicate servers without installing any agents on them.
+**Discovery and assessment of servers running in Hyper-V environment** | Azure Migrate: Discovery and assessment | Discover servers running in your Hyper-V environment.<br><br> Collect server configuration and performance metadata for assessments.
+**Discovery and assessment of physical or virtualized servers on-premises** | Azure Migrate: Discovery and assessment | Discover physical or virtualized servers on-premises.<br><br> Collect server configuration and performance metadata for assessments.
## Deployment methods
The following table summarizes the Azure Migrate appliance requirements for VMwa
**Requirement** | **VMware** | **Permissions** | To access the appliance configuration manager locally or remotely, you need to have a local or domain user account with administrative privileges on the appliance server.
-**Appliance services** | The appliance has the following
-**Project limits** | An appliance can only be registered with a single project.<br/> A single project can have multiple registered appliances.
-**Discovery limits** | An appliance can discover up to 10,000 severs running across multiple vCenter Servers.<br/>A single appliance can connect to up to 10 vCenter Servers.
-**Supported deployment** | Deploy as new server running on vCenter Server using OVA template.<br/><br/> Deploy on an existing server running Windows Server 2016 using PowerShell installer script.
-**OVA template** | Download from project or from [here](https://go.microsoft.com/fwlink/?linkid=2140333)<br/><br/> Download size is 11.9 GB.<br/><br/> The downloaded appliance template comes with a Windows Server 2016 evaluation license, which is valid for 180 days.<br/>If the evaluation period is close to expiry, we recommend that you download and deploy a new appliance using OVA template, or you activate the operating system license of the appliance server.
+**Appliance services** | The appliance has the following
+**Project limits** | An appliance can only be registered with a single project.<br> A single project can have multiple registered appliances.
+**Discovery limits** | An appliance can discover up to 10,000 servers running across multiple vCenter Servers.<br>A single appliance can connect to up to 10 vCenter Servers.
+**Supported deployment** | Deploy as a new server running on vCenter Server using an OVA template.<br><br> Deploy on an existing server running Windows Server 2016 using the PowerShell installer script.
+**OVA template** | Download from the project or from [here](https://go.microsoft.com/fwlink/?linkid=2140333)<br><br> Download size is 11.9 GB.<br><br> The downloaded appliance template comes with a Windows Server 2016 evaluation license, which is valid for 180 days.<br>If the evaluation period is close to expiry, we recommend that you download and deploy a new appliance using the OVA template, or activate the operating system license of the appliance server.
**OVA verification** | [Verify](tutorial-discover-vmware.md#verify-security) the OVA template downloaded from project by checking the hash values.
-**PowerShell script** | Refer to this [article](./deploy-appliance-script.md#set-up-the-appliance-for-vmware) on how to deploy an appliance using the PowerShell installer script.<br/><br/>
-**Hardware and network requirements** | The appliance should run on server with Windows Server 2016, 32-GB RAM, 8 vCPUs, around 80 GB of disk storage, and an external virtual switch.<br/> The appliance requires internet access, either directly or through a proxy.<br/><br/> If you deploy the appliance using OVA template, you need enough resources on the vCenter Server to create a server that meets the hardware requirements.<br/><br/> If you run the appliance on an existing server, make sure that its running Windows Server 2016, and meets hardware requirements.<br/>_(Currently the deployment of appliance is only supported on Windows Server 2016.)_
-**VMware requirements** | If you deploy the appliance as a server on vCenter Server, it must be deployed on a vCenter Server running 5.5, 6.0, 6.5, 6.7 or 7.0 and an ESXi host running version 5.5 or later.<br/><br/>
+**PowerShell script** | Refer to this [article](./deploy-appliance-script.md#set-up-the-appliance-for-vmware) on how to deploy an appliance using the PowerShell installer script.<br><br>
+**Hardware and network requirements** | The appliance should run on a server with Windows Server 2016, 32-GB RAM, 8 vCPUs, around 80 GB of disk storage, and an external virtual switch.<br> The appliance requires internet access, either directly or through a proxy.<br><br> If you deploy the appliance using the OVA template, you need enough resources on the vCenter Server to create a server that meets the hardware requirements.<br><br> If you run the appliance on an existing server, make sure that it is running Windows Server 2016 and meets the hardware requirements.<br>_(Currently, deployment of the appliance is only supported on Windows Server 2016.)_
+**VMware requirements** | If you deploy the appliance as a server on vCenter Server, it must be deployed on a vCenter Server running 5.5, 6.0, 6.5, 6.7 or 7.0 and an ESXi host running version 5.5 or later.<br><br>
**VDDK (agentless migration)** | To use the appliance for agentless migration of servers, the VMware vSphere VDDK must be installed on the appliance server. ## Appliance - Hyper-V
The following table summarizes the Azure Migrate appliance requirements for VMwa
**Requirement** | **Hyper-V** | **Permissions** | To access the appliance configuration manager locally or remotely, you need to have a local or domain user account with administrative privileges on the appliance server.
-**Appliance services** | The appliance has the following
-**Project limits** | An appliance can only be registered with a single project.<br/> A single project can have multiple registered appliances.
-**Discovery limits** | An appliance can discover up to 5000 servers running in Hyper-V environment.<br/> An appliance can connect to up to 300 Hyper-V hosts.
-**Supported deployment** | Deploy as server running on a Hyper-V host using a VHD template.<br/><br/> Deploy on an existing server running Windows Server 2016 using PowerShell installer script.
-**VHD template** | Zip file that includes a VHD. Download from project or from [here](https://go.microsoft.com/fwlink/?linkid=2140422).<br/><br/> Download size is 8.91 GB.<br/><br/> The downloaded appliance template comes with a Windows Server 2016 evaluation license, which is valid for 180 days.<br/> If the evaluation period is close to expiry, we recommend that you download and deploy a new appliance, or that you activate the operating system license of the appliance server.
+**Appliance services** | The appliance has the following
+**Project limits** | An appliance can only be registered with a single project.<br> A single project can have multiple registered appliances.
+**Discovery limits** | An appliance can discover up to 5,000 servers running in a Hyper-V environment.<br> An appliance can connect to up to 300 Hyper-V hosts.
+**Supported deployment** | Deploy as a server running on a Hyper-V host using a VHD template.<br><br> Deploy on an existing server running Windows Server 2016 using the PowerShell installer script.
+**VHD template** | Zip file that includes a VHD. Download from the project or from [here](https://go.microsoft.com/fwlink/?linkid=2140422).<br><br> Download size is 8.91 GB.<br><br> The downloaded appliance template comes with a Windows Server 2016 evaluation license, which is valid for 180 days. If the evaluation period is close to expiry, we recommend that you download and deploy a new appliance, or that you activate the operating system license of the appliance server.
**VHD verification** | [Verify](tutorial-discover-hyper-v.md#verify-security) the VHD template downloaded from project by checking the hash values.
-**PowerShell script** | Refer to this [article](./deploy-appliance-script.md#set-up-the-appliance-for-hyper-v) on how to deploy an appliance using the PowerShell installer script.<br/>
-**Hardware and network requirements** | The appliance should run on server with Windows Server 2016, 16-GB RAM, 8 vCPUs, around 80 GB of disk storage, and an external virtual switch.<br/> The appliance needs a static or dynamic IP address, and requires internet access, either directly or through a proxy.<br/><br/> If you run the appliance as a server running on a Hyper-V host, you need enough resources on the host to create a server that meets the hardware requirements.<br/><br/> If you run the appliance on an existing server, make sure that its running Windows Server 2016, and meets hardware requirements.<br/>_(Currently the deployment of appliance is only supported on Windows Server 2016.)_
-**Hyper-V requirements** | If you deploy the appliance with the VHD template, the appliance provided by Azure Migrate is Hyper-V VM version 5.0.<br/><br/> The Hyper-V host must be running Windows Server 2012 R2 or later.
+**PowerShell script** | Refer to this [article](./deploy-appliance-script.md#set-up-the-appliance-for-hyper-v) on how to deploy an appliance using the PowerShell installer script.<br>
+**Hardware and network requirements** | The appliance should run on a server with Windows Server 2016, 16-GB RAM, 8 vCPUs, around 80 GB of disk storage, and an external virtual switch.<br> The appliance needs a static or dynamic IP address, and requires internet access, either directly or through a proxy.<br><br> If you run the appliance as a server on a Hyper-V host, you need enough resources on the host to create a server that meets the hardware requirements.<br><br> If you run the appliance on an existing server, make sure that it is running Windows Server 2016 and meets the hardware requirements.<br>_(Currently, deployment of the appliance is only supported on Windows Server 2016.)_
+**Hyper-V requirements** | If you deploy the appliance with the VHD template, the appliance provided by Azure Migrate is Hyper-V VM version 5.0.<br><br> The Hyper-V host must be running Windows Server 2012 R2 or later.
## Appliance - Physical **Requirement** | **Physical** | **Permissions** | To access the appliance configuration manager locally or remotely, you need to have a local or domain user account with administrative privileges on the appliance server.
-**Appliance services** | The appliance has the following
-**Project limits** | An appliance can only be registered with a single project.<br/> A single project can have multiple registered appliances.<br/>
+**Appliance services** | The appliance has the following
+**Project limits** | An appliance can only be registered with a single project.<br> A single project can have multiple registered appliances.<br>
**Discovery limits** | An appliance can discover up to 1000 physical servers. **Supported deployment** | Deploy on an existing server running Windows Server 2016 using PowerShell installer script.
-**PowerShell script** | Download the script (AzureMigrateInstaller.ps1) in a zip file from the project or from [here](https://go.microsoft.com/fwlink/?linkid=2140334). [Learn more](tutorial-discover-physical.md).<br/><br/> Download size is 85.8 MB.
+**PowerShell script** | Download the script (AzureMigrateInstaller.ps1) in a zip file from the project or from [here](https://go.microsoft.com/fwlink/?linkid=2140334). [Learn more](tutorial-discover-physical.md).<br><br> Download size is 85.8 MB.
**Script verification** | [Verify](tutorial-discover-physical.md#verify-security) the PowerShell installer script downloaded from project by checking the hash values.
-**Hardware and network requirements** | The appliance should run on server with Windows Server 2016, 16-GB RAM, 8 vCPUs, around 80 GB of disk storage.<br/> The appliance needs a static or dynamic IP address, and requires internet access, either directly or through a proxy.<br/><br/> If you run the appliance on an existing server, make sure that its running Windows Server 2016, and meets hardware requirements.<br/>_(Currently the deployment of appliance is only supported on Windows Server 2016.)_
+**Hardware and network requirements** | The appliance should run on a server with Windows Server 2016, 16-GB RAM, 8 vCPUs, and around 80 GB of disk storage.<br> The appliance needs a static or dynamic IP address, and requires internet access, either directly or through a proxy.<br><br> If you run the appliance on an existing server, make sure that it is running Windows Server 2016 and meets the hardware requirements.<br>_(Currently, deployment of the appliance is only supported on Windows Server 2016.)_
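The OVA, VHD, and script verification rows in these tables all come down to the same check: compute a hash of the downloaded file locally and compare it to the value published for that download. A minimal sketch in shell — the file name `appliance-download` and its contents are placeholders standing in for the real template, and the expected hash would normally be copied from the verification page rather than computed on the spot:

```shell
# Sketch only: "appliance-download" stands in for the real OVA/VHD/script.
printf 'demo-bytes' > appliance-download

# In practice, "expected" is the hash published in the verification docs;
# here it is computed from the placeholder file so the sketch is self-contained.
expected=$(sha256sum appliance-download | awk '{print $1}')
actual=$(sha256sum appliance-download | awk '{print $1}')

if [ "$actual" = "$expected" ]; then
  echo "hash OK"
else
  echo "hash MISMATCH - do not deploy this download"
fi
```

On Windows, `Get-FileHash` in PowerShell serves the same purpose; the comparison logic is identical.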
## URL access
The Azure Migrate appliance needs connectivity to the internet.
**URL** | **Details** | | *.portal.azure.com | Navigate to the Azure portal.
-*.windows.net <br/> *.msftauth.net <br/> *.msauth.net <br/> *.microsoft.com <br/> *.live.com <br/> *.office.com <br/> *.microsoftonline.com <br/> *.microsoftonline-p.com <br/> *.microsoftazuread-sso.com | Used for access control and identity management by Azure Active Directory
+*.windows.net <br> *.msftauth.net <br> *.msauth.net <br> *.microsoft.com <br> *.live.com <br> *.office.com <br> *.microsoftonline.com <br> *.microsoftonline-p.com <br> *.microsoftazuread-sso.com | Used for access control and identity management by Azure Active Directory
management.azure.com | Used for resource deployments and management operations *.services.visualstudio.com | Upload appliance logs used for internal monitoring.
-*.vault.azure.net | Manage secrets in the Azure Key Vault.<br/> Note: Ensure servers to replicate have access to this.
+*.vault.azure.net | Manage secrets in the Azure Key Vault.<br> Note: Ensure servers to replicate have access to this.
aka.ms/* | Allow access to these links; used to download and install the latest updates for appliance services. download.microsoft.com/download | Allow downloads from Microsoft download center. *.servicebus.windows.net | Communication between the appliance and the Azure Migrate service.
-*.discoverysrv.windowsazure.com <br/> *.migration.windowsazure.com | Connect to Azure Migrate service URLs.
-*.hypervrecoverymanager.windowsazure.com | **Used for VMware agentless migration**<br/><br/> Connect to Azure Migrate service URLs.
-*.blob.core.windows.net | **Used for VMware agentless migration**<br/><br/>Upload data to storage for migration.
+*.discoverysrv.windowsazure.com <br> *.migration.windowsazure.com | Connect to Azure Migrate service URLs.
+*.hypervrecoverymanager.windowsazure.com | **Used for VMware agentless migration**<br><br> Connect to Azure Migrate service URLs.
+*.blob.core.windows.net | **Used for VMware agentless migration**<br><br>Upload data to storage for migration.
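The wildcard entries in the URL table above are matched against full hostnames by your proxy or firewall. As a hedged illustration of how that matching behaves, the shell `case` sketch below uses a hypothetical helper (`matches_allowlist` is not a real tool) covering only three sample entries from the table:

```shell
# Illustrative only: real proxies and firewalls perform this matching natively.
# matches_allowlist is a hypothetical helper covering three sample allowlist entries.
matches_allowlist() {
  case "$1" in
    *.portal.azure.com|*.servicebus.windows.net|*.blob.core.windows.net)
      echo allowed ;;
    *)
      echo blocked ;;
  esac
}

matches_allowlist "myaccount.blob.core.windows.net"   # → allowed
matches_allowlist "example.com"                       # → blocked
```

Note that `*.blob.core.windows.net` matches any subdomain but not the bare `blob.core.windows.net`, which is the usual wildcard-allowlist semantics.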
### Government cloud URLs
download.microsoft.com/download | Allow downloads from Microsoft download center
| | *.portal.azure.us | Navigate to the Azure portal. graph.windows.net | Sign in to your Azure subscription.
-login.microsoftonline.us | Used for access control and identity management by Azure Active Directory
-management.usgovcloudapi.net | Used for resource deployments and management operations
+login.microsoftonline.us | Used for access control and identity management by Azure Active Directory.
+management.usgovcloudapi.net | Used for resource deployments and management operations.
*.services.visualstudio.com | Upload appliance logs used for internal monitoring. *.vault.usgovcloudapi.net | Manage secrets in the Azure Key Vault. aka.ms/* | Allow access to these links; used to download and install the latest updates for appliance services. download.microsoft.com/download | Allow downloads from Microsoft download center. *.servicebus.usgovcloudapi.net | Communication between the appliance and the Azure Migrate service.
-*.discoverysrv.windowsazure.us <br/> *.migration.windowsazure.us | Connect to Azure Migrate service URLs.
-*.hypervrecoverymanager.windowsazure.us | **Used for VMware agentless migration**<br/><br/> Connect to Azure Migrate service URLs.
-*.blob.core.usgovcloudapi.net | **Used for VMware agentless migration**<br/><br/>Upload data to storage for migration.
+*.discoverysrv.windowsazure.us <br> *.migration.windowsazure.us | Connect to Azure Migrate service URLs.
+*.hypervrecoverymanager.windowsazure.us | **Used for VMware agentless migration**<br><br> Connect to Azure Migrate service URLs.
+*.blob.core.usgovcloudapi.net | **Used for VMware agentless migration**<br><br>Upload data to storage for migration.
*.applicationinsights.us | Upload appliance logs used for internal monitoring. + ### Public cloud URLs for private link connectivity The appliance needs access to the following URLs (directly or via proxy) over and above private link access.
The appliance needs access to the following URLs (directly or via proxy) over an
**URL** | **Details** | | *.portal.azure.com | Navigate to the Azure portal.
-*.windows.net <br/> *.msftauth.net <br/> *.msauth.net <br/> *.microsoft.com <br/> *.live.com <br/> *.office.com <br/> *.microsoftonline.com <br/> *.microsoftonline-p.com <br/> *.microsoftazuread-sso.com | Used for access control and identity management by Azure Active Directory
+*.windows.net <br> *.msftauth.net <br> *.msauth.net <br> *.microsoft.com <br> *.live.com <br> *.office.com <br> *.microsoftonline.com <br> *.microsoftonline-p.com <br> *.microsoftazuread-sso.com | Used for access control and identity management by Azure Active Directory
management.azure.com | Used for resource deployments and management operations *.services.visualstudio.com (optional) | Upload appliance logs used for internal monitoring. aka.ms/* (optional) | Allow access to these links; used to download and install the latest updates for appliance services. download.microsoft.com/download | Allow downloads from Microsoft download center.
-*.servicebus.windows.net | **Used for VMware agentless migration**<br/><br/> Communication between the appliance and the Azure Migrate service.
-*.hypervrecoverymanager.windowsazure.com | **Used for VMware agentless migration**<br/><br/> Connect to Azure Migrate service URLs.
-*.blob.core.windows.net | **Used for VMware agentless migration**<br/><br/>Upload data to storage for migration. <br/>This is optional and is not required if the storage accounts (both cache storage account and gateway storage account) have a private endpoint attached.
+*.servicebus.windows.net | **Used for VMware agentless migration**<br><br> Communication between the appliance and the Azure Migrate service.
+*.hypervrecoverymanager.windowsazure.com | **Used for VMware agentless migration**<br><br> Connect to Azure Migrate service URLs.
+*.blob.core.windows.net | **Used for VMware agentless migration**<br><br>Upload data to storage for migration. <br>This is optional and is not required if the storage accounts (both cache storage account and gateway storage account) have a private endpoint attached.
+
+### Azure China 21Vianet (Azure China) URLs
+
+**URL** | **Details**
+ | |
+*.portal.azure.cn | Navigate to the Azure portal.
+graph.chinacloudapi.cn | Sign in to your Azure subscription.
+login.microsoftonline.cn | Used for access control and identity management by Azure Active Directory.
+management.chinacloudapi.cn | Used for resource deployments and management operations.

+*.services.visualstudio.com | Upload appliance logs used for internal monitoring.
+*.vault.chinacloudapi.cn | Manage secrets in the Azure Key Vault.
+aka.ms/* | Allow access to these links; used to download and install the latest updates for appliance services.
+download.microsoft.com/download | Allow downloads from Microsoft download center.
+*.servicebus.chinacloudapi.cn | Communication between the appliance and the Azure Migrate service.
+*.discoverysrv.cn2.windowsazure.cn <br> *.cn2.prod.migration.windowsazure.cn | Connect to Azure Migrate service URLs.
+*.cn2.hypervrecoverymanager.windowsazure.cn | **Used for VMware agentless migration.** Connect to Azure Migrate service URLs.
+*.blob.core.chinacloudapi.cn | **Used for VMware agentless migration.**<br><br>Upload data to storage for migration.
+*.applicationinsights.azure.cn | Upload appliance logs used for internal monitoring.
## Collected data - VMware
Here's the software inventory data that the appliance collects from each Windows
**Data** | **Registry Location** | **Key** | |
-Application Name | HKLM:\Software\Microsoft\Windows\CurrentVersion\Uninstall\* <br/> HKLM:\Software\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\* | DisplayName
-Version | HKLM:\Software\Microsoft\Windows\CurrentVersion\Uninstall\* <br/> HKLM:\Software\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\* | DisplayVersion
-Provider | HKLM:\Software\Microsoft\Windows\CurrentVersion\Uninstall\* <br/> HKLM:\Software\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\* | Publisher
+Application Name | HKLM:\Software\Microsoft\Windows\CurrentVersion\Uninstall\* <br> HKLM:\Software\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\* | DisplayName
+Version | HKLM:\Software\Microsoft\Windows\CurrentVersion\Uninstall\* <br> HKLM:\Software\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\* | DisplayVersion
+Provider | HKLM:\Software\Microsoft\Windows\CurrentVersion\Uninstall\* <br> HKLM:\Software\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\* | Publisher
#### Windows server features data
Here's the web apps configuration data that the appliance collects from each Win
**Entity** | **Data** |
-Web apps | Application Name <br/>Configuration Path <br/>Frontend Bindings <br/>Enabled Frameworks <br/>Hosting Web Server<br/>Sub-Applications and virtual applications <br/>Application Pool name <br/>Runtime version <br/>Managed pipeline mode
-Web server | Server Name <br/>Server Type (currently only IIS) <br/>Configuration Location <br/>Version <br/>FQDN <br/>Credentials used for discovery <br/>List of Applications
+Web apps | Application Name <br>Configuration Path <br>Frontend Bindings <br>Enabled Frameworks <br>Hosting Web Server<br>Sub-Applications and virtual applications <br>Application Pool name <br>Runtime version <br>Managed pipeline mode
+Web server | Server Name <br>Server Type (currently only IIS) <br>Configuration Location <br>Version <br>FQDN <br>Credentials used for discovery <br>List of Applications
#### Windows server operating system data
Here's the operating system data that the appliance collects from each Linux ser
**Data** | **Commands** |
-Name <br/> version | Gathered from one or more of the following files:<br/> <br/>/etc/os-release <br> /usr/lib/os-release <br> /etc/enterprise-release <br> /etc/redhat-release <br> /etc/oracle-release <br> /etc/SuSE-release <br> /etc/lsb-release <br> /etc/debian_version
+Name <br> version | Gathered from one or more of the following files:<br> <br>/etc/os-release <br> /usr/lib/os-release <br> /etc/enterprise-release <br> /etc/redhat-release <br> /etc/oracle-release <br> /etc/SuSE-release <br> /etc/lsb-release <br> /etc/debian_version
Architecture | uname ### SQL Server instances and databases data
Here's the server performance data that the appliance collects and sends to Azur
**Performance counter class** | **Counter** | **Assessment impact** | | Hyper-V Hypervisor Virtual Processor | % Guest Run Time | Recommended server size/cost
-Hyper-V Dynamic Memory Server | Current Pressure (%)<br/> Guest Visible Physical Memory (MB) | Recommended server size/cost
+Hyper-V Dynamic Memory Server | Current Pressure (%)<br> Guest Visible Physical Memory (MB) | Recommended server size/cost
Hyper-V Virtual Storage Device | Read Bytes/Second | Calculation for disk size, storage cost, server size Hyper-V Virtual Storage Device | Write Bytes/Second | Calculation for disk size, storage cost, server size Hyper-V Virtual Network Adapter | Bytes Received/Second | Calculation for server size
Here's the full list of Linux server metadata that the appliance collects and se
FQDN | cat /proc/sys/kernel/hostname, hostname -f Processor core count | cat/proc/cpuinfo \| awk '/^processor/{print $3}' \| wc -l Memory allocated | cat /proc/meminfo \| grep MemTotal \| awk '{printf "%.0f", $2/1024}'
-BIOS serial number | lshw \| grep "serial:" \| head -n1 \| awk '{print $2}' <br/> /usr/sbin/dmidecode -t 1 \| grep 'Serial' \| awk '{ $1="" ; $2=""; print}'
+BIOS serial number | lshw \| grep "serial:" \| head -n1 \| awk '{print $2}' <br> /usr/sbin/dmidecode -t 1 \| grep 'Serial' \| awk '{ $1="" ; $2=""; print}'
BIOS GUID | cat /sys/class/dmi/id/product_uuid Boot type | [ -d /sys/firmware/efi ] && echo EFI \|\| echo BIOS
-OS name/version | We access these files for the OS version and name:<br/><br/> /etc/os-release<br/> /usr/lib/os-release <br/> /etc/enterprise-release <br/> /etc/redhat-release<br/> /etc/oracle-release<br/> /etc/SuSE-release<br/> /etc/lsb-release <br/> /etc/debian_version
+OS name/version | We access these files for the OS version and name:<br><br> /etc/os-release<br> /usr/lib/os-release <br> /etc/enterprise-release <br> /etc/redhat-release<br> /etc/oracle-release<br> /etc/SuSE-release<br> /etc/lsb-release <br> /etc/debian_version
OS architecture | uname -m Disk count | fdisk -l \| egrep 'Disk.*bytes' \| awk '{print $2}' \| cut -f1 -d ':' Boot disk | df /boot \| sed -n 2p \| awk '{print $1}'
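The pipelines in the table above can be run directly on any Linux server to see what the appliance would collect. The sketch below shows the os-release style parsing in a self-contained way: it creates a local `sample-os-release` file with made-up values rather than reading the host's real `/etc/os-release`, so nothing on the machine is touched and the output is illustrative only:

```shell
# Self-contained sketch: parse name/version the way an os-release file is read.
# sample-os-release is a local stand-in for /etc/os-release with made-up values.
cat > sample-os-release <<'EOF'
NAME="Ubuntu"
VERSION_ID="20.04"
PRETTY_NAME="Ubuntu 20.04 LTS"
EOF

# os-release files are shell-sourceable key=value pairs, so source them
# in a subshell and echo the variables of interest:
name=$(. ./sample-os-release; echo "$NAME")
version=$(. ./sample-os-release; echo "$VERSION_ID")
echo "$name $version"   # → Ubuntu 20.04
```

The fallback files listed in the table (`/etc/redhat-release`, `/etc/SuSE-release`, and so on) are plain text rather than key=value pairs, which is why the appliance checks several of them.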
migrate Migrate Replication Appliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-replication-appliance.md
Free disk space (cache) | 600 GB
Free disk space (retention disk) | 600 GB **Software settings** | Operating system | Windows Server 2016 or Windows Server 2012 R2
-License | The appliance comes with a Windows Server 2016 evaluation license, which is valid for 180 days.<br/><br/> If the evaluation period is close to expiry, we recommend that you download and deploy a new appliance, or that you activate the operating system license of the appliance VM.
+License | The appliance comes with a Windows Server 2016 evaluation license, which is valid for 180 days. <br>If the evaluation period is close to expiry, we recommend that you download and deploy a new appliance, or that you activate the operating system license of the appliance VM.
Operating system locale | English (en-us) TLS | TLS 1.2 should be enabled. .NET Framework | .NET Framework 4.6 or later should be installed on the machine (with strong cryptography enabled).
-MySQL | MySQL should be installed on the appliance.<br/> MySQL should be installed. You can install manually, or Azure Migrate can install it during appliance deployment.
+MySQL | MySQL should be installed on the appliance. <br> You can install it manually, or Azure Migrate can install it during appliance deployment.
Other apps | Don't run other apps on the replication appliance. Windows Server roles | Don't enable these roles: <br> - Active Directory Domain Services <br>- Internet Information Services <br> - Hyper-V
-Group policies | Don't enable these group policies: <br> - Prevent access to the command prompt. <br> - Prevent access to registry editing tools. <br> - Trust logic for file attachments. <br> - Turn on Script Execution. <br> [Learn more](/previous-versions/windows/it-pro/windows-7/gg176671(v=ws.10))
+Group policies | Don't enable these group policies: <br> - Prevent access to the command prompt. <br> - Prevent access to registry editing tools. <br> - Trust logic for file attachments. <br> - Turn on Script Execution. <br> [Learn more](/previous-versions/windows/it-pro/windows-7/gg176671(v=ws.10)).
IIS | - No pre-existing default website <br> - No pre-existing website/application listening on port 443 <br>- Enable [anonymous authentication](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc731244(v=ws.10)) <br> - Enable [FastCGI](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc753077(v=ws.10)) setting **Network settings** | IP address type | Static
MySQL must be installed on the replication appliance machine. It can be installe
**Method** | **Details** |
-Download and install manually | Download MySQL application & place it in the folder C:\Temp\ASRSetup, then install manually.<br/> When you set up the appliance MySQL will show as already installed.
+Download and install manually | Download the MySQL application and place it in the folder C:\Temp\ASRSetup, then install it manually.<br> When you set up the appliance, MySQL will show as already installed.
Without online download | Place the MySQL installer application in the folder C:\Temp\ASRSetup. When you install the appliance and select download and install MySQL, setup will use the installer you added. Download and install in Azure Migrate | When you install the appliance and are prompted for MySQL, select **Download and install**.
The replication appliance needs access to these URLs in the Azure public cloud.
**URL** | **Details** |
-\*.backup.windowsazure.com | Used for replicated data transfer and coordination
-\*.store.core.windows.net | Used for replicated data transfer and coordination
-\*.blob.core.windows.net | Used to access storage account that stores replicated data
-\*.hypervrecoverymanager.windowsazure.com | Used for replication management operations and coordination
-https:\//management.azure.com | Used for replication management operations and coordination
-*.services.visualstudio.com | Used for logging purposes (It is optional)
+*.backup.windowsazure.com | Used for replicated data transfer and coordination
+*.store.core.windows.net | Used for replicated data transfer and coordination
+*.blob.core.windows.net | Used to access storage account that stores replicated data
+*.hypervrecoverymanager.windowsazure.com | Used for replication management operations and coordination
+https://management.azure.com | Used for replication management operations and coordination.
+*.services.visualstudio.com | Used for logging purposes (optional).
time.windows.com | Used to check time synchronization between system and global time.
-https:\//login.microsoftonline.com <br/> https:\//secure.aadcdn.microsoftonline-p.com <br/> https:\//login.live.com <br/> https:\//graph.windows.net <br/> https:\//login.windows.net <br/> https:\//www.live.com <br/> https:\//www.microsoft.com | Appliance setup needs access to these URLs. They are used for access control and identity management by Azure Active Directory
-https:\//dev.mysql.com/get/Downloads/MySQLInstaller/mysql-installer-community-5.7.20.0.msi | To complete MySQL download. In a few regions, the download might be redirected to the CDN URL. Ensure that the CDN URL is also allowed if needed.
-
+https://login.microsoftonline.com <br> https://secure.aadcdn.microsoftonline-p.com <br> https://login.live.com <br> https://graph.windows.net <br> https://login.windows.net <br> https://www.live.com <br> https://www.microsoft.com | Appliance setup needs access to these URLs. They are used for access control and identity management by Azure Active Directory.
+https://dev.mysql.com/get/Downloads/MySQLInstaller/mysql-installer-community-5.7.20.0.msi | To complete MySQL download. In a few regions, the download might be redirected to the CDN URL. Ensure that the CDN URL is also allowed if needed.
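The wildcard entries in the table above can be checked against a concrete hostname with standard glob matching. Below is a minimal sketch of how such an allow-list is typically interpreted by a proxy or firewall; the helper name and the pattern subset are illustrative only, not part of any Azure tooling:

```python
from fnmatch import fnmatch

# Illustrative subset of the allow-list from the table above (hostnames only,
# without URL schemes). A leading "*." matches any subdomain.
ALLOWED_PATTERNS = [
    "*.backup.windowsazure.com",
    "*.store.core.windows.net",
    "*.blob.core.windows.net",
    "*.hypervrecoverymanager.windowsazure.com",
    "management.azure.com",
    "*.services.visualstudio.com",
    "time.windows.com",
]

def is_allowed(hostname: str) -> bool:
    """Return True if the hostname matches any allow-listed pattern."""
    return any(fnmatch(hostname, pattern) for pattern in ALLOWED_PATTERNS)

print(is_allowed("mydata.blob.core.windows.net"))  # prints True
print(is_allowed("evil.example.com"))              # prints False
```

Note that `*.blob.core.windows.net` does not match the bare apex `blob.core.windows.net`; exact hostnames such as `management.azure.com` must be listed separately.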
## Azure Government URL access
The replication appliance needs access to these URLs in Azure Government.
**URL** | **Details** |
-\*.backup.windowsazure.us | Used for replicated data transfer and coordination
-\*.store.core.windows.net | Used for replicated data transfer and coordination
-\*.blob.core.windows.net | Used to access storage account that stores replicated data
-\*.hypervrecoverymanager.windowsazure.us | Used for replication management operations and coordination
-https:\//management.usgovcloudapi.net | Used for replication management operations and coordination
+*.backup.windowsazure.us | Used for replicated data transfer and coordination
+*.store.core.windows.net | Used for replicated data transfer and coordination
+*.blob.core.windows.net | Used to access storage account that stores replicated data
+*.hypervrecoverymanager.windowsazure.us | Used for replication management operations and coordination
+https://management.usgovcloudapi.net | Used for replication management operations and coordination
*.services.visualstudio.com | Used for logging purposes (optional)
time.nist.gov | Used to check time synchronization between system and global time.
-https:\//login.microsoftonline.com <br/> https:\//secure.aadcdn.microsoftonline-p.com <br/> https:\//login.live.com <br/> https:\//graph.windows.net <br/> https:\//login.windows.net <br/> https:\//www.live.com <br/> https:\//www.microsoft.com | Appliance setup with OVA needs access to these URLs. They are used for access control and identity management by Azure Active Directory.
-https:\//dev.mysql.com/get/Downloads/MySQLInstaller/mysql-installer-community-5.7.20.0.msi | To complete MySQL download. In a few regions, the download might be redirected to the CDN URL. Ensure that the CDN URL is also allowed if needed.
+https://login.microsoftonline.com <br> https://secure.aadcdn.microsoftonline-p.com <br> https://login.live.com <br> https://graph.windows.net <br> https://login.windows.net <br> https://www.live.com <br> https://www.microsoft.com | Appliance setup with OVA needs access to these URLs. They are used for access control and identity management by Azure Active Directory.
+https://dev.mysql.com/get/Downloads/MySQLInstaller/mysql-installer-community-5.7.20.0.msi | To complete MySQL download. In a few regions, the download might be redirected to the CDN URL. Ensure that the CDN URL is also allowed if needed.
> [!NOTE]
-> If you Migrate project has private endpoint connectivity, you will need access to following URLs over and above private link access:
+> If your Migrate project has private endpoint connectivity, you will need access to the following URLs in addition to private link access:
> - *.blob.core.windows.net - To access the storage account that stores replicated data. This is optional and not required if the storage account has a private endpoint attached.
-> - https:\//management.azure.com for replication management operations and coordination.
->- https:\//login.microsoftonline.com <br/>https:\//login.windows.net <br/> https:\//www.live.com _and_ <br/> https:\//www.microsoft.com for access control and identity management by Azure Active Directory
+> - https://management.azure.com for replication management operations and coordination.
+> - https://login.microsoftonline.com <br> https://login.windows.net <br> https://www.live.com and <br> https://www.microsoft.com for access control and identity management by Azure Active Directory.
+
+## Azure China 21Vianet (Azure China) URL access
+
+The replication appliance needs access to these URLs.
+
+**URL** | **Details**
+--- | ---
+*.backup.windowsazure.cn | Used for replicated data transfer and coordination.
+*.store.core.chinacloudapi.cn | Used for replicated data transfer and coordination.
+*.blob.core.chinacloudapi.cn | Used to access storage account that stores replicated data.
+*.hypervrecoverymanager.windowsazure.cn | Used for replication management operations and coordination.
+https://management.chinacloudapi.cn | Used for replication management operations and coordination.
+*.services.visualstudio.com | Used for logging purposes (optional).
+time.windows.cn | Used to check time synchronization between system and global time.
+https://login.microsoftonline.cn <br> https://secure.aadcdn.microsoftonline-p.cn <br> https://login.live.com <br> https://graph.chinacloudapi.cn <br> https://login.chinacloudapi.cn <br> https://www.live.com <br> https://www.microsoft.com | Appliance setup with OVA needs access to these URLs. They are used for access control and identity management by Azure Active Directory.
+https://dev.mysql.com/get/Downloads/MySQLInstaller/mysql-installer-community-5.7.20.0.msi | To complete MySQL download. In a few regions, the download might be redirected to the CDN URL. Ensure that the CDN URL is also allowed if needed.
## Port access
migrate Migrate Support Matrix Hyper V Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-hyper-v-migration.md
You can select up to 10 VMs at once for replication. If you want to migrate more
| **Linux boot** | If /boot is on a dedicated partition, it should reside on the OS disk, and not be spread across multiple disks.<br/> If /boot is part of the root (/) partition, then the '/' partition should be on the OS disk, and not span other disks. |
| **UEFI boot** | Supported. UEFI-based VMs will be migrated to Azure generation 2 VMs. |
| **UEFI - Secure boot** | Not supported for migration.|
-| **Disk size** | Up to 2 TB OS disk, 4 TB for the data.|
+| **Disk size** | Up to 2 TB OS disk for gen 1 VM; up to 4 TB OS disk for gen 2 VM; 32 TB for data disks. <br><br> For existing Azure Migrate projects, you may need to upgrade the replication provider on the Hyper-V host to the latest version to replicate large disks up to 32 TB.|
| **Disk number** | A maximum of 16 disks per VM.|
| **Encrypted disks/volumes** | Not supported for migration.|
| **RDM/passthrough disks** | Not supported for migration.|
migrate Migrate Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix.md
For Azure Migrate to work with Azure you need these permissions before you start
**Task** | **Permissions** | **Details**
--- | --- | ---
Create a project | Your Azure account needs permissions to create a project. | Set up for [VMware](./tutorial-discover-vmware.md#prepare-an-azure-user-account), [Hyper-V](./tutorial-discover-hyper-v.md#prepare-an-azure-user-account), or [physical servers](./tutorial-discover-physical.md#prepare-an-azure-user-account).
-Register the Azure Migrate appliance| Azure Migrate uses a lightweight [Azure Migrate appliance](migrate-appliance.md) to discover and assess servers with Azure Migrate: Discovery and assessment, and to run [agentless migration](server-migrate-overview.md) of VMware VMs with Azure Migrate: Server Migration. This appliance discovers servers, and sends metadata and performance data to Azure Migrate.<br/><br/> During registration, register providers (Microsoft.OffAzure, Microsoft.Migrate, and Microsoft.KeyVault) are registered with the subscription chosen in the appliance, so that the subscription works with the resource provider. To register, you need Contributor or Owner access on the subscription.<br/><br/> **VMware**-During onboarding, Azure Migrate creates two Azure Active Directory (Azure AD) apps. The first app communicates between the appliance agents and the Azure Migrate service. The app doesn't have permissions to make Azure resource management calls or have Azure RBAC access for resources. The second app accesses an Azure Key Vault created in the user subscription for agentless VMware migration only. In agentless migration, Azure Migrate creates a Key Vault to manage access keys to the replication storage account in your subscription. It has Azure RBAC access on the Azure Key Vault (in the customer tenant) when discovery is initiated from the appliance.<br/><br/> **Hyper-V**-During onboarding. Azure Migrate creates one Azure AD app. The app communicates between the appliance agents and the Azure Migrate service. The app doesn't have permissions to make Azure resource management calls or have Azure RBAC access for resources. | Set up for [VMware](./tutorial-discover-vmware.md#prepare-an-azure-user-account), [Hyper-V](./tutorial-discover-hyper-v.md#prepare-an-azure-user-account), or [physical servers](./tutorial-discover-physical.md#prepare-an-azure-user-account).
+Register the Azure Migrate appliance| Azure Migrate uses a lightweight [Azure Migrate appliance](migrate-appliance.md) to discover and assess servers with Azure Migrate: Discovery and assessment, and to run [agentless migration](server-migrate-overview.md) of VMware VMs with Azure Migrate: Server Migration. This appliance discovers servers, and sends metadata and performance data to Azure Migrate.<br><br> During registration, the resource providers Microsoft.OffAzure, Microsoft.Migrate, and Microsoft.KeyVault are registered with the subscription chosen in the appliance, so that the subscription works with the resource providers. To register, you need Contributor or Owner access on the subscription.<br><br> **VMware** - During onboarding, Azure Migrate creates two Azure Active Directory (Azure AD) apps. The first app communicates between the appliance agents and the Azure Migrate service. The app doesn't have permissions to make Azure resource management calls or have Azure RBAC access for resources. The second app accesses an Azure Key Vault created in the user subscription for agentless VMware migration only. In agentless migration, Azure Migrate creates a Key Vault to manage access keys to the replication storage account in your subscription. It has Azure RBAC access on the Azure Key Vault (in the customer tenant) when discovery is initiated from the appliance.<br><br> **Hyper-V** - During onboarding, Azure Migrate creates one Azure AD app. The app communicates between the appliance agents and the Azure Migrate service. The app doesn't have permissions to make Azure resource management calls or have Azure RBAC access for resources. | Set up for [VMware](./tutorial-discover-vmware.md#prepare-an-azure-user-account), [Hyper-V](./tutorial-discover-hyper-v.md#prepare-an-azure-user-account), or [physical servers](./tutorial-discover-physical.md#prepare-an-azure-user-account).
Create a key vault for VMware agentless migration | To migrate VMware VMs with agentless Azure Migrate: Server Migration, Azure Migrate creates a Key Vault to manage access keys to the replication storage account in your subscription. To create the vault, you set permissions (Owner, or Contributor and User Access Administrator) on the resource group where the project resides. | [Set up](./tutorial-discover-vmware.md#prepare-an-azure-user-account) permissions.
-## Supported geographies (Public cloud)
+## Supported geographies
+
+### Public cloud
You can create a project in many geographies in the public cloud.
United States | Central US or West US 2
> [!NOTE]
> For the Switzerland geography, Switzerland West is available only for REST API users and needs an approved subscription.
-## Supported geographies (Azure Government)
+### Azure Government
**Task** | **Geography** | **Details** | |
Create project | United States | Metadata is stored in US Gov Arizona, US Gov Vi
Target assessment | United States | Target regions: US Gov Arizona, US Gov Virginia, US Gov Texas
Target replication | United States | Target regions: US DoD Central, US DoD East, US Gov Arizona, US Gov Iowa, US Gov Texas, US Gov Virginia
+### Azure China 21Vianet (Azure China)
+
+**Geography** | **Metadata storage location**
+--- | ---
+Azure China | China North 2
## VMware assessment and migration

[Review](migrate-support-matrix-vmware.md) the Azure Migrate: Discovery and assessment and Azure Migrate: Server Migration support matrix for VMware VMs.
migrate Resources Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/resources-faq.md
Choose your tool based on what you want to do:
## Which geographies are supported?
-Review the supported geographies for [public](migrate-support-matrix.md#supported-geographies-public-cloud) and [government clouds](migrate-support-matrix.md#supported-geographies-azure-government).
+Review the supported geographies for [public](migrate-support-matrix.md#public-cloud) and [government clouds](migrate-support-matrix.md#azure-government).
## How do I get started?
migrate Troubleshoot Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-project.md
Finding an existing Azure Migrate project depends upon whether you're using the
## Can't find a geography
-You can create an Azure Migrate project in supported geographies for [public](migrate-support-matrix.md#supported-geographies-public-cloud) and [government clouds](migrate-support-matrix.md#supported-geographies-azure-government).
+You can create an Azure Migrate project in supported geographies for [public](migrate-support-matrix.md#public-cloud) and [government clouds](migrate-support-matrix.md#azure-government).
## What are VM limits?
migrate Tutorial Assess Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-aws.md
Run an assessment as follows:
1. In **Assessment properties** > **Target Properties**:
    - In **Target location**, specify the Azure region to which you want to migrate.
    - Size and cost recommendations are based on the location that you specify. Once you change the target location from default, you will be prompted to specify **Reserved Instances** and **VM series**.
- - In Azure Government, you can target assessments in [these regions](migrate-support-matrix.md#supported-geographies-azure-government)
+ - In Azure Government, you can target assessments in [these regions](migrate-support-matrix.md#azure-government)
    - In **Storage type**,
        - If you want to use performance-based data in the assessment, select **Automatic** for Azure Migrate to recommend a storage type, based on disk IOPS and throughput.
        - Alternatively, select the storage type you want to use for VM when you migrate it.
migrate Tutorial Assess Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-gcp.md
Run an assessment as follows:
1. In **Assessment properties** > **Target Properties**:
    - In **Target location**, specify the Azure region to which you want to migrate.
    - Size and cost recommendations are based on the location that you specify. Once you change the target location from default, you will be prompted to specify **Reserved Instances** and **VM series**.
- - In Azure Government, you can target assessments in [these regions](migrate-support-matrix.md#supported-geographies-azure-government)
+ - In Azure Government, you can target assessments in [these regions](migrate-support-matrix.md#azure-government)
    - In **Storage type**,
        - If you want to use performance-based data in the assessment, select **Automatic** for Azure Migrate to recommend a storage type, based on disk IOPS and throughput.
        - Alternatively, select the storage type you want to use for VM when you migrate it.
migrate Tutorial Assess Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-hyper-v.md
Run an assessment as follows:
1. In **Assessment properties** > **Target Properties**:
    - In **Target location**, specify the Azure region to which you want to migrate.
    - Size and cost recommendations are based on the location that you specify. Once you change the target location from default, you will be prompted to specify **Reserved Instances** and **VM series**.
- - In Azure Government, you can target assessments in [these regions](migrate-support-matrix.md#supported-geographies-azure-government)
+ - In Azure Government, you can target assessments in [these regions](migrate-support-matrix.md#azure-government)
    - In **Storage type**,
        - If you want to use performance-based data in the assessment, select **Automatic** for Azure Migrate to recommend a storage type, based on disk IOPS and throughput.
        - Alternatively, select the storage type you want to use for VM when you migrate it.
migrate Tutorial Assess Physical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-physical.md
Run an assessment as follows:
1. In **Assessment properties** > **Target Properties**:
    - In **Target location**, specify the Azure region to which you want to migrate.
    - Size and cost recommendations are based on the location that you specify. Once you change the target location from default, you will be prompted to specify **Reserved Instances** and **VM series**.
- - In Azure Government, you can target assessments in [these regions](migrate-support-matrix.md#supported-geographies-azure-government)
+ - In Azure Government, you can target assessments in [these regions](migrate-support-matrix.md#azure-government)
    - In **Storage type**,
        - If you want to use performance-based data in the assessment, select **Automatic** for Azure Migrate to recommend a storage type, based on disk IOPS and throughput.
        - Alternatively, select the storage type you want to use for VM when you migrate it.
migrate Tutorial Assess Vmware Azure Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-vmware-azure-vm.md
Run an assessment as follows:
1. In **Assessment properties** > **Target Properties**:
    - In **Target location**, specify the Azure region to which you want to migrate.
    - Size and cost recommendations are based on the location that you specify. Once you change the target location from default, you will be prompted to specify **Reserved Instances** and **VM series**.
- - In Azure Government, you can target assessments in [these regions](migrate-support-matrix.md#supported-geographies-azure-government)
+ - In Azure Government, you can target assessments in [these regions](migrate-support-matrix.md#azure-government)
    - In **Storage type**,
        - If you want to use performance-based data in the assessment, select **Automatic** for Azure Migrate to recommend a storage type, based on disk IOPS and throughput.
        - Alternatively, select the storage type you want to use for VM when you migrate it.
migrate Tutorial Discover Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-aws.md
Set up a new project.
2. Under **Services**, select **Azure Migrate**.
3. In **Overview**, select **Create project**.
5. In **Create project**, select your Azure subscription and resource group. Create a resource group if you don't have one.
-6. In **Project Details**, specify the project name and the geography in which you want to create the project. Review supported geographies for [public](migrate-support-matrix.md#supported-geographies-public-cloud) and [government clouds](migrate-support-matrix.md#supported-geographies-azure-government).
+6. In **Project Details**, specify the project name and the geography in which you want to create the project. Review supported geographies for [public](migrate-support-matrix.md#public-cloud) and [government clouds](migrate-support-matrix.md#azure-government).
![Boxes for project name and region](./media/tutorial-discover-aws/new-project.png)
migrate Tutorial Discover Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-gcp.md
Set up a new project.
2. Under **Services**, select **Azure Migrate**.
3. In **Overview**, select **Create project**.
4. In **Create project**, select your Azure subscription and resource group. Create a resource group if you don't have one.
-5. In **Project Details**, specify the project name and the geography in which you want to create the project. Review supported geographies for [public](migrate-support-matrix.md#supported-geographies-public-cloud) and [government clouds](migrate-support-matrix.md#supported-geographies-azure-government).
+5. In **Project Details**, specify the project name and the geography in which you want to create the project. Review supported geographies for [public](migrate-support-matrix.md#public-cloud) and [government clouds](migrate-support-matrix.md#azure-government).
![Boxes for project name and region](./media/tutorial-discover-gcp/new-project.png)
migrate Tutorial Discover Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-hyper-v.md
Set up a new project.
2. Under **Services**, select **Azure Migrate**.
3. In **Overview**, select **Create project**.
5. In **Create project**, select your Azure subscription and resource group. Create a resource group if you don't have one.
-6. In **Project Details**, specify the project name and the geography in which you want to create the project. Review supported geographies for [public](migrate-support-matrix.md#supported-geographies-public-cloud) and [government clouds](migrate-support-matrix.md#supported-geographies-azure-government).
+6. In **Project Details**, specify the project name and the geography in which you want to create the project. Review supported geographies for [public](migrate-support-matrix.md#public-cloud) and [government clouds](migrate-support-matrix.md#azure-government).
![Boxes for project name and region](./media/tutorial-discover-hyper-v/new-project.png)
migrate Tutorial Discover Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-import.md
Set up a new Azure Migrate project if you don't have one.
2. Under **Services**, select **Azure Migrate**.
3. In **Overview**, select **Create project**.
5. In **Create project**, select your Azure subscription and resource group. Create a resource group if you don't have one.
-6. In **Project Details**, specify the project name and the geography in which you want to create the project. Review supported geographies for [public](migrate-support-matrix.md#supported-geographies-public-cloud) and [government clouds](migrate-support-matrix.md#supported-geographies-azure-government).
+6. In **Project Details**, specify the project name and the geography in which you want to create the project. Review supported geographies for [public](migrate-support-matrix.md#public-cloud) and [government clouds](migrate-support-matrix.md#azure-government).
![Boxes for project name and region](./media/tutorial-discover-import/new-project.png)

> [!Note]
migrate Tutorial Discover Physical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-physical.md
Set up a new project.
2. Under **Services**, select **Azure Migrate**.
3. In **Overview**, select **Create project**.
5. In **Create project**, select your Azure subscription and resource group. Create a resource group if you don't have one.
-6. In **Project Details**, specify the project name and the geography in which you want to create the project. Review supported geographies for [public](migrate-support-matrix.md#supported-geographies-public-cloud) and [government clouds](migrate-support-matrix.md#supported-geographies-azure-government).
+6. In **Project Details**, specify the project name and the geography in which you want to create the project. Review supported geographies for [public](migrate-support-matrix.md#public-cloud) and [government clouds](migrate-support-matrix.md#azure-government).
![Boxes for project name and region](./media/tutorial-discover-physical/new-project.png)
migrate Tutorial Discover Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-vmware.md
To set up a new project:
1. In **Overview**, select one of the following options, depending on your migration goals: **Servers, databases and web apps**, **SQL Server (only)**, or **Explore more scenarios**.
1. Select **Create project**.
1. In **Create project**, select your Azure subscription and resource group. Create a resource group if you don't have one.
-1. In **Project Details**, specify the project name and the geography where you want to create the project. Review [supported geographies for public clouds](migrate-support-matrix.md#supported-geographies-public-cloud) and [supported geographies for government clouds](migrate-support-matrix.md#supported-geographies-azure-government).
+1. In **Project Details**, specify the project name and the geography where you want to create the project. Review [supported geographies for public clouds](migrate-support-matrix.md#public-cloud) and [supported geographies for government clouds](migrate-support-matrix.md#azure-government).
:::image type="content" source="./media/tutorial-discover-vmware/new-project.png" alt-text="Screenshot that shows how to add project details for a new Azure Migrate project.":::
migrate Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/whats-new.md
[Azure Migrate](migrate-services-overview.md) helps you to discover, assess, and migrate on-premises servers, apps, and data to the Microsoft Azure cloud. This article summarizes new releases and features in Azure Migrate.
+## Update (February 2022)
+- Azure Migrate is now supported in Azure China. [Learn more](/azure/chin#azure-operations-in-china).
+
## Update (December 2021)
- Support to discover, assess, and migrate VMs from multiple vCenter Servers using a single Azure Migrate appliance. [Learn more](tutorial-discover-vmware.md#start-continuous-discovery).
- Simplified [Azure VMware Solution assessment](./concepts-azure-vmware-solution-assessment-calculation.md) experience to understand sizing assumptions, resource utilization and limiting factor for migrating on-premises VMware VMs to Azure VMware Solution. Other enhancements added:
- Support for storage utilization parameter in storage sizing logic (only for discovery via a .csv file)

## Update (October 2021)
-- Azure Migrate now supports new public cloud geographies and regions. [Learn more](migrate-support-matrix.md#supported-geographies-public-cloud)
+- Azure Migrate now supports new public cloud geographies and regions. [Learn more](migrate-support-matrix.md#public-cloud).
## Update (September 2021)
-- Discover, assess, and migrate servers over a private network using [Azure Private Link.](../private-link/private-endpoint-overview.md) is now in preview in supported [government cloud geographies.](migrate-support-matrix.md#supported-geographies-azure-government) [Learn more](how-to-use-azure-migrate-with-private-endpoints.md)
+- Discovering, assessing, and migrating servers over a private network using [Azure Private Link](../private-link/private-endpoint-overview.md) is now in preview in supported [government cloud geographies](migrate-support-matrix.md#azure-government). [Learn more](how-to-use-azure-migrate-with-private-endpoints.md)
- Support to tag and add custom names to resources for agentless VMware VM migrations using PowerShell.
- Azure Migrate appliance: Option to remove servers from the physical servers discovery list.
## Update (June 2021)
-- Azure Migrate now supports new public cloud geographies and regions. [Learn more](migrate-support-matrix.md#supported-geographies-public-cloud)
+- Azure Migrate now supports new public cloud geographies and regions. [Learn more](migrate-support-matrix.md#public-cloud)
- Azure Migrate allows you to register servers running SQL server with SQL VM RP during replication to automatically install SQL IaaS Agent Extension. This feature is available for agentless VMware, agentless Hyper-V, and agent-based migrations.
- Import CSV file for assessment now supports up to 20 disks. Earlier it was limited to eight disks per server.
For more information, see [ASP.NET app containerization and migration to Azure K
## Update (June 2020)
-- Assessments for migrating on-premises VMware VMs to [Azure VMware Solution (AVS)](./concepts-azure-vmware-solution-assessment-calculation.md) is now supported. [Learn more](how-to-create-azure-vmware-solution-assessment.md)
+- Assessments for migrating on-premises VMware VMs to [Azure VMware Solution (AVS)](./concepts-azure-vmware-solution-assessment-calculation.md) are now supported. [Learn more](how-to-create-azure-vmware-solution-assessment.md)
- Support for multiple credentials on appliance for physical server discovery.
- Support to allow Azure login from appliance for tenant where tenant restriction has been configured.
Azure Migrate supports deployments in Azure Government.
- You can discover and assess VMware VMs, Hyper-V VMs, and physical servers.
- You can migrate VMware VMs, Hyper-V VMs, and physical servers to Azure.
- For VMware migration, you can use agentless or agent-based migration. [Learn more](server-migrate-overview.md).
-- [Review](migrate-support-matrix.md#supported-geographies-azure-government) supported geographies and regions for Azure Government.
+- [Review](migrate-support-matrix.md#azure-government) supported geographies and regions for Azure Government.
- [Agent-based dependency analysis](concepts-dependency-visualization.md#agent-based-analysis) isn't supported in Azure Government.
- Features in preview are supported in Azure Government: [agentless dependency analysis](concepts-dependency-visualization.md#agentless-analysis) and [application discovery](how-to-discover-applications.md).
postgresql Concepts Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-extensions.md
Using the [Azure portal](https://portal.azure.com):
4. Select extensions you wish to allow-list.

:::image type="content" source="./media/concepts-extensions/allow-list.png" alt-text="Screenshot showing Azure Database for PostgreSQL - allow-listing extensions for installation.":::
+Using the [Azure CLI](https://docs.microsoft.com/cli/azure/):
+
+You can allow-list extensions with the [az postgres flexible-server parameter set](https://docs.microsoft.com/cli/azure/postgres/flexible-server/parameter?view=azure-cli-latest&preserve-view=true) command:
+
+```bash
+az postgres flexible-server parameter set --resource-group <your resource group> --server-name <your server name> --subscription <your subscription id> --name azure.extensions --value <extension name>,<extension name>
+```
+
+Using an [ARM template](https://docs.microsoft.com/azure/azure-resource-manager/templates/):
+
+The example below allow-lists the extensions dblink, dict_xsyn, and pg_buffercache on the server mypostgreserver:
+```json
+{
+
+ "$schema": https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#,
+
+ "contentVersion": "1.0.0.0",
+
+ "parameters": {
+
+ "flexibleServers_name": {
+
+ "defaultValue": "mypostgreserver",
+
+ "type": "String"
+
+ },
+
+ "azure_extensions_set_value": {
+
+ "defaultValue": " dblink,dict_xsyn,pg_buffercache",
+
+ "type": "String"
+
+ }
+
+ },
+
+ "variables": {},
+
+ "resources": [
+
+ {
+
+ "type": "Microsoft.DBforPostgreSQL/flexibleServers/configurations",
+
+ "apiVersion": "2021-06-01",
+
+ "name": "[concat(parameters('flexibleServers_name'), '/azure.extensions')]",
+
+ "properties": {
+
+ "value": "[parameters('azure_extensions_set_value')]",
+
+ "source": "user-override"
+
+ }
+
+ }
+
+ ]
+
+}
++
+ ```
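Once the template is saved locally, it can be deployed with the Azure CLI. This is a sketch: the file name `allow-extensions.json` and the resource group name are placeholders for your own values.

```bash
# Deploy the allow-list configuration template to the server's resource group.
az deployment group create \
  --resource-group <your resource group> \
  --template-file allow-extensions.json
```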
+ After extensions are allow-listed, they must be installed in your database before you can use them. To install a particular extension, run the [CREATE EXTENSION](https://www.postgresql.org/docs/current/sql-createextension.html) command. This command loads the packaged objects into your database.
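For instance, after allow-listing dblink you could connect to your database and install it; dblink is used here only as an example, and any allow-listed extension works the same way:

```sql
-- Install an allow-listed extension into the current database.
CREATE EXTENSION dblink;

-- Optionally verify the installation via the pg_extension catalog.
SELECT extname, extversion FROM pg_extension WHERE extname = 'dblink';
```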
postgresql Concepts Distributed Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-distributed-data.md
Making the right choice is important for performance and functionality.
### Type 2: Reference tables
-A reference table is a type of distributed table whose entire
-contents are concentrated into a single shard. The shard is replicated on every worker. Queries on any worker can access the reference information locally, without the network overhead of requesting rows from another node. Reference tables have no distribution column
-because there's no need to distinguish separate shards per row.
+A reference table is a type of distributed table whose entire contents are
+concentrated into a single shard. The shard is replicated on every worker and
+the coordinator. Queries on any worker can access the reference information
+locally, without the network overhead of requesting rows from another node.
+Reference tables have no distribution column because there's no need to
+distinguish separate shards per row.
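As a sketch, a reference table is created in Hyperscale (Citus) with the `create_reference_table` function; the `states` table below is a hypothetical example:

```sql
-- A small lookup table that many queries join against.
CREATE TABLE states (
    code char(2) PRIMARY KEY,
    name text NOT NULL
);

-- Replicate the table to every node so joins against it stay local.
SELECT create_reference_table('states');
```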
Reference tables are typically small and are used to store data that's relevant to queries running on any worker node. An example is enumerated
private-link Private Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-overview.md
Over a private-endpoint connection, a private-link resource owner can:
An alias is a unique moniker that's generated when a service owner creates a private-link service behind a standard load balancer. Service owners can share this alias offline with consumers of your service.
-The consumers can request a connection to a private-link service by using either the resource URI or the alias. To connect by using the alias, create a private endpoint by using the manual connection approval method. To use the manual connection approval method, set the manual request parameter to *True* during the private-endpoint create flow. For more information, see [New-AzPrivateEndpoint](/powershell/module/az.network/new-azprivateendpoint) and [az network private-endpoint create](/cli/azure/network/private-endpoint#az_network_private_endpoint_create).
+The consumers can request a connection to a private-link service by using either the resource URI or the alias. To connect by using the alias, create a private endpoint by using the manual connection approval method. To use the manual connection approval method, set the manual request parameter to *True* during the private-endpoint create flow. For more information, see [New-AzPrivateEndpoint](/powershell/module/az.network/new-azprivateendpoint) and [az network private-endpoint create](/cli/azure/network/private-endpoint#az-network-private-endpoint-create).
> [!NOTE] > This manual request can be auto approved if the consumer's subscription is allow-listed on the provider side. To learn more, go to [controlling service access](./private-link-service-overview.md#control-service-access).
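As a sketch, the manual connection approval flow with the Azure CLI looks like the following; every resource name and the service alias are placeholders:

```bash
# Create a private endpoint against a private-link service alias,
# using the manual connection approval method.
az network private-endpoint create \
  --name myPrivateEndpoint \
  --resource-group myResourceGroup \
  --vnet-name myVirtualNetwork \
  --subnet mySubnet \
  --private-connection-resource-id "<service alias>" \
  --connection-name myConnection \
  --manual-request true
```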
purview Catalog Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-permissions.md
Azure Purview uses a set of predefined roles to control who can access what with
- **Collection administrator** - a role for users that will need to assign roles to other users in Azure Purview or manage collections. Collection admins can add users to roles on collections where they're admins. They can also edit collections, their details, and add subcollections. - **Data curators** - a role that provides access to the data catalog to manage assets, configure custom classifications, set up glossary terms, and view insights. Data curators can create, read, modify, move, and delete assets. They can also apply annotations to assets. - **Data readers** - a role that provides read-only access to data assets, classifications, classification rules, collections and glossary terms.-- **Data source administrators** - a role that allows a user to manage data sources and scans. If a user is granted only to **Data source admin** role on a given data source, they can run new scans using an existing scan rule. To create new scan rules, the user must be also granted as either **Data reader** or **Data curator** roles.
+- **Data source administrator** - a role that allows a user to manage data sources and scans. If a user is granted only the **Data source admin** role on a given data source, they can run new scans using an existing scan rule. To create new scan rules, the user must also be granted either the **Data reader** or **Data curator** role.
- **Policy author (Preview)** - a role that allows a user to view, update, and delete Azure Purview policies through the policy management app within Azure Purview. > [!NOTE]
purview Concept Best Practices Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-best-practices-glossary.md
Title: Azure Purview glossary best practices description: This article provides examples of Azure Purview glossary best practices.-+
The adoption by your organization of the business glossary will depend on you pr
## Why is a common business glossary needed? Without a common business glossary, an organization's performance, culture, operations, and strategy are often adversely affected. In this hindrance, you will observe cultural differences that arise from an inconsistent business language. These inconsistencies in business language are passed between team members and prevent them from leveraging their relevant data assets as a competitive advantage.
-You will also observe when there are language barriers, in which, most organizations will spend more time pursuing non-productive and non-collaborative activates as they need to rely on more detailed interactions to reach the same meaning and understanding for their data assets.
+You will also observe that, where there are language barriers, most organizations spend more time on non-productive and non-collaborative activities, because they need to rely on more detailed interactions to reach the same meaning and understanding for their data assets.
## Recommendations for implementing new glossary terms
When building new term templates in Azure Purview, review the following consider
- When importing terms from a .CSV file, be sure that terms already existing in Azure Purview are intended to be updated. When using the import feature, Azure Purview will overwrite existing terms. - Before importing terms, test the import in a lab environment to ensure that no unexpected results occur, such as duplicate terms. - The email address for Stewards and Experts should be the primary address of the user from the Azure Active Directory group. Alternate email, user principal name and non-Azure Active Directory emails are not yet supported.-- Glossary terms provide fours status: draft, approved, expire, alert. Draft is not officially implemented, approved is official/stand/approved for production, expired means should no longer be used, alert need to pay more attention.
+- Glossary terms provide four statuses: Draft, Approved, Expired, and Alert. Draft means the term is not yet officially implemented, Approved means it is official/standard/approved for production, Expired means it should no longer be used, and Alert means it needs more attention.
For more information, see [Create, import, and export glossary terms](./how-to-create-import-export-glossary.md) ## Recommendations for exporting glossary terms
Exporting terms may be useful in Azure Purview account to account, Backup, or Di
- Manually, using Azure Purview Studio. - Using Bulk Edit mode to update up to 25 assets, using Azure Purview Studio. - Curated Code using the Atlas API.-- Use Bulk Edit Mode when assigning terms manually. This feature allows a curator to assign glossary terms, stewards, experts, and classifications in bulk based on selected items from a search result. Multiple searches can be chained by selecting objects in the results. The Bulk Edit will apply to all selected objects. Be sure to clear the selections after the bulk edit has been performed. -- Other bulk edit operations can be performed by using the Atlas API. An example would be using the API to add descriptions or other custom properties to assets in bulk programmatically
+- Use Bulk Edit Mode when assigning terms manually. This feature allows a curator to assign glossary terms, owners, experts, classifications, and certified status in bulk based on selected items from a search result. Multiple searches can be chained by selecting objects in the results. The Bulk Edit will apply to all selected objects. Be sure to clear the selections after the bulk edit has been performed.
+- Other bulk edit operations can be performed by using the Atlas API. An example would be using the API to add descriptions or other custom properties to assets in bulk programmatically.
## Next steps - [Create, import, and export glossary terms](./how-to-create-import-export-glossary.md)
purview How To Data Owner Policy Authoring Generic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-owner-policy-authoring-generic.md
Previously updated : 2/22/2022- Last updated : 3/02/2022
-# Authoring and publishing data owner access policies (preview)
-This tutorial describes how a data owner can create, update and publish access policies in Azure Purview.
+# Authoring and publishing data owner access policies (Preview)
++
+Access policies allow data owners to manage access to datasets from Azure Purview. Data owners can author policies directly from Azure Purview Studio, and then have those policies enforced by the data source.
+
+This tutorial describes how a data owner can create, update, and publish access policies in Azure Purview Studio.
## Prerequisites
-The following actions are needed before authoring access policies in Azure Purview:
-1. Configure permissions in the data source and in Azure Purview
-1. Register the data source in Azure Purview for Data Use Governance
-These tutorials list the pre-requisites of supported data sources
-- [Azure Storage](./tutorial-data-owner-policies-storage.md#configuration)-- [Resource Groups and Subscriptions](./tutorial-data-owner-policies-resource-group.md#configuration)
+### Required permissions
+
+>[!IMPORTANT]
+> - Currently, policy operations are only supported at **root collection level** and not child collection level.
+
+These permissions are required in Azure Purview at root collection level:
+- *Policy authors* role can create or edit policies.
+- *Data source administrator* role can publish a policy.
+
+For more information, see the guide on [managing Azure Purview role assignments](catalog-permissions.md#assign-permissions-to-your-users).
+
+### Data source configuration
+
+Before authoring data policies in Azure Purview Studio, you'll need to configure the data sources so that they can enforce those policies.
+
+1. Follow any policy-specific prerequisites for your source. Check the [Azure Purview supported data sources table](azure-purview-connector-overview.md#azure-purview-data-sources) and select the link in the **Access Policy** column for sources where access policies are available. Follow any steps listed in the Access policy or Prerequisites sections.
+1. Register the data source in Azure Purview. Follow the **Prerequisites** and **Register** sections of the [source pages](azure-purview-connector-overview.md) for your resources.
+1. [Enable the data use governance toggle on the data source](how-to-enable-data-use-governance.md#enable-data-use-governance). Additional permissions for this step are described in the linked document.
## Create a new policy This section describes the steps to create a new policy in Azure Purview.
-1. Sign in to Azure Purview Studio.
+1. Sign in to the [Azure Purview Studio](https://web.purview.azure.com/resource/).
1. Navigate to the **Data policy** feature using the left side panel. Then select **Data policies**. 1. Select the **New Policy** button in the policy page.
- ![Image shows how a data owner can access the Policy functionality in Azure Purview when it wants to create policies.](./media/access-policies-common/policy-onboard-guide-1.png)
+ :::image type="content" source="./media/access-policies-common/policy-onboard-guide-1.png" alt-text="Data owner can access the Policy functionality in Azure Purview when it wants to create policies.":::
1. The new policy page will appear. Enter the policy **Name** and **Description**. 1. To add policy statements to the new policy, select the **New policy statement** button. This will bring up the policy statement builder.
- ![Image shows how a data owner can create a new policy statement.](./media/access-policies-common/create-new-policy.png)
+ :::image type="content" source="./media/access-policies-common/create-new-policy.png" alt-text="Data owner can create a new policy statement.":::
1. Select the **Effect** button and choose *Allow* from the drop-down list.
This section describes the steps to create a new policy in Azure Purview.
- To create a broad policy statement that covers an entire data source, resource group, or subscription that was previously registered, use the **Data sources** box and select its **Type**. - To create a fine-grained policy, use the **Assets** box instead. Enter the **Data Source Type** and the **Name** of a previously registered and scanned data source. See example in the image.
- ![Image shows how a data owner can select a Data Resource when editing a policy statement.](./media/access-policies-common/select-data-source-type.png)
+ :::image type="content" source="./media/access-policies-common/select-data-source-type.png" alt-text="Data owner can select a Data Resource when editing a policy statement.":::
-1. Select the **Continue** button and transverse the hierarchy to select and underlying data-object (e.g. folder, file, etc). Select **Recursive** to apply the policy from that point in the hierarchy down to any child data-objects. Then select the **Add** button. This will take you back to the policy editor.
+1. Select the **Continue** button and traverse the hierarchy to select an underlying data object (for example, a folder or file). Select **Recursive** to apply the policy from that point in the hierarchy down to any child data objects. Then select the **Add** button. This will take you back to the policy editor.
- ![Image shows how a data owner can select the asset when creating or editing a policy statement.](./media/access-policies-common/select-asset.png)
+ :::image type="content" source="./media/access-policies-common/select-asset.png" alt-text="Data owner can select the asset when creating or editing a policy statement.":::
1. Select the **Subjects** button and enter the subject identity as a principal, group, or MSI. Then select the **OK** button. This will take you back to the policy editor.
- ![Image shows how a data owner can select the subject when creating or editing a policy statement.](./media/access-policies-common/select-subject.png)
+ :::image type="content" source="./media/access-policies-common/select-subject.png" alt-text="Data owner can select the subject when creating or editing a policy statement.":::
1. Repeat steps 5 through 11 to enter any more policy statements.
-1. Select the **Save** button to save the policy
+1. Select the **Save** button to save the policy.
-## Update or delete a policy
+Now that you have created your policy, you will need to publish it for it to become active.
-Steps to create a new policy in Azure Purview are as follows.
+## Publish a policy
-1. Sign in to Azure Purview Studio.
+A newly created policy is in the **draft** state. The process of publishing associates the new policy with one or more data sources under governance. This is called "binding" a policy to a data source.
-1. Navigate to the **Data policy** feature using the left side panel. Then select **Data policies**.
+The steps to publish a policy are as follows:
- ![Image shows how a data owner can access the Policy functionality in Azure Purview when it wants to update a policy.](./media/access-policies-common/policy-onboard-guide-2.png)
+1. Sign in to the [Azure Purview Studio](https://web.purview.azure.com/resource/).
-1. The Policy portal will present the list of existing policies in Azure Purview. Select the policy that needs to be updated.
+1. Navigate to the **Data policy** feature using the left side panel. Then select **Data policies**.
-1. The policy details page will appear, including Edit and Delete options. Select the **Edit** button, which brings up the policy statement builder. Now, any parts of the statements in this policy can be updated. To delete the policy, use the **Delete** button.
+ :::image type="content" source="./media/access-policies-common/policy-onboard-guide-2.png" alt-text="Data owner can access the Policy functionality in Azure Purview when it wants to update a policy by selecting 'Data policies'.":::
- ![Image shows how a data owner can edit or delete a policy statement.](./media/access-policies-common/edit-policy.png)
+1. The Policy portal will present the list of existing policies in Azure Purview. Locate the policy that needs to be published. Select the **Publish** button on the right top corner of the page.
-## Publish the policy
+ :::image type="content" source="./media/access-policies-common/publish-policy.png" alt-text="Data owner can publish a policy.":::
-A newly created policy is in the draft state. The process of publishing associates the new policy with one or more data sources under governance. This is called "binding" a policy to a data source.
+1. A list of data sources is displayed. You can enter a name to filter the list. Then, select each data source where this policy is to be published and then select the **Publish** button.
-The steps to publish a policy are as follows
+ :::image type="content" source="./media/access-policies-common/select-data-sources-publish-policy.png" alt-text="Data owner can select the data source where the policy will be published.":::
-1. Sign in to Azure Purview Studio.
+>[!Note]
+> After you make changes to a policy, you don't need to publish it again for the changes to take effect, as long as the data source(s) remain the same.
-1. Navigate to the **Data policy** feature using the left side panel. Then select **Data policies**.
+## Update or delete a policy
- ![Image shows how a data owner can access the Policy functionality in Azure Purview when it wants to publish a policy.](./media/access-policies-common/policy-onboard-guide-2.png)
+Steps to update or delete a policy in Azure Purview are as follows.
-1. The Policy portal will present the list of existing policies in Azure Purview. Locate the policy that needs to be published. Select the **Publish** button on the right top corner of the page.
+1. Sign in to the [Azure Purview Studio](https://web.purview.azure.com/resource/).
- ![Image shows how a data owner can publish a policy.](./media/access-policies-common/publish-policy.png)
+1. Navigate to the **Data policy** feature using the left side panel. Then select **Data policies**.
-1. A list of data sources is displayed. You can enter a name to filter the list. Then, select each data source where this policy is to be published and then select the **Publish** button.
+ :::image type="content" source="./media/access-policies-common/policy-onboard-guide-2.png" alt-text="Data owner can access the Policy functionality in Azure Purview when it wants to update a policy.":::
- ![Image shows how a data owner can select the data source where the policy will be published.](./media/access-policies-common/select-data-sources-publish-policy.png)
+1. The Policy portal will present the list of existing policies in Azure Purview. Select the policy that needs to be updated.
->[!Note]
-> - After making changes to a policy, there is no need to publish it again for it to take effect if the data source(s) continues to be the same.
+1. The policy details page will appear, including Edit and Delete options. Select the **Edit** button, which brings up the policy statement builder. Now, any parts of the statements in this policy can be updated. To delete the policy, use the **Delete** button.
+
+ :::image type="content" source="./media/access-policies-common/edit-policy.png" alt-text="Data owner can edit or delete a policy statement.":::
## Next steps
-Check blog, demo and related tutorials
-* [What's New in Azure Purview at Microsoft Ignite 2021](https://techcommunity.microsoft.com/t5/azure-purview/what-s-new-in-azure-purview-at-microsoft-ignite-2021/ba-p/2915954)
-* [Demo of data owner access policies for Azure Storage](https://www.youtube.com/watch?v=CFE8ltT19Ss)
-* [Enable Azure Purview data owner policies on all data sources in a subscription or a resource group](./tutorial-data-owner-policies-resource-group.md)
-* [Enable Azure Purview data owner policies on an Azure Storage account](./tutorial-data-owner-policies-storage.md)
+For specific guides on creating policies, you can follow these tutorials:
+
+- [Enable Azure Purview data owner policies on all data sources in a subscription or a resource group](./tutorial-data-owner-policies-resource-group.md)
+- [Enable Azure Purview data owner policies on an Azure Storage account](./tutorial-data-owner-policies-storage.md)
purview How To Enable Data Use Governance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-enable-data-use-governance.md
+
+ Title: Enabling data use governance on your Azure Purview sources
+description: Step-by-step guide on how to enable data use access for your registered sources.
+++++ Last updated : 3/02/2022+++
+# Enable data use governance on your Azure Purview sources
++
+*Data use governance* (DUG) is an option in the data source registration in Azure Purview. Its purpose is to make those data sources available in the policy authoring experience of Azure Purview Studio. In other words, access policies can only be written on data sources that have been previously registered and have the DUG toggle set to **Enabled**.
+
+## Prerequisites
+
+To enable the *Data use governance* (DUG) toggle for a data source, resource group, or subscription, the same user needs to have both certain IAM privileges on the resource and certain Azure Purview privileges.
+
+1) The user needs **one of the following** IAM role combinations on the resource:
+- IAM *Owner*
+- Both IAM *Contributor* + IAM *User Access Administrator*
+
+Follow this [guide to configure Azure RBAC role permissions](../role-based-access-control/check-access.md).
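As a quick sketch, you can check a user's existing role assignments on a resource with the Azure CLI; the assignee and scope values here are placeholders:

```bash
# List role assignments for a user at a given resource scope.
az role assignment list \
  --assignee "<user-object-id-or-upn>" \
  --scope "<resource-id>" \
  --output table
```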
+
+2) In addition, the same user needs the Azure Purview *Data source administrator* role at the root collection level. See the guide on [managing Azure Purview role assignments](catalog-permissions.md#assign-permissions-to-your-users).
+
+>[!IMPORTANT]
+> - Currently, policy operations are only supported at **root collection level** and not child collection level.
+
+## Enable Data use governance
+
+To enable *Data use governance* for a resource, the resource will first need to be registered in Azure Purview.
+To register a resource, follow the **Prerequisites** and **Register** sections of the [source pages](azure-purview-connector-overview.md) for your resources.
+
+Once you have your resource registered, follow the rest of the steps to enable an individual resource for *Data use governance*.
+
+1. Go to the [Azure Purview Studio](https://web.purview.azure.com/resource/).
+
+1. Select the **Data map** tab in the left menu.
+
+1. Select the **Sources** tab in the left menu.
+
+1. Select the source where you want to enable *Data use governance*.
+
+1. At the top of the source page, select **Edit source**.
+
+1. Set the *Data use governance* toggle to **Enabled**, as shown in the image below.
+
+ :::image type="content" source="./media/tutorial-data-owner-policies-storage/register-data-source-for-policy-storage.png" alt-text="Set Data use governance toggle to **Enabled** at the bottom of the menu.":::
+
+> [!WARNING]
+> **Known issues**
+> - Moving data sources to a different resource group or subscription is not yet supported. If you want to do that, de-register the data source in Azure Purview before moving it, and then register it again afterward.
++
+## Disable Data use governance
+
+>[!Note]
+>If your resource is currently a part of any active access policy, you will not be able to disable data use governance. First [un-publish the policy from the resource](how-to-data-owner-policy-authoring-generic.md#update-or-delete-a-policy), then disable data use governance.
+
+To disable data use governance for a source, resource group, or subscription, a user needs to be either a resource IAM **Owner** or an Azure Purview **Data source admin**. Once you have those permissions, follow these steps:
+
+1. Go to the [Azure Purview Studio](https://web.purview.azure.com/resource/).
+
+1. Select the **Data map** tab in the left menu.
+
+1. Select the **Sources** tab in the left menu.
+
+1. Select the source you want to disable data use governance for.
+
+1. At the top of the source page, select **Edit source**.
+
+1. Set the **Data use governance** toggle to **Disabled**.
+
+>[!NOTE]
+> Disabling **Data use governance** for a subscription source will disable it also for all assets registered in that subscription.
+
+> [!WARNING]
+> **Known issues**
+> - Once a subscription is disabled for *Data use governance*, any underlying assets that are enabled for *Data use governance* will be disabled, which is the expected behavior. However, policy statements based on those assets will still be allowed after that.
+
+## Data use governance best practices
+
+- We highly encourage registering data sources for *Data use governance* and managing all associated access policies in a single Azure Purview account.
+- Should you have multiple Azure Purview accounts, be aware that **all** data sources belonging to a subscription must be registered for *Data use governance* in a single Azure Purview account. That Azure Purview account can be in any subscription in the tenant. The *Data use governance* toggle will become greyed out when there are invalid configurations. Some examples of valid and invalid configurations follow in the diagram below:
+ - **Case 1** shows a valid configuration where a Storage account is registered in an Azure Purview account in the same subscription.
+ - **Case 2** shows a valid configuration where a Storage account is registered in an Azure Purview account in a different subscription.
+ - **Case 3** shows an invalid configuration arising because Storage accounts S3SA1 and S3SA2 both belong to Subscription 3, but are registered to different Azure Purview accounts. In that case, the *Data use governance* toggle will only work in the Azure Purview account that registers a data source in that subscription first. The toggle will then be greyed out for the other data source.
+
+ :::image type="content" source="./media/access-policies-common/valid-and-invalid-configurations.png" alt-text="Diagram shows valid and invalid configurations when using multiple Azure Purview accounts to manage policies.":::
+
+## Next steps
+
+- [Create data owner policies for your resources](how-to-data-owner-policy-authoring-generic.md)
+- [Enable Azure Purview data owner policies on all data sources in a subscription or a resource group](./tutorial-data-owner-policies-resource-group.md)
+- [Enable Azure Purview data owner policies on an Azure Storage account](./tutorial-data-owner-policies-storage.md)
purview Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/overview.md
Azure Purview automates data discovery by providing data scanning and classifica
|App |Description | |-|--| |[Data Map](#data-map) | Makes your data meaningful by graphing your data assets, and their relationships, across your data estate. The data map used to discover data and manage access to that data. |
-|[Data Catalog](#data-catalog) | Find trusted data sources by browsing and searching your data assets. The data catalog aligns your assets with friendly business terms and data classification to identify data sources. |
+|[Data Catalog](#data-catalog) | Finds trusted data sources by browsing and searching your data assets. The data catalog aligns your assets with friendly business terms and data classification to identify data sources. |
|[Data Insights](#data-insights) | Gives you an overview of your data estate to help you discover what kinds of data you have and where. | ## Data Map
purview Register Scan Adls Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-adls-gen2.md
To create an access policy for Azure Data Lake Storage Gen 2, follow the guideli
[!INCLUDE [Azure Storage specific pre-requisites](./includes/access-policies-prerequisites-storage.md)]
+### Enable data use governance
+
+Data use governance is an option on your Azure Purview sources that will allow you to manage access for that source from within Azure Purview.
+To enable data use governance, follow [the data use governance guide](how-to-enable-data-use-governance.md#enable-data-use-governance).
+ ### Create an access policy Now that you've prepared your storage account and environment for access policies, you can follow one of these configuration guides to create your policies: -- [Single storage account](./tutorial-data-owner-policies-storage.md) - This guide will allow you to enable access policies on a single storage account in your subscription.-- [All sources in a subscription or resource group](./tutorial-data-owner-policies-resource-group.md) - This guide will allow you to enable access policies on all enabled and available sources in a resource group, or across an Azure subscription.
+* [Single storage account](./tutorial-data-owner-policies-storage.md) - This guide will allow you to enable access policies on a single storage account in your subscription.
+* [All sources in a subscription or resource group](./tutorial-data-owner-policies-resource-group.md) - This guide will allow you to enable access policies on all enabled and available sources in a resource group, or across an Azure subscription.
+
+Or you can follow the [generic guide for creating data access policies](how-to-data-owner-policy-authoring-generic.md).
## Next steps
purview Register Scan Azure Blob Storage Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-blob-storage-source.md
This article outlines the process to register an Azure Blob Storage account in A
|||||||| | [Yes](#register) | [Yes](#scan)|[Yes](#scan) | [Yes](#scan)|[Yes](#scan)| [Yes](#access-policy) | Limited** |
-\** Lineage is supported if dataset is used as a source/sink in [Data Factory Copy activity](how-to-link-azure-data-factory.md)
+\** Lineage is supported if dataset is used as a source/sink in [Data Factory Copy activity](how-to-link-azure-data-factory.md)
For file types such as csv, tsv, psv, ssv, the schema is extracted when the following logics are in place:
Scans can be managed or run again on completion
:::image type="content" source="media/register-scan-azure-blob-storage-source/register-blob-manage-scan-options.png" alt-text="manage scan options":::
-1. You can _run an incremental scan_ or a _full scan_ again
+1. You can _run an incremental scan_ or a _full scan_ again.
:::image type="content" source="media/register-scan-azure-blob-storage-source/register-blob-full-inc-scan.png" alt-text="full or incremental scan":::
To create an access policy for an Azure Storage account, follow the guidelines b
[!INCLUDE [Azure Storage specific pre-requisites](./includes/access-policies-prerequisites-storage.md)]
+### Enable data use governance
+
+Data use governance is an option on your Azure Purview sources that will allow you to manage access for that source from within Azure Purview.
+To enable data use governance, follow [the data use governance guide](how-to-enable-data-use-governance.md#enable-data-use-governance).
+
### Create an access policy

Now that you've prepared your storage account and environment for access policies, you can follow one of these configuration guides to create your policies:

-- [Single storage account](./tutorial-data-owner-policies-storage.md) - This guide will allow you to enable access policies on a single storage account in your subscription.
-- [All sources in a subscription or resource group](./tutorial-data-owner-policies-resource-group.md) - This guide will allow you to enable access policies on all enabled and available sources in a resource group, or across an Azure subscription.
+* [Single storage account](./tutorial-data-owner-policies-storage.md) - This guide will allow you to enable access policies on a single storage account in your subscription.
+* [All sources in a subscription or resource group](./tutorial-data-owner-policies-resource-group.md) - This guide will allow you to enable access policies on all enabled and available sources in a resource group, or across an Azure subscription.
+
+Or you can follow the [generic guide for creating data access policies](how-to-data-owner-policy-authoring-generic.md).
## Next steps

Now that you have registered your source, follow the guides below to learn more about Azure Purview and your data.

-- [Data insights in Azure Purview](concept-insights.md)
-- [Lineage in Azure Purview](catalog-lineage-user-guide.md)
-- [Search Data Catalog](how-to-search-catalog.md)
+* [Data insights in Azure Purview](concept-insights.md)
+* [Lineage in Azure Purview](catalog-lineage-user-guide.md)
+* [Search Data Catalog](how-to-search-catalog.md)
purview Register Scan Azure Multiple Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-multiple-sources.md
This article outlines how to register multiple Azure sources and how to authenti
|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|
||||||||
-| [Yes](#register) | [Yes](#scan) | [Yes](#scan) | [Yes](#scan)| [Yes](#scan)| No| [Source Dependant](catalog-lineage-user-guide.md)|
+| [Yes](#register) | [Yes](#scan) | [Yes](#scan) | [Yes](#scan)| [Yes](#scan)| [Yes](tutorial-data-owner-policies-resource-group.md) | [Source Dependent](catalog-lineage-user-guide.md)|
## Prerequisites
This section describes how to register multiple Azure sources in Azure Purview u
### Prerequisites for registration
-You need to set up some authentication to be able to enumerate resources under a subscription or resource group.
+Azure Purview needs permissions to be able to list resources under a subscription or resource group.
-1. Go to the subscription or the resource group in the Azure portal.
-1. Select **Access Control (IAM)** from the left menu.
-1. Select **+Add**.
-1. In the **Select input** box, select the **Reader** role and enter your Azure Purview account name (which represents its MSI file name).
-1. Select **Save** to finish the role assignment.
### Authentication for registration
security Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/feature-availability.md
The following tables display the current Microsoft Sentinel feature availability
| - [Notebook integration with Azure Synapse](../../sentinel/notebooks-with-synapse.md) | Public Preview | Not Available|
| **Watchlists** | | |
|- [Watchlists](../../sentinel/watchlists.md) | GA | GA |
+|- [Large watchlists from Azure Storage](../../sentinel/watchlists.md) | Public Preview | Not Available |
+|- [Watchlist templates](../../sentinel/watchlists.md) | Public Preview | Not Available |
| **Hunting** | | |
| - [Hunting](../../sentinel/hunting.md) | GA | GA |
| **Content and content management** | | |
sentinel Dns Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/dns-normalization-schema.md
The following filtering parameters are available:
| **starttime** | datetime | Filter only DNS queries that ran at or after this time. |
| **endtime** | datetime | Filter only DNS queries that finished running at or before this time. |
| **srcipaddr** | string | Filter only DNS queries from this source IP address. |
-| **domain_has_any**| dynamic | Filter only DNS queries where the `domain` (or `query`) has any of the listed domain names, including as part of the event domain.
+| **domain_has_any**| dynamic | Filter only DNS queries where the `domain` (or `query`) has any of the listed domain names, including as part of the event domain. The length of the list is limited to 10,000 items.
| **responsecodename** | string | Filter only DNS queries for which the response code name matches the provided value. <br>For example: `NXDOMAIN` |
| **response_has_ipv4** | string | Filter only DNS queries in which the response field includes the provided IP address or IP address prefix. Use this parameter when you want to filter on a single IP address or prefix. <br><br>Results aren't returned for sources that don't provide a response.|
-| **response_has_any_prefix** | dynamic| Filter only DNS queries in which the response field includes any of the listed IP addresses or IP address prefixes. Prefixes should end with a `.`, for example: `10.0.`. <br><br>Use this parameter when you want to filter on a list of IP addresses or prefixes. <br><br>Results aren't returned for sources that don't provide a response. |
+| **response_has_any_prefix** | dynamic| Filter only DNS queries in which the response field includes any of the listed IP addresses or IP address prefixes. Prefixes should end with a `.`, for example: `10.0.`. <br><br>Use this parameter when you want to filter on a list of IP addresses or prefixes. <br><br>Results aren't returned for sources that don't provide a response. The length of the list is limited to 10,000 items. |
| **eventtype**| string | Filter only DNS queries of the specified type. If no value is specified, only lookup queries are returned. |
| | | |
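For orientation, these parameters are passed directly to the schema's unifying parser. The following is a minimal sketch, not taken from the article: the parameter names come from the table above, while the parser name `_Im_Dns` and the workspace context are assumptions based on the broader ASIM documentation.

```kusto
// Sketch: NXDOMAIN lookups for a short list of domains over the last day.
// _Im_Dns is assumed to be the deployed ASIM DNS unifying parser.
_Im_Dns(
    starttime=ago(1d),
    domain_has_any=dynamic(['contoso.com', 'fabrikam.com']),
    responsecodename='NXDOMAIN')
| summarize QueryCount=count() by SrcIpAddr
```

Filtering in the parser call, rather than with a later `where` clause, lets the parser prune events early and typically improves query performance.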
The fields listed in this section are specific to DNS events, although many are
| <a name=responsename></a>**DnsResponseName** | Optional | String | The content of the response, as included in the record.<br> <br> The DNS response data is inconsistent across reporting devices, is complex to parse, and has less value for source-agnostic analytics. Therefore the information model doesn't require parsing and normalization, and Microsoft Sentinel uses an auxiliary function to provide response information. For more information, see [Handling DNS response](#handling-dns-response).|
| <a name=responsecodename></a>**DnsResponseCodeName** | Mandatory | Enumerated | The [DNS response code](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml). <br><br>**Note**: IANA doesn't define the case for the values, so analytics must normalize the case. If the source provides only a numerical response code and not a response code name, the parser must include a lookup table to enrich with this value. <br><br> If this record represents a request and not a response, set to **NA**. <br><br>Example: `NXDOMAIN` |
| **DnsResponseCode** | Optional | Integer | The [DNS numerical response code](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml). <br><br>Example: `3`|
-| **TransactionIdHex** | Recommended | String | The DNS unique hex transaction ID. |
+| <a name="transactionidhex"></a>**TransactionIdHex** | Recommended | String | The DNS query unique ID as assigned by the DNS client, in hexadecimal format. Note that this value is part of the DNS protocol and different from [DnsSessionId](#dnssessionid), the network layer session ID, typically assigned by the reporting device. |
| **NetworkProtocol** | Optional | Enumerated | The transport protocol used by the network resolution event. The value can be **UDP** or **TCP**, and is most commonly set to **UDP** for DNS. <br><br>Example: `UDP`|
| **DnsQueryClass** | Optional | Integer | The [DNS class ID](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml).<br> <br>In practice, only the **IN** class (ID 1) is used, and therefore this field is less valuable.|
| **DnsQueryClassName** | Optional | String | The [DNS class name](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml).<br> <br>In practice, only the **IN** class (ID 1) is used, and therefore this field is less valuable.<br><br>Example: `IN`|
The fields listed in this section are specific to DNS events, although many are
| **DnsFlagsRecursionDesired** | Optional | Boolean | The DNS `RD` flag indicates in a request that the client would like the server to use recursive queries. |
| **DnsFlagsTruncates** | Optional | Boolean | The DNS `TC` flag indicates that a response was truncated because it exceeded the maximum response size. |
| **DnsFlagsZ** | Optional | Boolean | The DNS `Z` flag is a deprecated DNS flag, which might be reported by older DNS systems. |
-|<a name="dnssessionid"></a>**DnsSessionId** | Optional | string | The DNS session identifier as reported by the reporting device. <br><br>Example: `EB4BFA28-2EAD-4EF7-BC8A-51DF4FDF5B55` |
+|<a name="dnssessionid"></a>**DnsSessionId** | Optional | string | The DNS session identifier as reported by the reporting device. Note that this value is different from [TransactionIdHex](#transactionidhex), the DNS query unique ID as assigned by the DNS client.<br><br>Example: `EB4BFA28-2EAD-4EF7-BC8A-51DF4FDF5B55` |
| **SessionId** | Alias | String | Alias to [DnsSessionId](#dnssessionid) | | | | | |
sentinel Hunting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/hunting.md
For more information, see [Use bookmarks in hunting](bookmarks.md).
When your hunting and investigations become more complex, use Microsoft Sentinel notebooks to enhance your activity with machine learning, visualizations, and data analysis.
-Notebooks provide a kind of virtual sandbox, complete with it own kernel, where you can carry out a complete investigation. Your notebook can include the raw data, the code you run on that data, the results, and their visualizations. Save your notebooks so that you can share it with others to reuse in your organization.
+Notebooks provide a kind of virtual sandbox, complete with its own kernel, where you can carry out a complete investigation. Your notebook can include the raw data, the code you run on that data, the results, and their visualizations. Save your notebooks so that you can share them with others for reuse in your organization.
Notebooks may be helpful when your hunting or investigation becomes too large to remember easily, view details, or when you need to save queries and results. To help you create and share notebooks, Microsoft Sentinel provides [Jupyter Notebooks](https://jupyter.org), an open-source, interactive development and data manipulation environment, integrated directly in the Microsoft Sentinel **Notebooks** page.
sentinel Network Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/network-normalization-schema.md
The following filtering parameters are available:
|-|--|-|
| **starttime** | datetime | Filter only network sessions that *started* at or after this time. |
| **endtime** | datetime | Filter only network sessions that *started* running at or before this time. |
-| **srcipaddr_has_any_prefix** | dynamic | Filter only network sessions for which the [source IP address field](#srcipaddr) prefix is in one of the listed values. Prefixes should end with a `.`, for example: `10.0.`. |
-| **dstipaddr_has_any_prefix** | dynamic | Filter only network sessions for which the [destination IP address field](#dstipaddr) prefix is in one of the listed values. Prefixes should end with a `.`, for example: `10.0.`. |
+| **srcipaddr_has_any_prefix** | dynamic | Filter only network sessions for which the [source IP address field](#srcipaddr) prefix is in one of the listed values. Prefixes should end with a `.`, for example: `10.0.`. The length of the list is limited to 10,000 items.|
+| **dstipaddr_has_any_prefix** | dynamic | Filter only network sessions for which the [destination IP address field](#dstipaddr) prefix is in one of the listed values. Prefixes should end with a `.`, for example: `10.0.`. The length of the list is limited to 10,000 items.|
| **dstportnum** | Int | Filter only network sessions with the specified destination port number. |
| **hostname_has_any** | dynamic | Filter only network sessions for which the [destination hostname field](#dsthostname) has any of the values listed. |
| **dvcaction** | dynamic | Filter only network sessions for which the [Device Action field](#dvcaction) is any of the values listed. |
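As with the DNS schema, these parameters can be combined in a single parser call. A sketch under the same caveats: the parameter names are from the table above, but the parser name `_Im_NetworkSession` is an assumption not shown in this excerpt.

```kusto
// Sketch: denied sessions from private source prefixes to port 443.
// _Im_NetworkSession is assumed to be the deployed ASIM network parser.
_Im_NetworkSession(
    starttime=ago(1h),
    srcipaddr_has_any_prefix=dynamic(['10.0.', '192.168.']),
    dstportnum=443,
    dvcaction=dynamic(['Deny']))
| summarize SessionCount=count() by DstIpAddr
```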
sentinel Watchlist Schemas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/watchlist-schemas.md
Last updated 11/09/2021
-# Microsoft Sentinel built-in watchlist template schemas (Public preview)
+# Microsoft Sentinel built-in watchlist template schemas (preview)
-This article details the schemas used in each built-in watchlist template provided by Microsoft Sentinel. For more information, see [Create a new watchlist by using a template (Public preview)](watchlists-create.md#create-a-watchlist-by-using-a-template-public-preview).
+This article details the schemas used in each built-in watchlist template provided by Microsoft Sentinel. For more information, see [Create watchlists in Microsoft Sentinel](watchlists-create.md).
The Microsoft Sentinel watchlist templates are currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
sentinel Watchlists Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/watchlists-create.md
description: Create watchlist in Microsoft Sentinel for allowlists or blocklist
Previously updated : 1/04/2022
Last updated : 02/22/2022

# Create watchlists in Microsoft Sentinel

Watchlists in Microsoft Sentinel allow you to correlate data from a data source you provide with the events in your Microsoft Sentinel environment. For example, you might create a watchlist with a list of high value assets, terminated employees, or service accounts in your environment.
-Create a watchlist from a local file or by using a template.
+Upload a watchlist file from a local folder or from your Azure Storage account. To create a watchlist file, you have the option to download one of the watchlist templates from Microsoft Sentinel to populate with your data. Then upload that file when you create the watchlist in Microsoft Sentinel.
-File uploads are currently limited to files of up to 3.8 MB in size. Before you create a watchlist, review the [limitations of watchlists](watchlists.md).
+Local file uploads are currently limited to files of up to 3.8 MB in size. If you have a large watchlist file that's up to 500 MB in size, upload the file to your Azure Storage account. Before you create a watchlist, review the [limitations of watchlists](watchlists.md).
-## Create a watchlist from a local file
+> [!IMPORTANT]
+> The features for watchlist templates and the ability to create a watchlist from a file in Azure Storage are currently in **PREVIEW**. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
-You can upload a CSV file from your local machine to create a watchlist.
+## Upload a watchlist from a local folder
+
+You have two ways to upload a CSV file from your local machine to create a watchlist.
+
+- For a watchlist file you created without a watchlist template: Select **Add new** and enter the required information.
+- For a watchlist file created from a template downloaded from Microsoft Sentinel: Go to the watchlist **Templates (Preview)** tab. Select the option **Create from template**. Azure pre-populates the name, description, and watchlist alias for you.
+
+### Upload watchlist from a file you created
+
+If you didn't use a watchlist template to create your file,
1. In the Azure portal, go to **Microsoft Sentinel** and select the appropriate workspace.
1. Under **Configuration**, select **Watchlist**.
1. Select **+ Add new**.
- :::image type="content" source="./media/watchlists/sentinel-watchlist-new.png" alt-text="new watchlist" lightbox="./media/watchlists/sentinel-watchlist-new.png":::
+ :::image type="content" source="./media/watchlists-create/sentinel-watchlist-new.png" alt-text="Screenshot of add watchlist option on watchlist page." lightbox="./media/watchlists-create/sentinel-watchlist-new.png":::
1. On the **General** page, provide the name, description, and alias for the watchlist.
- :::image type="content" source="./media/watchlists/sentinel-watchlist-general.png" alt-text="watchlist general page":::
+ :::image type="content" source="./media/watchlists-create/sentinel-watchlist-general-country.png" alt-text="Screenshot of watchlist general tab in the watchlists wizard.":::
1. Select **Next: Source**.
1. Use the information in the following table to upload your watchlist data.
You can upload a CSV file from your local machine to create a watchlist.
1. Select **Next: Review and Create**.
- :::image type="content" source="./media/watchlists/sentinel-watchlist-source.png" alt-text="watchlist source page" lightbox="./media/watchlists/sentinel-watchlist-source.png":::
+ :::image type="content" source="./media/watchlists-create/sentinel-watchlist-source.png" alt-text="Screenshot of the watchlist source tab." lightbox="./media/watchlists-create/sentinel-watchlist-source.png":::
1. Review the information, verify that it's correct, wait for the **Validation passed** message, and then select **Create**.
- :::image type="content" source="./media/watchlists/sentinel-watchlist-review.png" alt-text="watchlist review page":::
+ :::image type="content" source="./media/watchlists-create/sentinel-watchlist-review.png" alt-text="Screenshot of the watchlist review page.":::
A notification appears once the watchlist is created.
- :::image type="content" source="./media/watchlists/sentinel-watchlist-complete.png" alt-text="watchlist successful creation notification" lightbox="./media/watchlists/sentinel-watchlist-complete.png":::
+It might take several minutes for the watchlist to be created and the new data to be available in queries.
+
+### Upload watchlist created from a template (Preview)
+
+To create the watchlist from a template you populated,
+
+1. From the appropriate workspace in Microsoft Sentinel, select **Watchlist**.
+1. Select the tab **Templates (Preview)**.
+1. Select the appropriate template from the list to view details of the template in the right pane.
+1. Select **Create from template**.
+
+ :::image type="content" source="./media/watchlists-create/create-watchlist-from-template.png" alt-text="Screenshot of the option to create a watchlist from a built-in template." lightbox="./media/watchlists-create/create-watchlist-from-template.png":::
+
+1. On the **General** tab, notice that the **Name**, **Description**, and **Watchlist Alias** fields are all read-only.
+1. On the **Source** tab, select **Browse for files** and select the file you created from the template.
+1. Select **Next: Review and Create** > **Create**.
+1. Watch for an Azure notification to appear when the watchlist is created.
+
+It might take several minutes for the watchlist to be created and the new data to be available in queries.
+
+## Create a large watchlist from file in Azure Storage (preview)
+
+If you have a large watchlist up to 500 MB in size, upload your watchlist file to your Azure Storage account. Then create a shared access signature URL for Microsoft Sentinel to retrieve the watchlist data. A shared access signature URL is a URI that contains both the resource URI and shared access signature token of a resource like a csv file in your storage account. Finally, add the watchlist to your workspace in Microsoft Sentinel.
+
+For more information about shared access signatures, see [Azure Storage shared access signature token](../storage/common/storage-sas-overview.md#sas-token).
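To make the two parts concrete, the following shell sketch splits a made-up SAS URL into its resource URI and SAS token. The account, container, blob, and token values are all hypothetical, shown only to illustrate the URL's anatomy.

```shell
# A hypothetical SAS URL: the blob's resource URI plus the SAS token as a query string.
SAS_URL='https://mystorage.blob.core.windows.net/watchlists/servers.csv?sv=2020-08-04&sr=b&sp=r&se=2022-03-04T00%3A00%3A00Z&sig=abc123'

# Everything before the first '?' is the resource URI of the csv blob.
RESOURCE_URI="${SAS_URL%%\?*}"
# Everything after it is the SAS token used to authorize the read.
SAS_TOKEN="${SAS_URL#*\?}"

echo "$RESOURCE_URI"
echo "$SAS_TOKEN"
```

Note that the token carries the permissions (`sp=r`, read-only) and the expiry (`se=...`), which is why the expiry time matters in the next step.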
+
+### Step 1: Upload a watchlist file to Azure Storage
+
+To upload a large watchlist file to your Azure Storage account, use AzCopy or the Azure portal.
+
+1. If you don't already have an Azure Storage account, [create a storage account](../storage/common/storage-account-create.md). The storage account can be in a different resource group or region from your workspace in Microsoft Sentinel.
+1. Use either AzCopy or the Azure portal to upload your csv file with your watchlist data into the storage account.
+
+#### Upload your file with AzCopy
+
+Upload files and directories to Blob storage by using the AzCopy v10 command-line utility. To learn more, see [Upload files to Azure Blob storage by using AzCopy](../storage/common/storage-use-azcopy-blobs-upload.md).
+
+1. If you don't already have a storage container, create one by running the following command.
+
+ ```azcopy
+    azcopy make 'https://<storage-account-name>.<blob or dfs>.core.windows.net/<container-name>'
+ ```
+
+2. Next, run the following command to upload the file.
+
+ ```azcopy
+ azcopy copy '<local-file-path>' 'https://<storage-account-name>.<blob or dfs>.core.windows.net/<container-name>/<blob-name>'
+ ```
+
+#### Upload your file in Azure portal
+
+If you don't use AzCopy, upload your file by using the Azure portal. Go to your storage account in Azure portal to upload the csv file with your watchlist data.
+
+1. If you don't already have an existing storage container, [create a container](../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container). For the level of public access to the container, we recommend the default, **Private (no anonymous access)**.
+1. Upload your csv file to the storage account by [uploading a block blob](../storage/blobs/storage-quickstart-blobs-portal.md#upload-a-block-blob).
+
+### Step 2: Create shared access signature URL
+
+Create a shared access signature URL for Microsoft Sentinel to retrieve the watchlist data.
+
+1. Follow the steps in [Create SAS tokens for blobs in the Azure portal](../cognitive-services/translator/document-translation/create-sas-tokens.md?tabs=blobs#create-sas-tokens-for-blobs-in-the-azure-portal).
+1. Set the shared access signature token expiry time to be at minimum 6 hours.
+1. Copy the value for **Blob SAS URL**.
+
+### Step 3: Add the watchlist to a workspace
+
+1. In the Azure portal, go to **Microsoft Sentinel** and select the appropriate workspace.
+1. Under **Configuration**, select **Watchlist**.
+1. Select **+ Add new**.
+
+ :::image type="content" source="./media/watchlists-create/sentinel-watchlist-new.png" alt-text="Screenshot of the add watchlist on the watchlist page." lightbox="./media/watchlists-create/sentinel-watchlist-new.png":::
+
+1. On the **General** page, provide the name, description, and alias for the watchlist.
+
+ :::image type="content" source="./media/watchlists-create/sentinel-watchlist-general.png" alt-text="Screenshot of the watchlist general tab with name, description, and watchlist alias fields.":::
+
+1. Select **Next: Source**.
+1. Use the information in the following table to upload your watchlist data.
+
+ |Field |Description |
+ |||
+ |Source type | Azure Storage (preview) |
+ |Select a type for the dataset | CSV file with a header (.csv) |
+ |Number of lines before row with headings | Enter the number of lines before the header row that's in your data file. |
+ |Blob SAS URL (Preview) | Paste in the shared access URL you created. |
+ |SearchKey | Enter the name of a column in your watchlist that you expect to use as a join with other data or a frequent object of searches. For example, if your server watchlist contains country names and their respective two-letter country codes, and you expect to use the country codes often for search or joins, use the **Code** column as the SearchKey. |
+
    After you enter all the information, your page will look similar to the following image.
+
+ :::image type="content" source="./media/watchlists-create/watchlist-source-azure-storage.png" alt-text="Screenshot of the watchlist source page with sample values entered." lightbox="./media/watchlists-create/watchlist-source-azure-storage.png":::
+
+1. Select **Next: Review and Create**.
+1. Review the information, verify that it's correct, wait for the **Validation passed** message.
+1. Select **Create**.
+
+It might take a while for a large watchlist to be created and the new data to be available in queries.
-## Create a watchlist by using a template (public preview)
+## View watchlist status
+
+View the status by selecting the watchlist in your workspace.
+
+1. In the Azure portal, go to **Microsoft Sentinel** and select the appropriate workspace.
+1. Under **Configuration**, select **Watchlist**.
+1. On the **My Watchlists** tab, select the watchlist.
+1. On the details page, review the **Status (Preview)**.
+
+ :::image type="content" source="./media/watchlists-create/view-status-uploading.png" alt-text="Screenshot that shows the upload status on the watchlist." lightbox="./media/watchlists-create/view-status-uploading.png":::
+
+1. When the status is **Succeeded**, select **View in Log Analytics** to use the watchlist in a query. It might take several minutes for the watchlist to show in Log Analytics.
+
    :::image type="content" source="media/watchlists-create/large-watchlist-status-view-in-log.png" alt-text="Screenshot of the View in Log Analytics option on the watchlist details page.":::
+
+## Download watchlist template (preview)
Download one of the watchlist templates from Microsoft Sentinel to populate with your data. Then upload that file when you create the watchlist in Microsoft Sentinel. Each built-in watchlist template has its own set of data listed in the CSV file attached to the template. For more information, see [Built-in watchlist schemas](watchlist-schemas.md).
-The ability to create a watchlist by using a template is currently in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
To download one of the watchlist templates,

1. In the Azure portal, go to **Microsoft Sentinel** and select the appropriate workspace.
To download one of the watchlist templates,
1. Select the ellipses **...** at the end of the row.
1. Select **Download Schema**.
- :::image type="content" source="./media/watchlists/create-watchlist-download-schema.png" alt-text="Screenshot of templates tab with download schema selected.":::
+ :::image type="content" source="./media/watchlists-create/create-watchlist-download-schema.png" alt-text="Screenshot of templates tab with download schema selected.":::
1. Populate your local version of the file and save it locally as a CSV file.
-
-To create the watchlist from the template you populated,
-
-1. From appropriate workspace in Microsoft Sentinel, select **Watchlist**.
-1. Select the tab **Templates (Preview)**.
-1. Select the appropriate template from the list to view details of the template in the right pane.
-1. Select **Create from template**,
-
- :::image type="content" source="./media/watchlists/create-watchlist-from-template.png" alt-text="Create a watchlist from a built-in template." lightbox="./media/watchlists/create-watchlist-from-template.png":::
-
-1. On the **General** tab, notice that the **Name**, **Description**, and **Watchlist Alias** fields are all read-only.
-1. On the **Source** tab, select **Browse for files** and select the file you created from the template.
-1. Select **Next: Review and Create** > **Create**.
+1. Follow the steps to [upload watchlist created from a template (Preview)](#upload-watchlist-created-from-a-template-preview).
## Deleted and recreated watchlists in Log Analytics view
If you delete and recreate a watchlist, you might see both the deleted and recre
## Next steps

To learn more about Microsoft Sentinel, see the following articles:

-- Learn how to [get visibility into your data and potential threats](get-visibility.md).
-- Get started [detecting threats with Microsoft Sentinel](./detect-threats-built-in.md).
+- Learn how to [get visibility into your data and potential threats](get-visibility.md)
+- Get started [detecting threats with Microsoft Sentinel](./detect-threats-built-in.md)
- [Use workbooks](monitor-your-data.md) to monitor your data.
- [Manage watchlists](watchlists-manage.md)
- [Build queries and detection rules with watchlists](watchlists-queries.md)
sentinel Watchlists Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/watchlists-queries.md
To use a watchlist in search query, write a Kusto query that uses the _GetWatchl
1. Select the watchlist you want to use.
1. Select **View in Log Analytics**.
- :::image type="content" source="./media/watchlists/sentinel-watchlist-queries-list.png" alt-text="use watchlists in queries" lightbox="./media/watchlists/sentinel-watchlist-queries-list.png" :::
+ :::image type="content" source="./media/watchlists-queries/sentinel-watchlist-queries-list.png" alt-text="Screenshot that shows how to use watchlists in queries." lightbox="./media/watchlists-queries/sentinel-watchlist-queries-list.png" :::
1. Review the **Results** tab. The items in your watchlist are automatically extracted for your query. The example below shows the results of the extraction of the **Name** and **IP Address** fields. The **SearchKey** is shown as its own column.
- :::image type="content" source="./media/watchlists/sentinel-watchlist-queries-fields.png" alt-text="queries with watchlist fields" lightbox="./media/watchlists/sentinel-watchlist-queries-fields.png":::
+ :::image type="content" source="./media/watchlists-queries/sentinel-watchlist-queries-fields.png" alt-text="Screenshot that shows queries with watchlist fields." lightbox="./media/watchlists-queries/sentinel-watchlist-queries-fields.png":::
The timestamp on your queries will be ignored in both the query UI and in scheduled alerts.
To use a watchlist in search query, write a Kusto query that uses the _GetWatchl
The following image shows the results of this example query in Log Analytics.
- :::image type="content" source="./media/watchlists/sentinel-watchlist-queries-join.png" alt-text="Screenshot of queries against watchlist as lookup" lightbox="./media/watchlists/sentinel-watchlist-queries-join.png":::
+ :::image type="content" source="./media/watchlists-queries/sentinel-watchlist-queries-join.png" alt-text="Screenshot of queries against watchlist as lookup." lightbox="./media/watchlists-queries/sentinel-watchlist-queries-join.png":::
## Create an analytics rule with a watchlist
To use watchlists in analytics rules, create a rule using the _GetWatchlist('wat
|172.20.32.117,Work |

The CSV file looks something like the following image.
- :::image type="content" source="./media/watchlists/create-watchlist.png" alt-text="Screenshot of four items in a CSV file that's used for the watchlist.":::
+ :::image type="content" source="./media/watchlists-queries/create-watchlist.png" alt-text="Screenshot of four items in a CSV file that's used for the watchlist.":::
To use the `_GetWatchlist` function for this example, your query would be `_GetWatchlist('ipwatchlist')`.
- :::image type="content" source="./media/watchlists/sentinel-watchlist-new-other.png" alt-text="Screenshot that shows the query returns the four items from the watchlist.":::
+ :::image type="content" source="./media/watchlists-queries/sentinel-watchlist-new-other.png" alt-text="Screenshot that shows the query returns the four items from the watchlist.":::
In this example, we only include events from IP addresses in the watchlist:
To use watchlists in analytics rules, create a rule using the _GetWatchlist('wat
The following image shows this last query used in the rule query.
- :::image type="content" source="./media/watchlists/sentinel-watchlist-analytics-rule.png" alt-text="use watchlists in analytics rules":::
+ :::image type="content" source="./media/watchlists-queries/sentinel-watchlist-analytics-rule.png" alt-text="Screenshot that shows how to use watchlists in analytics rules.":::
1. Complete the rest of the tabs in the **Analytics rule wizard**.
You might need to see a list of watchlist aliases to identify a watchlist to use
1. In the Azure portal, go to **Microsoft Sentinel** and select the appropriate workspace.
1. Under **General**, select **Logs**.
-1. If you see a list of queries, closes the **Queries** window.
+1. If you see a list of queries, close the **Queries** window.
1. On the **New Query** page, run the following query: `_GetWatchlistAlias`.
1. Review the list of aliases in the **Results** tab.
- :::image type="content" source="./media/watchlists/sentinel-watchlist-alias.png" alt-text="list watchlists" lightbox="./media/watchlists/sentinel-watchlist-alias.png":::
+ :::image type="content" source="./media/watchlists-queries/sentinel-watchlist-alias.png" alt-text="Screenshot that shows a list of watchlists." lightbox="./media/watchlists-queries/sentinel-watchlist-alias.png":::
## Next steps
sentinel Watchlists https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/watchlists.md
Previously updated : 1/04/2022 Last updated : 02/07/2022

# Use watchlists in Microsoft Sentinel

Watchlists in Microsoft Sentinel allow you to correlate data from a data source you provide with the events in your Microsoft Sentinel environment. For example, you might create a watchlist with a list of high-value assets, terminated employees, or service accounts in your environment.
-Use watchlists in your search, detection rules, threat hunting, and response playbooks.
+Use watchlists in your search, detection rules, threat hunting, and response playbooks.
Watchlists are stored in your Microsoft Sentinel workspace as name-value pairs and are cached for optimal query performance and low latency.
+> [!IMPORTANT]
+> The features for watchlist templates and the ability to create a watchlist from a file in Azure Storage are currently in **PREVIEW**. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+ ## When to use watchlists
+
+Use watchlists to help you with the following scenarios:
Before you create a watchlist, be aware of the following limitations:
- The use of watchlists should be limited to reference data, as they aren't designed for large data volumes.
- The **total number of active watchlist items** across all watchlists in a single workspace is currently limited to **10 million**. Deleted watchlist items don't count against this total. If you require the ability to reference large data volumes, consider ingesting them using [custom logs](../azure-monitor/agents/data-sources-custom-logs.md) instead.
- Watchlists can only be referenced from within the same workspace. Cross-workspace and/or Lighthouse scenarios are currently not supported.
-- File uploads are currently limited to files of up to 3.8 MB in size.
+- Local file uploads are currently limited to files of up to 3.8 MB in size.
+- File uploads from an Azure Storage account (in preview) are currently limited to files up to 500 MB in size.
## Options to create watchlists
-You can create a watchlist from a local file you created or by using a template (in public preview).
+Create a watchlist in Microsoft Sentinel from a file you upload from a local folder or from a file in your Azure Storage account.
+
+You have the option to download one of the watchlist templates from Microsoft Sentinel to populate with your data. Then upload that file when you create the watchlist in Microsoft Sentinel.
-To create a watchlist from a template, download the watchlist templates from Microsoft Sentinel and populate it with your data. Then upload that file when you create the watchlist in Microsoft Sentinel.
+To create a watchlist from a large file that's up to 500 MB in size, upload the file to your Azure Storage account. Then create a shared access signature URL for Microsoft Sentinel to retrieve the watchlist data. A shared access signature URL is a URI that contains both the resource URI and the shared access signature token of a resource, such as a CSV file in your storage account. Finally, add the watchlist to your workspace in Microsoft Sentinel.
For more information, see the following articles:

- [Create watchlists in Microsoft Sentinel](watchlists-create.md)
- [Built-in watchlist schemas](watchlist-schemas.md)
+- [Azure Storage SAS token](../storage/common/storage-sas-overview.md#sas-token)
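As a rough illustration (not an official sample), the Python sketch below shows how a shared access signature URL like the one described above is composed from a resource URI plus a SAS token as the query string; the storage account, container, blob name, and token values are all hypothetical:

```python
from urllib.parse import urlsplit, parse_qs

# Hypothetical resource URI for a CSV watchlist file in an Azure Storage account.
resource_uri = "https://examplestorage.blob.core.windows.net/watchlists/ipwatchlist.csv"

# Hypothetical SAS token (normally generated in the Azure portal or with the Azure SDK).
sas_token = "sv=2021-06-08&se=2022-04-01T00%3A00%3A00Z&sr=b&sp=r&sig=REDACTED"

# A shared access signature URL is the resource URI plus the SAS token as its query string.
sas_url = f"{resource_uri}?{sas_token}"

# Inspect the pieces: the path identifies the blob; 'sp=r' grants read-only access,
# and 'se' sets the expiry time of the signature.
parts = urlsplit(sas_url)
permissions = parse_qs(parts.query)["sp"]
print(parts.path)    # /watchlists/ipwatchlist.csv
print(permissions)   # ['r']
```

A read-only, time-limited token like this is what Microsoft Sentinel needs to retrieve the file; it does not grant any other access to the storage account.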
## Watchlists in queries for searches and detection rules
sentinel Web Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/web-normalization-schema.md
The following filtering parameters are available:
|-|--|-|
| **starttime** | datetime | Filter only Web sessions that **started** at or after this time. |
| **endtime** | datetime | Filter only Web sessions that **started** running at or before this time. |
-| **srcipaddr_has_any_prefix** | dynamic | Filter only Web sessions for which the [source IP address field](network-normalization-schema.md#srcipaddr) prefix is in one of the listed values. Note that the list of values can include IP addresses as well as IP address prefixes. Prefixes should end with a `.`, for example: `10.0.`. |
-| **url_has_any** | dynamic | Filter only Web sessions for which the [URL field](#url) has any of the values listed. If specified, and the session is not a web session, no result will be returned.|
-| **httpuseragent_has_any** | dynamic | Filter only web sessions for which the [user agent field](#httpuseragent) has any of the values listed. If specified, and the session is not a web session, no result will be returned. |
+| **srcipaddr_has_any_prefix** | dynamic | Filter only Web sessions for which the [source IP address field](network-normalization-schema.md#srcipaddr) prefix is in one of the listed values. Note that the list of values can include IP addresses as well as IP address prefixes. Prefixes should end with a `.`, for example: `10.0.`. The length of the list is limited to 10,000 items.|
+| **url_has_any** | dynamic | Filter only Web sessions for which the [URL field](#url) has any of the values listed. If specified, and the session is not a web session, no result will be returned. The length of the list is limited to 10,000 items.|
+| **httpuseragent_has_any** | dynamic | Filter only web sessions for which the [user agent field](#httpuseragent) has any of the values listed. If specified, and the session is not a web session, no result will be returned. The length of the list is limited to 10,000 items. |
| **eventresultdetails_in** | dynamic | Filter only web sessions for which the HTTP status code, stored in the [EventResultDetails](#eventresultdetails) field, is any of the values listed. |
| **eventresult** | string | Filter only network sessions with a specific **EventResult** value. |
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
description: This article describes new features in Microsoft Sentinel from the
Previously updated : 01/31/2022 Last updated : 03/01/2022
If you're looking for items older than six months, you'll find them in the [Arch
> > You can also contribute! Join us in the [Microsoft Sentinel Threat Hunters GitHub community](https://github.com/Azure/Azure-Sentinel/wiki).
+## March 2022
+
+- [Create a large watchlist from file in Azure Storage (public preview)](#create-a-large-watchlist-from-file-in-azure-storage-public-preview)
+
+### Create a large watchlist from file in Azure Storage (public preview)
+
+Create a watchlist from a large file that's up to 500 MB in size by uploading the file to your Azure Storage account. When you add the watchlist to your workspace, you provide a shared access signature URL. Microsoft Sentinel uses the shared access signature URL to retrieve the watchlist data from Azure Storage.
+
+For more information, see:
+
+- [Use watchlists in Microsoft Sentinel](watchlists.md)
+- [Create watchlists in Microsoft Sentinel](watchlists-create.md)
+ ## February 2022 - [View MITRE support coverage (Public preview)](#view-mitre-support-coverage-public-preview) - [View Azure Purview data in Microsoft Sentinel](#view-azure-purview-data-in-microsoft-sentinel-public-preview) - [Manually run playbooks based on the incident trigger (Public preview)](#manually-run-playbooks-based-on-the-incident-trigger-public-preview)
+- [Search across long time spans in large datasets (public preview)](#search-across-long-time-spans-in-large-datasets-public-preview)
+- [Restore archived logs from search (public preview)](#restore-archived-logs-from-search-public-preview)
### View MITRE support coverage (Public preview)
Watchlist templates currently include:
- **High Value Assets**. A list of devices, resources, or other assets that have critical value in the organization.
- **Network Mapping**. A list of IP subnets and their respective organizational contexts.
-For more information, see [Create a new watchlist using a template](watchlists-create.md#create-a-watchlist-by-using-a-template-public-preview) and [Built-in watchlist schemas](watchlist-schemas.md).
+For more information, see [Create watchlists in Microsoft Sentinel](watchlists-create.md) and [Built-in watchlist schemas](watchlist-schemas.md).
service-bus-messaging Service Bus Dotnet Get Started With Queues Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-dotnet-get-started-with-queues-legacy.md
- Title: Use Azure Service Bus queues with .NET (old version)
-description: In this article, you create .NET Core console applications to send messages to and receive messages from a Service Bus queue.
- Previously updated : 07/27/2021---
-# Send and receive messages from Azure Service Bus queues using .NET (old package)
-In this article, you create .NET Core console applications to send messages to and receive messages from a Service Bus queue.
-
-> [!WARNING]
-> This article uses the old Microsoft.Azure.ServiceBus package. For an article that uses the latest Azure.Messaging.ServiceBus package, see [Send and receive events using Azure.Messaging.ServiceBus package](service-bus-dotnet-get-started-with-queues.md).
-
-## Prerequisites
--- [Visual Studio 2019](https://www.visualstudio.com/vs).-- [NET Core SDK](https://dotnet.microsoft.com/download), version 2.0 or later.-- An Azure subscription. To complete steps in this this article, you need an Azure account. You can activate your [MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/credit-for-visual-studio-subscribers/?WT.mc_id=A85619ABF) or sign up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A85619ABF).-- If you don't have a queue to work with, follow steps in the [Use Azure portal to create a Service Bus queue](service-bus-quickstart-portal.md) article to create a queue.-
- - Read the quick overview of Service Bus queues.
- - Create a Service Bus namespace.
- - Get the connection string.
- - Create a Service Bus queue.
-
-## Send messages
-
-To send messages to the queue, write a C# console application using Visual Studio.
-
-### Create a console application
-
-Launch Visual Studio and create a new **Console App (.NET Core)** project for C#. This example names the app *CoreSenderApp*.
-
-### Add the Service Bus NuGet package
-
-1. Right-click the newly created project and select **Manage NuGet Packages**.
-1. Select **Browse**. Search for and select **[Microsoft.Azure.ServiceBus](https://www.nuget.org/packages/Microsoft.Azure.ServiceBus/)**.
-1. Select **Install** to complete the installation, then close the NuGet Package Manager.
-
- ![Select a NuGet package][nuget-pkg]
-
-### Write code to send messages to the queue
-
-1. In *Program.cs*, add the following `using` statements at the top of the namespace definition, before the class declaration:
-
- ```csharp
- using System.Text;
- using System.Threading;
- using System.Threading.Tasks;
- using Microsoft.Azure.ServiceBus;
- ```
-
-1. In the `Program` class, declare the following variables:
-
- ```csharp
- const string ServiceBusConnectionString = "<your_connection_string>";
- const string QueueName = "<your_queue_name>";
- static IQueueClient queueClient;
- ```
-
- Enter your connection string for the namespace as the `ServiceBusConnectionString` variable. Enter your queue name.
-
-1. Replace the `Main()` method with the following **async** `Main` method. It calls the `SendMessagesAsync()` method that you will add in the next step to send messages to the queue.
-
- ```csharp
- public static async Task Main(string[] args)
- {
- const int numberOfMessages = 10;
- queueClient = new QueueClient(ServiceBusConnectionString, QueueName);
-
- Console.WriteLine("======================================================");
- Console.WriteLine("Press ENTER key to exit after sending all the messages.");
- Console.WriteLine("======================================================");
-
- // Send messages.
- await SendMessagesAsync(numberOfMessages);
-
- Console.ReadKey();
-
- await queueClient.CloseAsync();
- }
- ```
-1. Directly after the `MainAsync()` method, add the following `SendMessagesAsync()` method that does the work of sending the number of messages specified by `numberOfMessagesToSend` (currently set to 10):
-
- ```csharp
- static async Task SendMessagesAsync(int numberOfMessagesToSend)
- {
- try
- {
- for (var i = 0; i < numberOfMessagesToSend; i++)
- {
- // Create a new message to send to the queue.
- string messageBody = $"Message {i}";
- var message = new Message(Encoding.UTF8.GetBytes(messageBody));
-
- // Write the body of the message to the console.
- Console.WriteLine($"Sending message: {messageBody}");
-
- // Send the message to the queue.
- await queueClient.SendAsync(message);
- }
- }
- catch (Exception exception)
- {
- Console.WriteLine($"{DateTime.Now} :: Exception: {exception.Message}");
- }
- }
- ```
-
-Here is what your *Program.cs* file should look like.
-
-```csharp
-namespace CoreSenderApp
-{
- using System;
- using System.Text;
- using System.Threading;
- using System.Threading.Tasks;
- using Microsoft.Azure.ServiceBus;
-
- class Program
- {
- // Connection String for the namespace can be obtained from the Azure portal under the
- // 'Shared Access policies' section.
- const string ServiceBusConnectionString = "<your_connection_string>";
- const string QueueName = "<your_queue_name>";
- static IQueueClient queueClient;
-
- public static async Task Main(string[] args)
- {
- const int numberOfMessages = 10;
- queueClient = new QueueClient(ServiceBusConnectionString, QueueName);
-
- Console.WriteLine("======================================================");
- Console.WriteLine("Press ENTER key to exit after sending all the messages.");
- Console.WriteLine("======================================================");
-
- // Send messages.
- await SendMessagesAsync(numberOfMessages);
-
- Console.ReadKey();
-
- await queueClient.CloseAsync();
- }
-
- static async Task SendMessagesAsync(int numberOfMessagesToSend)
- {
- try
- {
- for (var i = 0; i < numberOfMessagesToSend; i++)
- {
- // Create a new message to send to the queue
- string messageBody = $"Message {i}";
- var message = new Message(Encoding.UTF8.GetBytes(messageBody));
-
- // Write the body of the message to the console
- Console.WriteLine($"Sending message: {messageBody}");
-
- // Send the message to the queue
- await queueClient.SendAsync(message);
- }
- }
- catch (Exception exception)
- {
- Console.WriteLine($"{DateTime.Now} :: Exception: {exception.Message}");
- }
- }
- }
-}
-```
-
-Run the program and check the Azure portal.
-
-Select the name of your queue in the namespace **Overview** window to display queue **Essentials**.
-
-![Messages received with count and size][queue-message]
-
-The **Active message count** value for the queue is now **10**. Each time you run this sender app without retrieving the messages, this value increases by 10.
-
-The current size of the queue increments the **CURRENT** value in **Essentials** each time the app adds messages to the queue.
-
-The next section describes how to retrieve these messages.
-
-## Receive messages
-
-To receive the messages you sent, create another **Console App (.NET Core)** application. Install the **Microsoft.Azure.ServiceBus** NuGet package, as you did for the sender application.
-
-### Write code to receive messages from the queue
-
-1. In *Program.cs*, add the following `using` statements at the top of the namespace definition, before the class declaration:
-
- ```csharp
- using System;
- using System.Text;
- using System.Threading;
- using System.Threading.Tasks;
- using Microsoft.Azure.ServiceBus;
- ```
-
-1. In the `Program` class, declare the following variables:
-
- ```csharp
- const string ServiceBusConnectionString = "<your_connection_string>";
- const string QueueName = "<your_queue_name>";
- static IQueueClient queueClient;
- ```
-
- Enter your connection string for the namespace as the `ServiceBusConnectionString` variable. Enter your queue name.
-
-1. Replace the `Main()` method with the following code:
-
- ```csharp
- static void Main(string[] args)
- {
- MainAsync().GetAwaiter().GetResult();
- }
-
- static async Task MainAsync()
- {
- queueClient = new QueueClient(ServiceBusConnectionString, QueueName);
-
- Console.WriteLine("======================================================");
- Console.WriteLine("Press ENTER key to exit after receiving all the messages.");
- Console.WriteLine("======================================================");
-
- // Register QueueClient's MessageHandler and receive messages in a loop
- RegisterOnMessageHandlerAndReceiveMessages();
-
- Console.ReadKey();
-
- await queueClient.CloseAsync();
- }
- ```
-
-1. Directly after the `MainAsync()` method, add the following method, which registers the message handler and receives the messages sent by the sender application:
-
- ```csharp
- static void RegisterOnMessageHandlerAndReceiveMessages()
- {
- // Configure the message handler options in terms of exception handling, number of concurrent messages to deliver, etc.
- var messageHandlerOptions = new MessageHandlerOptions(ExceptionReceivedHandler)
- {
- // Maximum number of concurrent calls to the callback ProcessMessagesAsync(), set to 1 for simplicity.
- // Set it according to how many messages the application wants to process in parallel.
- MaxConcurrentCalls = 1,
-
- // Indicates whether the message pump should automatically complete the messages after returning from user callback.
- // False below indicates the complete operation is handled by the user callback as in ProcessMessagesAsync().
- AutoComplete = false
- };
-
- // Register the function that processes messages.
- queueClient.RegisterMessageHandler(ProcessMessagesAsync, messageHandlerOptions);
- }
- ```
-
-1. Directly after the previous method, add the following `ProcessMessagesAsync()` method to process the received messages:
-
- ```csharp
- static async Task ProcessMessagesAsync(Message message, CancellationToken token)
- {
- // Process the message.
- Console.WriteLine($"Received message: SequenceNumber:{message.SystemProperties.SequenceNumber} Body:{Encoding.UTF8.GetString(message.Body)}");
-
- // Complete the message so that it is not received again.
- // This can be done only if the queue Client is created in ReceiveMode.PeekLock mode (which is the default).
- await queueClient.CompleteAsync(message.SystemProperties.LockToken);
-
- // Note: Use the cancellationToken passed as necessary to determine if the queueClient has already been closed.
- // If queueClient has already been closed, you can choose to not call CompleteAsync() or AbandonAsync() etc.
- // to avoid unnecessary exceptions.
- }
- ```
-
-1. Finally, add the following method to handle any exceptions that might occur:
-
- ```csharp
- // Use this handler to examine the exceptions received on the message pump.
- static Task ExceptionReceivedHandler(ExceptionReceivedEventArgs exceptionReceivedEventArgs)
- {
- Console.WriteLine($"Message handler encountered an exception {exceptionReceivedEventArgs.Exception}.");
- var context = exceptionReceivedEventArgs.ExceptionReceivedContext;
- Console.WriteLine("Exception context for troubleshooting:");
- Console.WriteLine($"- Endpoint: {context.Endpoint}");
- Console.WriteLine($"- Entity Path: {context.EntityPath}");
- Console.WriteLine($"- Executing Action: {context.Action}");
- return Task.CompletedTask;
- }
- ```
-
-Here is what your *Program.cs* file should look like:
-
-```csharp
-namespace CoreReceiverApp
-{
- using System;
- using System.Text;
- using System.Threading;
- using System.Threading.Tasks;
- using Microsoft.Azure.ServiceBus;
-
- class Program
- {
- // Connection String for the namespace can be obtained from the Azure portal under the
- // 'Shared Access policies' section.
- const string ServiceBusConnectionString = "<your_connection_string>";
- const string QueueName = "<your_queue_name>";
- static IQueueClient queueClient;
-
- static void Main(string[] args)
- {
- MainAsync().GetAwaiter().GetResult();
- }
-
- static async Task MainAsync()
- {
- queueClient = new QueueClient(ServiceBusConnectionString, QueueName);
-
- Console.WriteLine("======================================================");
- Console.WriteLine("Press ENTER key to exit after receiving all the messages.");
- Console.WriteLine("======================================================");
-
- // Register QueueClient's MessageHandler and receive messages in a loop
- RegisterOnMessageHandlerAndReceiveMessages();
-
- Console.ReadKey();
-
- await queueClient.CloseAsync();
- }
-
- static void RegisterOnMessageHandlerAndReceiveMessages()
- {
- // Configure the MessageHandler Options in terms of exception handling, number of concurrent messages to deliver etc.
- var messageHandlerOptions = new MessageHandlerOptions(ExceptionReceivedHandler)
- {
- // Maximum number of Concurrent calls to the callback `ProcessMessagesAsync`, set to 1 for simplicity.
- // Set it according to how many messages the application wants to process in parallel.
- MaxConcurrentCalls = 1,
-
- // Indicates whether MessagePump should automatically complete the messages after returning from User Callback.
- // False below indicates the Complete will be handled by the User Callback as in `ProcessMessagesAsync` below.
- AutoComplete = false
- };
-
- // Register the function that will process messages
- queueClient.RegisterMessageHandler(ProcessMessagesAsync, messageHandlerOptions);
- }
-
- static async Task ProcessMessagesAsync(Message message, CancellationToken token)
- {
- // Process the message
- Console.WriteLine($"Received message: SequenceNumber:{message.SystemProperties.SequenceNumber} Body:{Encoding.UTF8.GetString(message.Body)}");
-
- // Complete the message so that it is not received again.
- // This can be done only if the queueClient is created in ReceiveMode.PeekLock mode (which is default).
- await queueClient.CompleteAsync(message.SystemProperties.LockToken);
-
- // Note: Use the cancellationToken passed as necessary to determine if the queueClient has already been closed.
- // If queueClient has already been Closed, you may chose to not call CompleteAsync() or AbandonAsync() etc. calls
- // to avoid unnecessary exceptions.
- }
-
- static Task ExceptionReceivedHandler(ExceptionReceivedEventArgs exceptionReceivedEventArgs)
- {
- Console.WriteLine($"Message handler encountered an exception {exceptionReceivedEventArgs.Exception}.");
- var context = exceptionReceivedEventArgs.ExceptionReceivedContext;
- Console.WriteLine("Exception context for troubleshooting:");
- Console.WriteLine($"- Endpoint: {context.Endpoint}");
- Console.WriteLine($"- Entity Path: {context.EntityPath}");
- Console.WriteLine($"- Executing Action: {context.Action}");
- return Task.CompletedTask;
- }
- }
-}
-```
-
-Run the program, and check the portal again. The **Active message count** and **CURRENT** values are now **0**.
-
-![Queue after messages have been received][queue-message-receive]
-
-Congratulations! You've now created a queue, sent a set of messages to that queue, and received those messages from the same queue.
-
-> [!NOTE]
-> You can manage Service Bus resources with [Service Bus Explorer](https://github.com/paolosalvatori/ServiceBusExplorer/). The Service Bus Explorer allows users to easily connect to a Service Bus namespace and administer messaging entities. The tool provides advanced features like import/export functionality or the ability to test topics, queues, subscriptions, relay services, notification hubs, and event hubs.
-
-## Next steps
-
-Check out our [GitHub repository with samples](https://github.com/Azure/azure-service-bus/tree/master/samples) that demonstrate some of the more advanced features of Service Bus messaging.
-
-<!--Image references-->
-
-[nuget-pkg]: ./media/service-bus-dotnet-get-started-with-queues-legacy/nuget-package.png
-[queue-message]: ./media/service-bus-dotnet-get-started-with-queues-legacy/messages-sent-to-essentials.png
-[queue-message-receive]: ./media/service-bus-dotnet-get-started-with-queues-legacy/queue-message-receive-in-essentials.png
-
service-bus-messaging Service Bus Dotnet How To Use Topics Subscriptions Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-dotnet-how-to-use-topics-subscriptions-legacy.md
- Title: Use Azure Service Bus topics and subscriptions with .NET (old version)
-description: Write a C# .NET Core console application that uses Service Bus messaging topics and subscriptions.
- Previously updated : 07/27/2021---
-# Use Service Bus topics and subscriptions with .NET (old package)
-This article covers the following steps:
-
-1. Write a .NET Core console application to send a set of messages to the topic.
-2. Write a .NET Core console application to receive those messages from the subscription.
-
-> [!WARNING]
-> This article uses the old Microsoft.Azure.ServiceBus package. For an article that uses the latest Azure.Messaging.ServiceBus package, see [Send and receive messages using the Azure.Messaging.ServiceBus package](service-bus-dotnet-how-to-use-topics-subscriptions.md). To move your application from using the old library to new one, see the [Guide to migrate from Microsoft.Azure.ServiceBus to Azure.Messaging.ServiceBus](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/servicebus/Azure.Messaging.ServiceBus/MigrationGuide.md).
-
-## Prerequisites
-
-1. An Azure subscription. To complete steps in this article, you need an Azure account. You can activate your [Visual Studio or MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?WT.mc_id=A85619ABF) or sign-up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A85619ABF).
-2. Follow steps in the [Quickstart: Use the Azure portal to create a Service Bus topic and subscriptions to the topic](service-bus-quickstart-topics-subscriptions-portal.md) to do the following tasks:
- 1. Create a Service Bus **namespace**.
- 2. Get the **connection string**.
- 3. Create a **topic** in the namespace.
- 4. Create **one subscription** to the topic in the namespace.
-3. [Visual Studio 2017 Update 3 (version 15.3, 26730.01)](https://www.visualstudio.com/vs) or later.
-4. [NET Core SDK](https://dotnet.microsoft.com/download), version 2.0 or later.
-
-## Send messages to the topic
-
-To send messages to the topic, write a C# console application using Visual Studio.
-
-### Create a console application
-
-Launch Visual Studio and create a new **Console App (.NET Core)** project.
-
-### Add the Service Bus NuGet package
-
-1. Right-click the newly created project and select **Manage NuGet Packages**.
-2. Click the **Browse** tab, search for **[Microsoft.Azure.ServiceBus](https://www.nuget.org/packages/Microsoft.Azure.ServiceBus/)**, and then select the **Microsoft.Azure.ServiceBus** item. Click **Install** to complete the installation, then close this dialog box.
-
- ![Select a NuGet package][nuget-pkg]
-
-### Write code to send messages to the topic
-
-1. In Program.cs, add the following `using` statements at the top of the namespace definition, before the class declaration:
-
- ```csharp
- using System.Text;
- using System.Threading;
- using System.Threading.Tasks;
- using Microsoft.Azure.ServiceBus;
- ```
-
-2. Within the `Program` class, declare the following variables. Set the `ServiceBusConnectionString` variable to the connection string that you obtained when creating the namespace, and set `TopicName` to the name that you used when creating the topic:
-
- ```csharp
- const string ServiceBusConnectionString = "<your_connection_string>";
- const string TopicName = "<your_topic_name>";
- static ITopicClient topicClient;
- ```
-
-3. Replace the `Main()` method with the following **async** `Main` method that sends messages asynchronously using the SendMessagesAsync method that you will add in the next step.
-
- ```csharp
- public static async Task Main(string[] args)
- {
- const int numberOfMessages = 10;
- topicClient = new TopicClient(ServiceBusConnectionString, TopicName);
-
- Console.WriteLine("======================================================");
- Console.WriteLine("Press ENTER key to exit after sending all the messages.");
- Console.WriteLine("======================================================");
-
- // Send messages.
- await SendMessagesAsync(numberOfMessages);
-
- Console.ReadKey();
-
- await topicClient.CloseAsync();
- }
- ```
-5. Directly after the `Main` method, add the following `SendMessagesAsync()` method that performs the work of sending the number of messages specified by `numberOfMessagesToSend` (currently set to 10):
-
- ```csharp
- static async Task SendMessagesAsync(int numberOfMessagesToSend)
- {
- try
- {
- for (var i = 0; i < numberOfMessagesToSend; i++)
- {
- // Create a new message to send to the topic.
- string messageBody = $"Message {i}";
- var message = new Message(Encoding.UTF8.GetBytes(messageBody));
-
- // Write the body of the message to the console.
- Console.WriteLine($"Sending message: {messageBody}");
-
- // Send the message to the topic.
- await topicClient.SendAsync(message);
- }
- }
- catch (Exception exception)
- {
- Console.WriteLine($"{DateTime.Now} :: Exception: {exception.Message}");
- }
- }
- ```
-
-5. Here is what your sender Program.cs file should look like.
-
- ```csharp
- namespace CoreSenderApp
- {
- using System;
- using System.Text;
- using System.Threading;
- using System.Threading.Tasks;
- using Microsoft.Azure.ServiceBus;
-
- class Program
- {
- const string ServiceBusConnectionString = "<your_connection_string>";
- const string TopicName = "<your_topic_name>";
- static ITopicClient topicClient;
-
- public static async Task Main(string[] args)
- {
- const int numberOfMessages = 10;
- topicClient = new TopicClient(ServiceBusConnectionString, TopicName);
-
- Console.WriteLine("======================================================");
- Console.WriteLine("Press ENTER key to exit after sending all the messages.");
- Console.WriteLine("======================================================");
-
- // Send messages.
- await SendMessagesAsync(numberOfMessages);
-
- Console.ReadKey();
-
- await topicClient.CloseAsync();
- }
-
- static async Task SendMessagesAsync(int numberOfMessagesToSend)
- {
- try
- {
- for (var i = 0; i < numberOfMessagesToSend; i++)
- {
- // Create a new message to send to the topic
- string messageBody = $"Message {i}";
- var message = new Message(Encoding.UTF8.GetBytes(messageBody));
-
- // Write the body of the message to the console
- Console.WriteLine($"Sending message: {messageBody}");
-
- // Send the message to the topic
- await topicClient.SendAsync(message);
- }
- }
- catch (Exception exception)
- {
- Console.WriteLine($"{DateTime.Now} :: Exception: {exception.Message}");
- }
- }
- }
- }
- ```
-
-6. Run the program, and check the Azure portal: select the name of your topic in the namespace **Overview** window to display the topic **Essentials** screen. In the subscription listed near the bottom of the window, notice that the **Message Count** value for the subscription is now **10**. Each time you run the sender application without retrieving the messages (as described in the next section), this value increases by 10. Also note that the **Current** size value in the **Essentials** window increases each time the app adds messages to the topic.
-
- ![Message size][topic-message]
-
-## Receive messages from the subscription
-
-To receive the messages you sent, create another .NET Core console application and install the **Microsoft.Azure.ServiceBus** NuGet package, similar to the previous sender application.
-
-### Write code to receive messages from the subscription
-
-1. In Program.cs, add the following `using` statements at the top of the namespace definition, before the class declaration:
-
- ```csharp
- using System.Text;
- using System.Threading;
- using System.Threading.Tasks;
- using Microsoft.Azure.ServiceBus;
- ```
-
-2. Within the `Program` class, declare the following variables. Set the `ServiceBusConnectionString` variable to the connection string that you obtained when creating the namespace, set `TopicName` to the name that you used when creating the topic, and set `SubscriptionName` to the name that you used when creating the subscription to the topic:
-
- ```csharp
- const string ServiceBusConnectionString = "<your_connection_string>";
- const string TopicName = "<your_topic_name>";
- const string SubscriptionName = "<your_subscription_name>";
- static ISubscriptionClient subscriptionClient;
- ```
-
-3. Replace the `Main()` method with the following **async** `Main` method. It calls the `RegisterOnMessageHandlerAndReceiveMessages()` method that you will add in the next step.
-
- ```csharp
- public static async Task Main(string[] args)
- {
- subscriptionClient = new SubscriptionClient(ServiceBusConnectionString, TopicName, SubscriptionName);
-
- Console.WriteLine("======================================================");
- Console.WriteLine("Press ENTER key to exit after receiving all the messages.");
- Console.WriteLine("======================================================");
-
- // Register subscription message handler and receive messages in a loop
- RegisterOnMessageHandlerAndReceiveMessages();
-
- Console.ReadKey();
-
- await subscriptionClient.CloseAsync();
- }
- ```
-4. Directly after the `Main()` method, add the following method that registers the message handler and receives the messages sent by the sender application:
-
- ```csharp
- static void RegisterOnMessageHandlerAndReceiveMessages()
- {
- // Configure the message handler options in terms of exception handling, number of concurrent messages to deliver, etc.
- var messageHandlerOptions = new MessageHandlerOptions(ExceptionReceivedHandler)
- {
- // Maximum number of concurrent calls to the callback ProcessMessagesAsync(), set to 1 for simplicity.
- // Set it according to how many messages the application wants to process in parallel.
- MaxConcurrentCalls = 1,
-
- // Indicates whether the message pump should automatically complete the messages after returning from user callback.
- // False below indicates the complete operation is handled by the user callback as in ProcessMessagesAsync().
- AutoComplete = false
- };
-
- // Register the function that processes messages.
- subscriptionClient.RegisterMessageHandler(ProcessMessagesAsync, messageHandlerOptions);
- }
- ```
-
-5. Directly after the previous method, add the following `ProcessMessagesAsync()` method to process the received messages:
-
- ```csharp
- static async Task ProcessMessagesAsync(Message message, CancellationToken token)
- {
- // Process the message.
- Console.WriteLine($"Received message: SequenceNumber:{message.SystemProperties.SequenceNumber} Body:{Encoding.UTF8.GetString(message.Body)}");
-
- // Complete the message so that it is not received again.
- // This can be done only if the subscriptionClient is created in ReceiveMode.PeekLock mode (which is the default).
- await subscriptionClient.CompleteAsync(message.SystemProperties.LockToken);
-
- // Note: Use the cancellationToken passed as necessary to determine if the subscriptionClient has already been closed.
- // If subscriptionClient has already been closed, you can choose to not call CompleteAsync() or AbandonAsync() etc.
- // to avoid unnecessary exceptions.
- }
- ```
-
-6. Finally, add the following method to handle any exceptions that might occur:
-
- ```csharp
- // Use this handler to examine the exceptions received on the message pump.
- static Task ExceptionReceivedHandler(ExceptionReceivedEventArgs exceptionReceivedEventArgs)
- {
- Console.WriteLine($"Message handler encountered an exception {exceptionReceivedEventArgs.Exception}.");
- var context = exceptionReceivedEventArgs.ExceptionReceivedContext;
- Console.WriteLine("Exception context for troubleshooting:");
- Console.WriteLine($"- Endpoint: {context.Endpoint}");
- Console.WriteLine($"- Entity Path: {context.EntityPath}");
- Console.WriteLine($"- Executing Action: {context.Action}");
- return Task.CompletedTask;
- }
- ```
-
-7. Here is what your receiver Program.cs file should look like:
-
- ```csharp
- namespace CoreReceiverApp
- {
- using System;
- using System.Text;
- using System.Threading;
- using System.Threading.Tasks;
- using Microsoft.Azure.ServiceBus;
-
- class Program
- {
- const string ServiceBusConnectionString = "<your_connection_string>";
- const string TopicName = "<your_topic_name>";
- const string SubscriptionName = "<your_subscription_name>";
- static ISubscriptionClient subscriptionClient;
-
- public static async Task Main(string[] args)
- {
- subscriptionClient = new SubscriptionClient(ServiceBusConnectionString, TopicName, SubscriptionName);
-
- Console.WriteLine("======================================================");
- Console.WriteLine("Press ENTER key to exit after receiving all the messages.");
- Console.WriteLine("======================================================");
-
- // Register subscription message handler and receive messages in a loop
- RegisterOnMessageHandlerAndReceiveMessages();
-
- Console.ReadKey();
-
- await subscriptionClient.CloseAsync();
- }
-
- static void RegisterOnMessageHandlerAndReceiveMessages()
- {
- // Configure the message handler options in terms of exception handling, number of concurrent messages to deliver, etc.
- var messageHandlerOptions = new MessageHandlerOptions(ExceptionReceivedHandler)
- {
- // Maximum number of concurrent calls to the callback ProcessMessagesAsync(), set to 1 for simplicity.
- // Set it according to how many messages the application wants to process in parallel.
- MaxConcurrentCalls = 1,
-
- // Indicates whether MessagePump should automatically complete the messages after returning from User Callback.
- // False below indicates the Complete will be handled by the User Callback as in `ProcessMessagesAsync` below.
- AutoComplete = false
- };
-
- // Register the function that processes messages.
- subscriptionClient.RegisterMessageHandler(ProcessMessagesAsync, messageHandlerOptions);
- }
-
- static async Task ProcessMessagesAsync(Message message, CancellationToken token)
- {
- // Process the message.
- Console.WriteLine($"Received message: SequenceNumber:{message.SystemProperties.SequenceNumber} Body:{Encoding.UTF8.GetString(message.Body)}");
-
- // Complete the message so that it is not received again.
- // This can be done only if the subscriptionClient is created in ReceiveMode.PeekLock mode (which is the default).
- await subscriptionClient.CompleteAsync(message.SystemProperties.LockToken);
-
- // Note: Use the cancellationToken passed as necessary to determine if the subscriptionClient has already been closed.
- // If subscriptionClient has already been closed, you can choose to not call CompleteAsync() or AbandonAsync() etc.
- // to avoid unnecessary exceptions.
- }
-
- static Task ExceptionReceivedHandler(ExceptionReceivedEventArgs exceptionReceivedEventArgs)
- {
- Console.WriteLine($"Message handler encountered an exception {exceptionReceivedEventArgs.Exception}.");
- var context = exceptionReceivedEventArgs.ExceptionReceivedContext;
- Console.WriteLine("Exception context for troubleshooting:");
- Console.WriteLine($"- Endpoint: {context.Endpoint}");
- Console.WriteLine($"- Entity Path: {context.EntityPath}");
- Console.WriteLine($"- Executing Action: {context.Action}");
- return Task.CompletedTask;
- }
- }
- }
- ```
-8. Run the program, and check the portal again. Notice that the **Message Count** and **Current** values are now **0**.
-
- ![Topic length][topic-message-receive]
-
-Congratulations! Using the .NET Standard library, you have now created a topic and subscription, sent 10 messages, and received those messages.
-
-> [!NOTE]
-> You can manage Service Bus resources with [Service Bus Explorer](https://github.com/paolosalvatori/ServiceBusExplorer/). Service Bus Explorer allows users to connect to a Service Bus namespace and administer messaging entities in an easy manner. The tool provides advanced features like import/export functionality and the ability to test topics, queues, subscriptions, relay services, notification hubs, and event hubs.
-
-## Next steps
-
-Check out the Service Bus [GitHub repository with samples](https://github.com/Azure/azure-service-bus/tree/master/samples) that demonstrate some of the more advanced features of Service Bus messaging.
-
-<!--Image references-->
-
-[nuget-pkg]: ./media/service-bus-dotnet-how-to-use-topics-subscriptions/nuget-package.png
-[topic-message]: ./media/service-bus-dotnet-how-to-use-topics-subscriptions/topic-message.png
-[topic-message-receive]: ./media/service-bus-dotnet-how-to-use-topics-subscriptions/topic-message-receive.png
-[createtopic1]: ./media/service-bus-dotnet-how-to-use-topics-subscriptions/create-topic1.png
-[createtopic2]: ./media/service-bus-dotnet-how-to-use-topics-subscriptions/create-topic2.png
-[createtopic3]: ./media/service-bus-dotnet-how-to-use-topics-subscriptions/create-topic3.png
-[createtopic4]: ./media/service-bus-dotnet-how-to-use-topics-subscriptions/create-topic4.png
-[github-samples]: https://github.com/Azure-Samples/azure-servicebus-messaging-samples
-[azure-portal]: https://portal.azure.com
service-bus-messaging Service Bus Java How To Use Queues Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-java-how-to-use-queues-legacy.md
- Title: Use Azure Service Bus queues with Java (old version)
-description: In this article, you learn how to create Java applications to send messages to and receive messages from an Azure Service Bus queue.
Previously updated: 07/27/2021
-# Use Azure Service Bus queues with Java to send and receive messages (old package)
-
-In this article, you learn how to create Java applications to send messages to and receive messages from an Azure Service Bus queue.
-
-> [!WARNING]
-> This article uses the old `azure-servicebus` packages. For an article that uses the latest `azure-messaging-servicebus` package, see [Send and receive messages using `azure-messaging-servicebus`](service-bus-java-how-to-use-queues.md).
--
-## Prerequisites
-1. An Azure subscription. To complete steps in this article, you need an Azure account. You can activate your [MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/credit-for-visual-studio-subscribers/?WT.mc_id=A85619ABF) or sign up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A85619ABF).
-2. If you don't have a queue to work with, follow steps in the [Use Azure portal to create a Service Bus queue](service-bus-quickstart-portal.md) article to create a queue.
- 1. Read the quick **overview** of Service Bus **queues**.
- 2. Create a Service Bus **namespace**.
- 3. Get the **connection string**.
- 4. Create a Service Bus **queue**.
-3. Install [Azure SDK for Java][Azure SDK for Java].
--
-## Configure your application to use Service Bus
-Make sure you have installed the [Azure SDK for Java][Azure SDK for Java] before building this sample.
-
-If you are using Eclipse, you can install the [Azure Toolkit for Eclipse][Azure Toolkit for Eclipse] that includes the Azure SDK for Java. You can then add the **Microsoft Azure Libraries for Java** to your project. If you are using IntelliJ, see [Install the Azure Toolkit for IntelliJ](/azure/developer/java/toolkit-for-intellij/installation).
-
-![Add Microsoft Azure Libraries for Java to your Eclipse project](./media/service-bus-java-how-to-use-queues/eclipse-azure-libraries-java.png)
--
-Add the following `import` statements to the top of the Java file:
-
-```java
-// Include the following imports to use Service Bus APIs
-import com.google.gson.reflect.TypeToken;
-import com.microsoft.azure.servicebus.*;
-import com.microsoft.azure.servicebus.primitives.ConnectionStringBuilder;
-import com.google.gson.Gson;
-
-import static java.nio.charset.StandardCharsets.*;
-
-import java.time.Duration;
-import java.util.*;
-import java.util.concurrent.*;
-
-import org.apache.commons.cli.*;
-
-```
-
-## Send messages to a queue
-To send messages to a Service Bus queue, your application instantiates a **QueueClient** object and sends messages asynchronously. The following code shows how to send a message to a queue that was created through the portal.
-
-```java
-public void run() throws Exception {
-    // Create a QueueClient instance and then asynchronously send messages.
-    // Close the sender once all sends have completed.
-    QueueClient sendClient = new QueueClient(new ConnectionStringBuilder(ConnectionString, QueueName), ReceiveMode.PEEKLOCK);
-    this.sendMessagesAsync(sendClient).thenRunAsync(() -> sendClient.closeAsync()).get();
-}
-
- CompletableFuture<Void> sendMessagesAsync(QueueClient sendClient) {
- List<HashMap<String, String>> data =
- GSON.fromJson(
- "[" +
- "{'name' = 'Einstein', 'firstName' = 'Albert'}," +
- "{'name' = 'Heisenberg', 'firstName' = 'Werner'}," +
- "{'name' = 'Curie', 'firstName' = 'Marie'}," +
- "{'name' = 'Hawking', 'firstName' = 'Steven'}," +
- "{'name' = 'Newton', 'firstName' = 'Isaac'}," +
- "{'name' = 'Bohr', 'firstName' = 'Niels'}," +
- "{'name' = 'Faraday', 'firstName' = 'Michael'}," +
- "{'name' = 'Galilei', 'firstName' = 'Galileo'}," +
- "{'name' = 'Kepler', 'firstName' = 'Johannes'}," +
- "{'name' = 'Kopernikus', 'firstName' = 'Nikolaus'}" +
- "]",
- new TypeToken<List<HashMap<String, String>>>() {}.getType());
-
- List<CompletableFuture> tasks = new ArrayList<>();
- for (int i = 0; i < data.size(); i++) {
- final String messageId = Integer.toString(i);
- Message message = new Message(GSON.toJson(data.get(i), Map.class).getBytes(UTF_8));
- message.setContentType("application/json");
- message.setLabel("Scientist");
- message.setMessageId(messageId);
- message.setTimeToLive(Duration.ofMinutes(2));
- System.out.printf("\nMessage sending: Id = %s", message.getMessageId());
- tasks.add(
- sendClient.sendAsync(message).thenRunAsync(() -> {
- System.out.printf("\n\tMessage acknowledged: Id = %s", message.getMessageId());
- }));
- }
- return CompletableFuture.allOf(tasks.toArray(new CompletableFuture<?>[tasks.size()]));
- }
-
-```
-
-Messages sent to, and received from, Service Bus queues are instances of the [Message](/java/api/com.microsoft.azure.servicebus.message) class. Message objects have a set of standard properties (such as Label and TimeToLive), a dictionary that is used to hold custom application-specific properties, and a body of arbitrary application data. An application can set the body of the message by passing any serializable object into the constructor of the Message, and the appropriate serializer will then be used to serialize the object. Alternatively, you can provide a **java.io.InputStream** object.
--
-Service Bus queues support a maximum message size of 256 KB in the [Standard tier](service-bus-premium-messaging.md) and 100 MB in the [Premium tier](service-bus-premium-messaging.md). The header, which includes the standard and custom application properties, can have
-a maximum size of 64 KB. There is no limit on the number of messages
-held in a queue but there is a cap on the total size of the messages
-held by a queue. This queue size is defined at creation time, with an
-upper limit of 5 GB.
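-As an illustration of these limits, a sender could validate the serialized body size before sending. The following is a minimal sketch, not SDK behavior: the constants come from the tier figures quoted above, and because the service also counts the header and properties against the limit, the check should be treated as approximate.
-
-```java
-import java.nio.charset.StandardCharsets;
-
-class BodySizeCheck {
-    // Tier body-size figures quoted in this article (approximate; the service
-    // also counts headers and properties against the enforced limit).
-    static final long STANDARD_TIER_LIMIT_BYTES = 256L * 1024;        // 256 KB
-    static final long PREMIUM_TIER_LIMIT_BYTES  = 100L * 1024 * 1024; // 100 MB
-
-    // Returns true if the UTF-8 encoded body fits within the Standard tier figure.
-    static boolean fitsStandardTier(String messageBody) {
-        long bodyBytes = messageBody.getBytes(StandardCharsets.UTF_8).length;
-        return bodyBytes <= STANDARD_TIER_LIMIT_BYTES;
-    }
-}
-```
-
-A payload near the limit would be better sent via the Premium tier, split into smaller messages, or passed by reference (for example, a blob URL in the body).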
-
-## Receive messages from a queue
-The primary way to receive messages from a queue is to use a
-**QueueClient** object. Messages can be received in two
-different modes: **ReceiveAndDelete** and **PeekLock**.
-
-When using the **ReceiveAndDelete** mode, receive is a single-shot
-operation - that is, when Service Bus receives a read request for a
-message in a queue, it marks the message as being consumed and returns
-it to the application. **ReceiveAndDelete** mode is the simplest
-model and works best for scenarios in which an
-application can tolerate not processing a message in the event of a
-failure. To understand this, consider a scenario in which the consumer
-issues the receive request and then crashes before processing it.
-Because Service Bus has marked the message as being consumed, then
-when the application restarts and begins consuming messages again, it
-has missed the message that was consumed prior to the crash.
-
-In **PeekLock** mode, receive becomes a two-stage operation, which makes
-it possible to support applications that cannot tolerate missing
-messages. When Service Bus receives a request, it finds the next message
-to be consumed, locks it to prevent other consumers receiving it, and
-then returns it to the application. After the application finishes
-processing the message (or stores it reliably for future processing), it
-completes the second stage of the receive process by calling **complete()**
-on the received message. When Service Bus sees the **complete()** call, it
-marks the message as being consumed and removes it from the queue.
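-The two-stage receive just described can be modeled with a few lines of plain Java. This is purely a conceptual sketch of the protocol semantics, not the Service Bus SDK: a received message is locked rather than removed, completing it deletes it for good, and abandoning it (which also stands in for a lock timeout here) makes it receivable again.
-
-```java
-import java.util.ArrayDeque;
-import java.util.Deque;
-import java.util.HashMap;
-import java.util.Map;
-
-// Minimal in-memory model of PeekLock semantics (illustrative only).
-class PeekLockQueueModel {
-    private final Deque<String> available = new ArrayDeque<>();
-    private final Map<Integer, String> locked = new HashMap<>();
-    private int nextLockToken = 0;
-
-    void send(String body) { available.addLast(body); }
-
-    // Stage one: lock the next message and hand back a lock token.
-    Integer receive() {
-        String body = available.pollFirst();
-        if (body == null) return null;
-        locked.put(nextLockToken, body);
-        return nextLockToken++;
-    }
-
-    // Stage two: complete removes the message for good...
-    void complete(int token) { locked.remove(token); }
-
-    // ...while abandon (or a lock timeout) makes it receivable again.
-    void abandon(int token) {
-        String body = locked.remove(token);
-        if (body != null) available.addFirst(body);
-    }
-
-    int availableCount() { return available.size(); }
-}
-```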
-
-The following example demonstrates how messages can be received and
-processed using **PeekLock** mode. The example
-below uses the callback model with a registered message handler
-and processes messages as they arrive into our `TestQueue`. This mode
-calls **complete()** automatically as the callback returns normally and calls
-**abandon()** if the callback throws an exception.
-
-```java
-    public void run() throws Exception {
-        // Create a QueueClient instance for receiving using the connection string builder.
-        // We set the receive mode to "PeekLock", meaning the message is delivered
-        // under a lock and must be acknowledged ("completed") to be removed from the queue.
-        QueueClient receiveClient = new QueueClient(new ConnectionStringBuilder(ConnectionString, QueueName), ReceiveMode.PEEKLOCK);
-        this.registerReceiver(receiveClient);
-
-        // Wait for a key press, then shut down the receiver to close the receive loop.
-        System.in.read();
-        receiveClient.close();
-    }
- void registerReceiver(QueueClient queueClient) throws Exception {
- // register the RegisterMessageHandler callback
- queueClient.registerMessageHandler(new IMessageHandler() {
- // callback invoked when the message handler loop has obtained a message
- public CompletableFuture<Void> onMessageAsync(IMessage message) {
- // received message is passed to the callback
- if (message.getLabel() != null &&
- message.getContentType() != null &&
- message.getLabel().contentEquals("Scientist") &&
- message.getContentType().contentEquals("application/json")) {
-
- byte[] body = message.getBody();
- Map scientist = GSON.fromJson(new String(body, UTF_8), Map.class);
-
- System.out.printf(
- "\n\t\t\t\tMessage received: \n\t\t\t\t\t\tMessageId = %s, \n\t\t\t\t\t\tSequenceNumber = %s, \n\t\t\t\t\t\tEnqueuedTimeUtc = %s," +
- "\n\t\t\t\t\t\tExpiresAtUtc = %s, \n\t\t\t\t\t\tContentType = \"%s\", \n\t\t\t\t\t\tContent: [ firstName = %s, name = %s ]\n",
- message.getMessageId(),
- message.getSequenceNumber(),
- message.getEnqueuedTimeUtc(),
- message.getExpiresAtUtc(),
- message.getContentType(),
- scientist != null ? scientist.get("firstName") : "",
- scientist != null ? scientist.get("name") : "");
- }
- return CompletableFuture.completedFuture(null);
- }
-
- // callback invoked when the message handler has an exception to report
- public void notifyException(Throwable throwable, ExceptionPhase exceptionPhase) {
- System.out.printf("%s-%s%n", exceptionPhase, throwable.getMessage());
- }
- },
- // 1 concurrent call, messages are auto-completed, auto-renew duration
- new MessageHandlerOptions(1, true, Duration.ofMinutes(1)));
- }
-
-```
-
-## How to handle application crashes and unreadable messages
-Service Bus provides functionality to help you gracefully recover from
-errors in your application or difficulties processing a message. If a
-receiver application is unable to process the message for some reason,
-then it can call the **abandon()** method on the client object with the
-received message's lock token obtained via **getLockToken()**. This
-causes Service Bus to unlock the message within the queue and make
-it available to be received again, either by the same consuming
-application or by another consuming application.
-
-There is also a timeout associated with a message locked within the
-queue, and if the application fails to process the message before the
-lock timeout expires (for example, if the application crashes), then Service
-Bus unlocks the message automatically and makes it available to be
-received again.
-
-In the event that the application crashes after processing the message
-but before the **complete()** request is issued, then the message
-is redelivered to the application when it restarts. This is often
-called *At Least Once Processing*; that is, each message is
-processed at least once but in certain situations the same message may
-be redelivered. If the scenario cannot tolerate duplicate processing,
-then application developers should add additional logic to their
-application to handle duplicate message delivery. This is often achieved
-using the **getMessageId** method of the message, which remains
-constant across delivery attempts.
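-The duplicate-handling logic described above can be sketched as follows. This is a hedged illustration, not SDK code: it tracks the message IDs that have already been processed and skips redelivered copies. A real application would typically persist this set (or enable duplicate detection on the queue itself) so that it survives restarts.
-
-```java
-import java.util.Collections;
-import java.util.HashSet;
-import java.util.Set;
-
-// Illustrative at-least-once consumer: processes each message ID only once,
-// even if the same message is redelivered after a crash or lock timeout.
-class IdempotentProcessor {
-    private final Set<String> processedIds = Collections.synchronizedSet(new HashSet<>());
-    private int processedCount = 0;
-
-    // Returns true if the message was processed, false if it was a duplicate.
-    boolean process(String messageId, String body) {
-        if (!processedIds.add(messageId)) {
-            return false; // already seen: skip the redelivered copy
-        }
-        processedCount++; // stand-in for the real processing work
-        return true;
-    }
-
-    int processedCount() { return processedCount; }
-}
-```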
-
-> [!NOTE]
-> You can manage Service Bus resources with [Service Bus Explorer](https://github.com/paolosalvatori/ServiceBusExplorer/). Service Bus Explorer allows users to connect to a Service Bus namespace and administer messaging entities in an easy manner. The tool provides advanced features like import/export functionality and the ability to test topics, queues, subscriptions, relay services, notification hubs, and event hubs.
-
-## Next Steps
-You can find Java samples on GitHub in the [`azure-service-bus` repository](https://github.com/Azure/azure-service-bus/tree/master/samples/Java).
-
-[Azure SDK for Java]: /azure/developer/java/sdk/get-started
-[Azure Toolkit for Eclipse]: /azure/developer/java/toolkit-for-eclipse/installation
-[Queues, topics, and subscriptions]: service-bus-queues-topics-subscriptions.md
-[BrokeredMessage]: /dotnet/api/microsoft.servicebus.messaging.brokeredmessage
service-bus-messaging Service Bus Java How To Use Topics Subscriptions Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-java-how-to-use-topics-subscriptions-legacy.md
- Title: Use Azure Service Bus topics and subscriptions with Java
-description: In this article, you write Java code to send messages to an Azure Service Bus topic and then receive messages from subscriptions to that topic.
Previously updated: 07/27/2021
-# Use Service Bus topics and subscriptions with Java (old package)
-In this article, you write Java code to send messages to an Azure Service Bus topic and then receive messages from subscriptions to that topic.
-
-> [!WARNING]
-> This article uses the old `azure-servicebus` packages. For an article that uses the latest `azure-messaging-servicebus` package, see [Send and receive messages using `azure-messaging-servicebus`](service-bus-java-how-to-use-topics-subscriptions.md).
--
-## Prerequisites
-
-1. An Azure subscription. To complete steps in this article, you need an Azure account. You can activate your [Visual Studio or MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?WT.mc_id=A85619ABF) or sign up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A85619ABF).
-2. Follow steps in the [Quickstart: Use the Azure portal to create a Service Bus topic and subscriptions to the topic](service-bus-quickstart-topics-subscriptions-portal.md) to do the following tasks:
- 1. Create a Service Bus **namespace**.
- 2. Get the **connection string**.
- 3. Create a **topic** in the namespace.
- 4. Create **three subscriptions** to the topic in the namespace.
-3. [Azure SDK for Java][Azure SDK for Java].
-
-## Configure your application to use Service Bus
-Make sure you have installed the [Azure SDK for Java][Azure SDK for Java] before building this sample. If you are using Eclipse, you can install the [Azure Toolkit for Eclipse][Azure Toolkit for Eclipse] that includes the Azure SDK for Java. You can then add the **Microsoft Azure Libraries for Java** to your project:
-
-![Add Microsoft Azure Libraries for Java to your Eclipse project](media/service-bus-java-how-to-use-topics-subscriptions-legacy/eclipse-azure-libraries-java.png)
-
-You also need to add the following JARs to the Java Build Path:
-
-- gson-2.6.2.jar
-- commons-cli-1.4.jar
-- proton-j-0.21.0.jar
-
-Add a class with a **main** method, and then add the following `import` statements at the top of the Java file:
-
-```java
-import com.google.gson.reflect.TypeToken;
-import com.microsoft.azure.servicebus.*;
-import com.microsoft.azure.servicebus.primitives.ConnectionStringBuilder;
-import com.google.gson.Gson;
-import static java.nio.charset.StandardCharsets.*;
-import java.time.Duration;
-import java.util.*;
-import java.util.concurrent.*;
-import java.util.function.Function;
-import org.apache.commons.cli.*;
-import org.apache.commons.cli.DefaultParser;
-```
-
-## Send messages to a topic
-Update the **main** method to create a **TopicClient** object, and invoke a helper method that asynchronously sends sample messages to the Service Bus topic.
-
-> [!NOTE]
-> - Replace `<NameOfServiceBusNamespace>` with the name of your Service Bus namespace.
-> - Replace `<AccessKey>` with the access key for your namespace.
-
-```java
-public class MyServiceBusTopicClient {
-
- static final Gson GSON = new Gson();
-
- public static void main(String[] args) throws Exception, ServiceBusException {
-
- TopicClient sendClient;
- String connectionString = "Endpoint=sb://<NameOfServiceBusNamespace>.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=<AccessKey>";
- sendClient = new TopicClient(new ConnectionStringBuilder(connectionString, "BasicTopic"));
- sendMessagesAsync(sendClient).thenRunAsync(() -> sendClient.closeAsync()).get();
- }
-
- static CompletableFuture<Void> sendMessagesAsync(TopicClient sendClient) {
- List<HashMap<String, String>> data =
- GSON.fromJson(
- "[" +
- "{'name' = 'Einstein', 'firstName' = 'Albert'}," +
- "{'name' = 'Heisenberg', 'firstName' = 'Werner'}," +
- "{'name' = 'Curie', 'firstName' = 'Marie'}," +
- "{'name' = 'Hawking', 'firstName' = 'Steven'}," +
- "{'name' = 'Newton', 'firstName' = 'Isaac'}," +
- "{'name' = 'Bohr', 'firstName' = 'Niels'}," +
- "{'name' = 'Faraday', 'firstName' = 'Michael'}," +
- "{'name' = 'Galilei', 'firstName' = 'Galileo'}," +
- "{'name' = 'Kepler', 'firstName' = 'Johannes'}," +
- "{'name' = 'Kopernikus', 'firstName' = 'Nikolaus'}" +
- "]",
- new TypeToken<List<HashMap<String, String>>>() {
- }.getType());
-
- List<CompletableFuture> tasks = new ArrayList<>();
- for (int i = 0; i < data.size(); i++) {
- final String messageId = Integer.toString(i);
- Message message = new Message(GSON.toJson(data.get(i), Map.class).getBytes(UTF_8));
- message.setContentType("application/json");
- message.setLabel("Scientist");
- message.setMessageId(messageId);
- message.setTimeToLive(Duration.ofMinutes(2));
- System.out.printf("Message sending: Id = %s\n", message.getMessageId());
- tasks.add(
- sendClient.sendAsync(message).thenRunAsync(() -> {
- System.out.printf("\tMessage acknowledged: Id = %s\n", message.getMessageId());
- }));
- }
- return CompletableFuture.allOf(tasks.toArray(new CompletableFuture<?>[tasks.size()]));
- }
-}
-```
-
-Service Bus topics support a maximum message size of 256 KB in the [Standard tier](service-bus-premium-messaging.md) and 100 MB in the [Premium tier](service-bus-premium-messaging.md). The header, which includes the standard and custom application properties, can have a maximum size of 64 KB. There is no limit on the number of messages held in a topic but there is a limit on the total size of the messages
-held by a topic. This topic size is defined at creation time, with an upper limit of 5 GB.
-
-## Receive messages from a subscription
-Update the **main** method to create three **SubscriptionClient** objects for three subscriptions, and invoke a helper method that asynchronously receives messages from the Service Bus topic. The sample code assumes that you created a topic named **BasicTopic** and three subscriptions named **Subscription1**, **Subscription2**, and **Subscription3**. If you used different names for them, update the code before testing it.
-
-```java
-import com.microsoft.azure.servicebus.*;
-import com.microsoft.azure.servicebus.primitives.ConnectionStringBuilder;
-import com.microsoft.azure.servicebus.primitives.ServiceBusException;
-import com.google.gson.Gson;
-import static java.nio.charset.StandardCharsets.*;
-import java.time.Duration;
-import java.util.*;
-import java.util.concurrent.*;
-
-public class MyServiceBusSubscriptionClient {
- static final Gson GSON = new Gson();
-
- public static void main(String[] args) throws Exception, ServiceBusException {
- String connectionString = "Endpoint=sb://<NameOfServiceBusNamespace>.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=<AccessKey>";
-
- SubscriptionClient subscription1Client = new SubscriptionClient(new ConnectionStringBuilder(connectionString, "BasicTopic/subscriptions/Subscription1"), ReceiveMode.PEEKLOCK);
- SubscriptionClient subscription2Client = new SubscriptionClient(new ConnectionStringBuilder(connectionString, "BasicTopic/subscriptions/Subscription2"), ReceiveMode.PEEKLOCK);
- SubscriptionClient subscription3Client = new SubscriptionClient(new ConnectionStringBuilder(connectionString, "BasicTopic/subscriptions/Subscription3"), ReceiveMode.PEEKLOCK);
-
- ExecutorService executorService = Executors.newCachedThreadPool();
- registerMessageHandlerOnClient(subscription1Client, executorService);
- registerMessageHandlerOnClient(subscription2Client, executorService);
- registerMessageHandlerOnClient(subscription3Client, executorService);
- }
-
- static void registerMessageHandlerOnClient(SubscriptionClient receiveClient, ExecutorService executorService) throws Exception {
- // register the RegisterMessageHandler callback
- receiveClient.registerMessageHandler(
- new IMessageHandler() {
- // callback invoked when the message handler loop has obtained a message
- public CompletableFuture<Void> onMessageAsync(IMessage message) {
- // received message is passed to the callback
- if (message.getLabel() != null &&
- message.getContentType() != null &&
- message.getLabel().contentEquals("Scientist") &&
- message.getContentType().contentEquals("application/json")) {
-
- byte[] body = message.getBody();
- Map scientist = GSON.fromJson(new String(body, UTF_8), Map.class);
-
- System.out.printf(
- "\n\t\t\t\t%s Message received: \n\t\t\t\t\t\tMessageId = %s, \n\t\t\t\t\t\tSequenceNumber = %s, \n\t\t\t\t\t\tEnqueuedTimeUtc = %s," +
- "\n\t\t\t\t\t\tExpiresAtUtc = %s, \n\t\t\t\t\t\tContentType = \"%s\", \n\t\t\t\t\t\tContent: [ firstName = %s, name = %s ]\n",
- receiveClient.getEntityPath(),
- message.getMessageId(),
- message.getSequenceNumber(),
- message.getEnqueuedTimeUtc(),
- message.getExpiresAtUtc(),
- message.getContentType(),
- scientist != null ? scientist.get("firstName") : "",
- scientist != null ? scientist.get("name") : "");
- }
- return receiveClient.completeAsync(message.getLockToken());
- }
-
- // callback invoked when the message handler has an exception to report
- public void notifyException(Throwable throwable, ExceptionPhase exceptionPhase) {
- System.out.println(exceptionPhase + "-" + throwable.getMessage());
- }
- },
- // 1 concurrent call, messages are auto-completed, auto-renew duration
- new MessageHandlerOptions(1, false, Duration.ofMinutes(1)),
- executorService);
- }
-}
-```
-
-## Run the program
-Run the program. You should see output similar to the following:
-
-```console
-Message sending: Id = 0
-Message sending: Id = 1
-Message sending: Id = 2
-Message sending: Id = 3
-Message sending: Id = 4
-Message sending: Id = 5
-Message sending: Id = 6
-Message sending: Id = 7
-Message sending: Id = 8
-Message sending: Id = 9
- Message acknowledged: Id = 0
- Message acknowledged: Id = 9
- Message acknowledged: Id = 7
- Message acknowledged: Id = 8
- Message acknowledged: Id = 5
- Message acknowledged: Id = 6
- Message acknowledged: Id = 3
- Message acknowledged: Id = 2
- Message acknowledged: Id = 4
- Message acknowledged: Id = 1
-
- BasicTopic/subscriptions/Subscription1 Message received:
- MessageId = 0,
- SequenceNumber = 11,
- EnqueuedTimeUtc = 2018-10-29T18:58:12.442Z,
- ExpiresAtUtc = 2018-10-29T19:00:12.442Z,
- ContentType = "application/json",
- Content: [ firstName = Albert, name = Einstein ]
-
- BasicTopic/subscriptions/Subscription2 Message received:
- MessageId = 0,
- SequenceNumber = 11,
- EnqueuedTimeUtc = 2018-10-29T18:58:12.442Z,
- ExpiresAtUtc = 2018-10-29T19:00:12.442Z,
- ContentType = "application/json",
- Content: [ firstName = Albert, name = Einstein ]
-
- BasicTopic/subscriptions/Subscription1 Message received:
- MessageId = 9,
- SequenceNumber = 12,
- EnqueuedTimeUtc = 2018-10-29T18:58:12.520Z,
- ExpiresAtUtc = 2018-10-29T19:00:12.520Z,
- ContentType = "application/json",
- Content: [ firstName = Nikolaus, name = Kopernikus ]
-
- BasicTopic/subscriptions/Subscription1 Message received:
- MessageId = 8,
- SequenceNumber = 13,
- EnqueuedTimeUtc = 2018-10-29T18:58:12.520Z,
- ExpiresAtUtc = 2018-10-29T19:00:12.520Z,
- ContentType = "application/json",
- Content: [ firstName = Johannes, name = Kepler ]
-
- BasicTopic/subscriptions/Subscription3 Message received:
- MessageId = 0,
- SequenceNumber = 11,
- EnqueuedTimeUtc = 2018-10-29T18:58:12.442Z,
- ExpiresAtUtc = 2018-10-29T19:00:12.442Z,
- ContentType = "application/json",
- Content: [ firstName = Albert, name = Einstein ]
-
- BasicTopic/subscriptions/Subscription2 Message received:
- MessageId = 9,
- SequenceNumber = 12,
- EnqueuedTimeUtc = 2018-10-29T18:58:12.520Z,
- ExpiresAtUtc = 2018-10-29T19:00:12.520Z,
- ContentType = "application/json",
- Content: [ firstName = Nikolaus, name = Kopernikus ]
-
- BasicTopic/subscriptions/Subscription1 Message received:
- MessageId = 7,
- SequenceNumber = 14,
- EnqueuedTimeUtc = 2018-10-29T18:58:12.520Z,
- ExpiresAtUtc = 2018-10-29T19:00:12.520Z,
- ContentType = "application/json",
- Content: [ firstName = Galileo, name = Galilei ]
-
- BasicTopic/subscriptions/Subscription3 Message received:
- MessageId = 9,
- SequenceNumber = 12,
- EnqueuedTimeUtc = 2018-10-29T18:58:12.520Z,
- ExpiresAtUtc = 2018-10-29T19:00:12.520Z,
- ContentType = "application/json",
- Content: [ firstName = Nikolaus, name = Kopernikus ]
-
- BasicTopic/subscriptions/Subscription2 Message received:
- MessageId = 8,
- SequenceNumber = 13,
- EnqueuedTimeUtc = 2018-10-29T18:58:12.520Z,
- ExpiresAtUtc = 2018-10-29T19:00:12.520Z,
- ContentType = "application/json",
- Content: [ firstName = Johannes, name = Kepler ]
-
- BasicTopic/subscriptions/Subscription1 Message received:
- MessageId = 6,
- SequenceNumber = 15,
- EnqueuedTimeUtc = 2018-10-29T18:58:12.520Z,
- ExpiresAtUtc = 2018-10-29T19:00:12.520Z,
- ContentType = "application/json",
- Content: [ firstName = Michael, name = Faraday ]
-
- BasicTopic/subscriptions/Subscription3 Message received:
- MessageId = 8,
- SequenceNumber = 13,
- EnqueuedTimeUtc = 2018-10-29T18:58:12.520Z,
- ExpiresAtUtc = 2018-10-29T19:00:12.520Z,
- ContentType = "application/json",
- Content: [ firstName = Johannes, name = Kepler ]
-
- BasicTopic/subscriptions/Subscription2 Message received:
- MessageId = 7,
- SequenceNumber = 14,
- EnqueuedTimeUtc = 2018-10-29T18:58:12.520Z,
- ExpiresAtUtc = 2018-10-29T19:00:12.520Z,
- ContentType = "application/json",
- Content: [ firstName = Galileo, name = Galilei ]
-
- BasicTopic/subscriptions/Subscription1 Message received:
- MessageId = 5,
- SequenceNumber = 16,
- EnqueuedTimeUtc = 2018-10-29T18:58:12.520Z,
- ExpiresAtUtc = 2018-10-29T19:00:12.520Z,
- ContentType = "application/json",
- Content: [ firstName = Niels, name = Bohr ]
-
- BasicTopic/subscriptions/Subscription3 Message received:
- MessageId = 7,
- SequenceNumber = 14,
- EnqueuedTimeUtc = 2018-10-29T18:58:12.520Z,
- ExpiresAtUtc = 2018-10-29T19:00:12.520Z,
- ContentType = "application/json",
- Content: [ firstName = Galileo, name = Galilei ]
-
- BasicTopic/subscriptions/Subscription2 Message received:
- MessageId = 6,
- SequenceNumber = 15,
- EnqueuedTimeUtc = 2018-10-29T18:58:12.520Z,
- ExpiresAtUtc = 2018-10-29T19:00:12.520Z,
- ContentType = "application/json",
- Content: [ firstName = Michael, name = Faraday ]
-
- BasicTopic/subscriptions/Subscription1 Message received:
- MessageId = 4,
- SequenceNumber = 17,
- EnqueuedTimeUtc = 2018-10-29T18:58:12.520Z,
- ExpiresAtUtc = 2018-10-29T19:00:12.520Z,
- ContentType = "application/json",
- Content: [ firstName = Isaac, name = Newton ]
-
- BasicTopic/subscriptions/Subscription3 Message received:
- MessageId = 6,
- SequenceNumber = 15,
- EnqueuedTimeUtc = 2018-10-29T18:58:12.520Z,
- ExpiresAtUtc = 2018-10-29T19:00:12.520Z,
- ContentType = "application/json",
- Content: [ firstName = Michael, name = Faraday ]
-
- BasicTopic/subscriptions/Subscription2 Message received:
- MessageId = 5,
- SequenceNumber = 16,
- EnqueuedTimeUtc = 2018-10-29T18:58:12.520Z,
- ExpiresAtUtc = 2018-10-29T19:00:12.520Z,
- ContentType = "application/json",
- Content: [ firstName = Niels, name = Bohr ]
-
- BasicTopic/subscriptions/Subscription1 Message received:
- MessageId = 3,
- SequenceNumber = 18,
- EnqueuedTimeUtc = 2018-10-29T18:58:12.520Z,
- ExpiresAtUtc = 2018-10-29T19:00:12.520Z,
- ContentType = "application/json",
- Content: [ firstName = Steven, name = Hawking ]
-
- BasicTopic/subscriptions/Subscription3 Message received:
- MessageId = 5,
- SequenceNumber = 16,
- EnqueuedTimeUtc = 2018-10-29T18:58:12.520Z,
- ExpiresAtUtc = 2018-10-29T19:00:12.520Z,
- ContentType = "application/json",
- Content: [ firstName = Niels, name = Bohr ]
-
- BasicTopic/subscriptions/Subscription2 Message received:
- MessageId = 4,
- SequenceNumber = 17,
- EnqueuedTimeUtc = 2018-10-29T18:58:12.520Z,
- ExpiresAtUtc = 2018-10-29T19:00:12.520Z,
- ContentType = "application/json",
- Content: [ firstName = Isaac, name = Newton ]
-
- BasicTopic/subscriptions/Subscription1 Message received:
- MessageId = 2,
- SequenceNumber = 19,
- EnqueuedTimeUtc = 2018-10-29T18:58:12.520Z,
- ExpiresAtUtc = 2018-10-29T19:00:12.520Z,
- ContentType = "application/json",
- Content: [ firstName = Marie, name = Curie ]
-
- BasicTopic/subscriptions/Subscription3 Message received:
- MessageId = 4,
- SequenceNumber = 17,
- EnqueuedTimeUtc = 2018-10-29T18:58:12.520Z,
- ExpiresAtUtc = 2018-10-29T19:00:12.520Z,
- ContentType = "application/json",
- Content: [ firstName = Isaac, name = Newton ]
-
- BasicTopic/subscriptions/Subscription2 Message received:
- MessageId = 3,
- SequenceNumber = 18,
- EnqueuedTimeUtc = 2018-10-29T18:58:12.520Z,
- ExpiresAtUtc = 2018-10-29T19:00:12.520Z,
- ContentType = "application/json",
- Content: [ firstName = Steven, name = Hawking ]
-
- BasicTopic/subscriptions/Subscription1 Message received:
- MessageId = 1,
- SequenceNumber = 20,
- EnqueuedTimeUtc = 2018-10-29T18:58:12.520Z,
- ExpiresAtUtc = 2018-10-29T19:00:12.520Z,
- ContentType = "application/json",
- Content: [ firstName = Werner, name = Heisenberg ]
-
- BasicTopic/subscriptions/Subscription2 Message received:
- MessageId = 2,
- SequenceNumber = 19,
- EnqueuedTimeUtc = 2018-10-29T18:58:12.520Z,
- ExpiresAtUtc = 2018-10-29T19:00:12.520Z,
- ContentType = "application/json",
- Content: [ firstName = Marie, name = Curie ]
-
- BasicTopic/subscriptions/Subscription3 Message received:
- MessageId = 3,
- SequenceNumber = 18,
- EnqueuedTimeUtc = 2018-10-29T18:58:12.520Z,
- ExpiresAtUtc = 2018-10-29T19:00:12.520Z,
- ContentType = "application/json",
- Content: [ firstName = Steven, name = Hawking ]
-
- BasicTopic/subscriptions/Subscription3 Message received:
- MessageId = 2,
- SequenceNumber = 19,
- EnqueuedTimeUtc = 2018-10-29T18:58:12.520Z,
- ExpiresAtUtc = 2018-10-29T19:00:12.520Z,
- ContentType = "application/json",
- Content: [ firstName = Marie, name = Curie ]
-
- BasicTopic/subscriptions/Subscription2 Message received:
- MessageId = 1,
- SequenceNumber = 20,
- EnqueuedTimeUtc = 2018-10-29T18:58:12.520Z,
- ExpiresAtUtc = 2018-10-29T19:00:12.520Z,
- ContentType = "application/json",
- Content: [ firstName = Werner, name = Heisenberg ]
-
- BasicTopic/subscriptions/Subscription3 Message received:
- MessageId = 1,
- SequenceNumber = 20,
- EnqueuedTimeUtc = 2018-10-29T18:58:12.520Z,
- ExpiresAtUtc = 2018-10-29T19:00:12.520Z,
- ContentType = "application/json",
- Content: [ firstName = Werner, name = Heisenberg ]
-```
-
-> [!NOTE]
-> You can manage Service Bus resources with [Service Bus Explorer](https://github.com/paolosalvatori/ServiceBusExplorer/). Service Bus Explorer allows users to connect to a Service Bus namespace and easily administer messaging entities. The tool provides advanced features like import/export functionality and the ability to test topics, queues, subscriptions, relay services, notification hubs, and event hubs.
-
-## Next steps
-For more information, see [Service Bus queues, topics, and subscriptions][Service Bus queues, topics, and subscriptions].
-
-[Azure SDK for Java]: /java/api/overview/azure/
-[Azure Toolkit for Eclipse]: /azure/developer/java/toolkit-for-eclipse/installation
-[Service Bus queues, topics, and subscriptions]: service-bus-queues-topics-subscriptions.md
-[SqlFilter]: /dotnet/api/microsoft.azure.servicebus.sqlfilter
-[SqlFilter.SqlExpression]: /dotnet/api/microsoft.azure.servicebus.sqlfilter.sqlexpression
-[BrokeredMessage]: /dotnet/api/microsoft.servicebus.messaging.brokeredmessage
-
-[0]: ./media/service-bus-java-how-to-use-topics-subscriptions-legacy/sb-queues-13.png
-[2]: ./media/service-bus-java-how-to-use-topics-subscriptions-legacy/sb-queues-04.png
-[3]: ./media/service-bus-java-how-to-use-topics-subscriptions-legacy/sb-queues-09.png
service-fabric Service Fabric Patterns Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-patterns-networking.md
The above GitHub templates are designed to work with the default SKU for Azure S
## Next steps [Create a cluster](service-fabric-cluster-creation-via-arm.md)-
-After deployment, you can see two load balancers in the resource group. If you browse the load balancers, you can see the public IP address and management endpoints (ports 19000 and 19080) assigned to the public IP address. You also can see the static internal IP address and application endpoint (port 80) assigned to the internal load balancer. Both load balancers use the same virtual machine scale set back-end pool.
service-fabric Service Fabric Production Readiness Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-production-readiness-checklist.md
Is your application and cluster ready to take production traffic? Running and te
1. Use a D2v2 or higher SKU for the primary node type. It is recommended to pick a SKU with at least 50 GB hard disk capacity. 1. Production clusters must be [secure](service-fabric-cluster-security.md). For an example of setting up a secure cluster, see this [cluster template](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/7-VM-Windows-3-NodeTypes-Secure-NSG). Use common names for certificates and avoid using self-signed certificates. 1. Add [resource constraints on containers and services](service-fabric-resource-governance.md), so that they don't consume more than 75% of node resources.
-1. Understand and set the [durability level](service-fabric-cluster-capacity.md#durability-characteristics-of-the-cluster). Silver or higher durability level is recommended for node types running stateful workloads.
-1. Understand and pick the [reliability level](service-fabric-cluster-capacity.md#reliability-characteristics-of-the-cluster) of the node type. Silver or higher reliability is recommended.
+1. Understand and set the [durability level](service-fabric-cluster-capacity.md#durability-characteristics-of-the-cluster). Silver or higher durability level is recommended for node types running stateful workloads, and required for production.
+1. Understand and pick the [reliability level](service-fabric-cluster-capacity.md#reliability-characteristics-of-the-cluster) of the node type. Silver or higher reliability is recommended, and required for production.
1. Load and scale test your workloads to identify [capacity requirements](service-fabric-cluster-capacity.md) for your cluster. 1. Your services and applications are monitored and application logs are being generated and stored, with alerting. For example, see [Add logging to your Service Fabric application](service-fabric-how-to-diagnostics-log.md) and [Monitor containers with Azure Monitor logs](service-fabric-diagnostics-oms-containers.md). 1. The cluster is monitored with alerting (for example, with [Azure Monitor logs](service-fabric-diagnostics-event-analysis-oms.md)).
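If a node type was originally created at a lower tier, the durability and reliability levels can be raised afterward. A minimal sketch using the Azure CLI (the resource group, cluster, and node type names below are placeholder assumptions):

```azurecli
# Raise the durability level of a node type to Silver.
# <resource-group>, <cluster-name>, and <node-type> are placeholders.
az sf cluster durability update \
    --resource-group <resource-group> \
    --cluster-name <cluster-name> \
    --node-type <node-type> \
    --durability-level Silver

# Raise the reliability level of the cluster to Silver.
az sf cluster reliability update \
    --resource-group <resource-group> \
    --cluster-name <cluster-name> \
    --reliability-level Silver
```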
service-fabric Service Fabric Stateless Node Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-stateless-node-types.md
To set one or more node types as stateless in a cluster resource, set the **isSt
To enable stateless node types, you should configure the underlying virtual machine scale set resource in the following way: * The **singlePlacementGroup** property, which should be set to **false** if you need to scale to more than 100 VMs.
-* The Scale set's **upgradeMode** should be set to **Rolling**.
+* The Scale set's **upgradePolicy** should be set to **Rolling**.
* Rolling Upgrade Mode requires the Application Health Extension or health probes to be configured. For more details on configuring health probes or the Application Health Extension, refer to this [doc](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md#how-does-automatic-os-image-upgrade-work). Configure the health probe with the default configuration for Stateless node types, as suggested below. Once the applications are deployed to the node type, the Health Probe/Health Extension ports can be changed to monitor the actual application health. >[!NOTE]
To enable stateless node types, you should configure the underlying virtual mach
To configure a Stateless node type spanning multiple availability zones, follow the documentation [here](./service-fabric-cross-availability-zones.md#1-preview-enable-multiple-availability-zones-in-single-virtual-machine-scale-set), with the following changes: * Set **singlePlacementGroup** : **false** if multiple placement groups are required.
-* Set **upgradeMode** : **Rolling** and add Application Health Extension/Health Probes as mentioned above.
+* Set **upgradePolicy** : **Rolling** and add Application Health Extension/Health Probes as mentioned above.
* Set **platformFaultDomainCount** : **5** for the virtual machine scale set. For reference, look at the [template](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/15-VM-2-NodeTypes-Windows-Stateless-CrossAZ-Secure) for configuring Stateless node types with multiple Availability Zones.
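The scale set settings above can also be applied with the Azure CLI's generic `--set` syntax. A sketch under stated assumptions (the resource group and scale set names are placeholders, and some of these properties may only be settable when the scale set is created):

```azurecli
# Sketch: set the rolling upgrade policy and disable single placement group
# on an existing virtual machine scale set.
# <resource-group> and <vmss-name> are placeholders.
az vmss update \
    --resource-group <resource-group> \
    --name <vmss-name> \
    --set upgradePolicy.mode=Rolling singlePlacementGroup=false
```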
site-recovery Upgrade Mobility Service Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/upgrade-mobility-service-preview.md
Last updated 09/01/2021
# Upgrade Mobility Service and Appliance components (Preview)
-From this preview, you do not need to maintain source machine's Root/Admin credentials are not required for performing upgrades. The credentials are required only for the initial installation of the agent. Once done, you can remove the credentials.
+Starting with this preview, you no longer need to maintain the source machine's root/admin credentials to perform upgrades. The credentials are required only for the initial installation of the agent on source machines. Once installation is complete, you can remove the credentials, and upgrades will occur automatically.
## Update mobility agent automatically
storage Access Tiers Online Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/access-tiers-online-manage.md
description: Learn how to specify a blob's access tier when you upload it, or ho
Previously updated : 10/25/2021 Last updated : 03/02/2022
$ctx = (Get-AzStorageAccount `
-ResourceGroupName $rgName ` -Name $accountName).Context
-# Copy the source blob to a new destination blob in Hot tier with Standard priority.
+# Copy the source blob to a new destination blob in Hot tier.
Start-AzStorageBlobCopy -SrcContainer $srcContainerName ` -SrcBlob $srcBlobName ` -DestContainer $destContainerName `
storage Access Tiers Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/access-tiers-overview.md
Usage scenarios for the Cool access tier include:
- Older data sets that are not used frequently, but are expected to be available for immediate access. - Large data sets that need to be stored in a cost-effective way while additional data is being gathered for processing.
+To learn how to move a blob to the Hot or Cool tier, see [Set a blob's access tier](access-tiers-online-manage.md).
+ Data in the Cool tier has slightly lower availability, but offers the same high durability, retrieval latency, and throughput characteristics as the Hot tier. For data in the Cool tier, slightly lower availability and higher access costs may be acceptable trade-offs for lower overall storage costs, as compared to the Hot tier. For more information, see [SLA for storage](https://azure.microsoft.com/support/legal/sla/storage/v1_5/). A blob in the Cool tier in a general-purpose v2 account is subject to an early deletion penalty if it is deleted or moved to a different tier before 30 days have elapsed. This charge is prorated. For example, if a blob is moved to the Cool tier and then deleted after 21 days, you'll be charged an early deletion fee equivalent to 9 (30 minus 21) days of storing that blob in the Cool tier.
-The Hot and Cool tiers support all redundancy configurations. For more information about data redundancy options in Azure Storage, see [Azure Storage redundancy](../common/storage-redundancy.md).
+The Hot and Cool tiers support all redundancy configurations. For more information about data redundancy options in Azure Storage, see [Azure Storage redundancy](../common/storage-redundancy.md).
## Archive access tier
The Archive tier is an offline tier for storing data that is rarely accessed. Th
- Original (raw) data that must be preserved, even after it has been processed into final usable form - Compliance and archival data that needs to be stored for a long time and is hardly ever accessed
+To learn how to move a blob to the Archive tier, see [Archive a blob](archive-blob.md).
+ Data must remain in the Archive tier for at least 180 days or be subject to an early deletion charge. For example, if a blob is moved to the Archive tier and then deleted or moved to the Hot tier after 45 days, you'll be charged an early deletion fee equivalent to 135 (180 minus 45) days of storing that blob in the Archive tier. While a blob is in the Archive tier, it can't be read or modified. To read or download a blob in the Archive tier, you must first rehydrate it to an online tier, either Hot or Cool. Data in the Archive tier can take up to 15 hours to rehydrate, depending on the priority you specify for the rehydration operation. For more information about blob rehydration, see [Overview of blob rehydration from the Archive tier](archive-rehydrate-overview.md).
For information about feature support by region, see [Azure products available b
## Next steps -- [How to manage the tier of a blob in an Azure Storage account](manage-access-tier.md)-- [How to manage the default account access tier of an Azure Storage account](../common/manage-account-default-access-tier.md)-- [Optimize costs by automating Azure Blob Storage access tiers](./lifecycle-management-overview.md)
+- [Set a blob's access tier](access-tiers-online-manage.md)
+- [Archive a blob](archive-blob.md)
+- [Optimize costs by automatically managing the data lifecycle](lifecycle-management-overview.md)
storage Archive Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-blob.md
Previously updated : 10/25/2021 Last updated : 03/01/2022
az storage blob upload-batch \
--auth-mode login ```
+### [AzCopy](#tab/azcopy)
+
+To archive a single blob on upload with AzCopy, call the [azcopy copy](../common/storage-ref-azcopy-copy.md) command. Provide a local file as the source and the target blob URI as the destination, and specify the Archive tier as the target tier, as shown in the following example. Remember to replace the placeholder values in brackets with your own values:
+
+```azcopy
+azcopy copy "C:\temp\myTextFile.txt" "https://<storage-account>.blob.core.windows.net/<container>/myTextFile-archived.txt" --blob-type BlockBlob --block-blob-tier Archive
+```
+
+For additional examples, see [Upload files to Azure Blob storage by using AzCopy](../common/storage-use-azcopy-blobs-upload.md).
+ ## Archive an existing blob
az storage blob set-tier \
--auth-mode login ```
+### [AzCopy](#tab/azcopy)
+
+N/A
+ ### Archive an existing blob with a copy operation
N/A
#### [PowerShell](#tab/azure-powershell)
-To copy an blob from an online tier to the Archive tier with PowerShell, call the [Start-AzStorageBlobCopy](/powershell/module/az.storage/start-azstorageblobcopy) command and specify the Archive tier. Remember to replace placeholders in angle brackets with your own values:
+To copy a blob from an online tier to the Archive tier with PowerShell, call the [Start-AzStorageBlobCopy](/powershell/module/az.storage/start-azstorageblobcopy) command and specify the Archive tier. Remember to replace placeholders in angle brackets with your own values:
```azurepowershell # Initialize these variables with your values.
$ctx = (Get-AzStorageAccount `
-ResourceGroupName $rgName ` -Name $accountName).Context
-# Copy the source blob to a new destination blob in Hot tier with Standard priority.
+# Copy the source blob to a new destination blob in Archive tier.
Start-AzStorageBlobCopy -SrcContainer $srcContainerName ` -SrcBlob $srcBlobName ` -DestContainer $destContainerName `
Start-AzStorageBlobCopy -SrcContainer $srcContainerName `
#### [Azure CLI](#tab/azure-cli)
-To copy an blob from an online tier to the Archive tier with Azure CLI, call the [az storage blob copy start](/cli/azure/storage/blob/copy#az_storage_blob_copy_start) command and specify the Archive tier. Remember to replace placeholders in angle brackets with your own values:
+To copy a blob from an online tier to the Archive tier with Azure CLI, call the [az storage blob copy start](/cli/azure/storage/blob/copy#az_storage_blob_copy_start) command and specify the Archive tier. Remember to replace placeholders in angle brackets with your own values:
```azurecli az storage blob copy start \
az storage blob copy start \
--auth-mode login ```
+#### [AzCopy](#tab/azcopy)
+
+To copy a blob from an online tier to the Archive tier with AzCopy, specify the URI for the source blob and the URI for the destination blob. The destination blob should have a different name from the source blob, and should not already exist.
+
+When the copy source is a blob, you must provide an account SAS token on the source blob. If you are using Azure Active Directory (Azure AD) to authorize the copy operation, then the SAS is required only on the source blob, as shown in the following example. If you are using the account access key to authorize the copy operation, then you must provide a SAS token on both the source and destination blobs. For more information, see [Copy blobs between Azure storage accounts by using AzCopy](../common/storage-use-azcopy-blobs-copy.md).
+
+Remember to replace placeholders in angle brackets with your own values:
+
+```azcopy
+azcopy copy "https://<source-account>.blob.core.windows.net/sample-container/blob1.txt?sv=2020-08-04&ss=b&srt=sco&sp=r&se=2022-03-02T05:21:32Z&st=2022-03-01T21:21:32Z&spr=https&sig=<signature>" "https://<dest-account>.blob.core.windows.net/sample-container/blob1-archived.txt" --blob-type BlockBlob --block-blob-tier Archive
+```
+ ## Bulk archive
To archive blobs with a batch operation, use one of the Azure Storage client lib
For an in-depth sample application that shows how to change tiers with a batch operation, see [AzBulkSetBlobTier](/samples/azure/azbulksetblobtier/azbulksetblobtier/).
+## Use lifecycle management policies to archive blobs
+
+You can optimize costs for blob data that is rarely accessed by creating lifecycle management policies that automatically move blobs to the Archive tier when they have not been accessed or modified for a specified period of time. After you configure a lifecycle management policy, Azure Storage runs it once per day. For more information about lifecycle management policies, see [Optimize costs by automatically managing the data lifecycle](lifecycle-management-overview.md).
+
+You can use the Azure portal, PowerShell, Azure CLI, or an Azure Resource Manager template to create a lifecycle management policy. For simplicity, this section shows how to create a lifecycle management policy in the Azure portal only. For additional examples showing how to create lifecycle management policies, see [Configure a lifecycle management policy](lifecycle-management-policy-configure.md).
+
+> [!CAUTION]
+> Before you use a lifecycle management policy to move data to the Archive tier, verify that the data does not need to be deleted or moved to another tier for at least 180 days. Data that is deleted or moved to a different tier before the 180-day period has elapsed is subject to an early deletion fee.
+>
+> Also keep in mind that data in the Archive tier must be rehydrated before it can be read or modified. Rehydrating a blob from the Archive tier can take several hours and has associated costs.
+
+To create a lifecycle management policy to archive blobs in the Azure portal, follow these steps:
+
+1. Navigate to your storage account in the portal.
+1. Under **Data management**, locate the **Lifecycle management** settings.
+1. Select the **Add a rule** button.
+1. On the **Details** tab, specify a name for your rule.
+1. Specify the rule scope: either **Apply rule to all blobs in your storage account**, or **Limit blobs with filters**.
+1. Select the types of blobs for which the rule is to be applied, and specify whether to include blob snapshots or versions.
+
+ :::image type="content" source="media/archive-blob/lifecycle-policy-details-tab-portal.png" alt-text="Screenshot showing how to configure a lifecycle management policy - Details tab.":::
+
+1. Depending on your selections, you can configure rules for base blobs (current versions), previous versions, or blob snapshots. Specify one of two conditions to check for:
+
+ - Objects were last modified some number of days ago.
+ - Objects were last accessed some number of days ago.
+
+ Only one of these conditions can be applied to move a particular type of object to the Archive tier per rule. For example, if you define an action that archives base blobs if they have not been modified for 90 days, then you cannot also define an action that archives base blobs if they have not been accessed for 90 days. Similarly, you can define one action per rule with either of these conditions to archive previous versions, and one to archive snapshots.
+
+1. Next, specify the number of days to elapse after the object is modified or accessed.
+1. Specify that the object is to be moved to the Archive tier after the interval has elapsed.
+
+ :::image type="content" source="media/archive-blob/lifecycle-policy-base-blobs-tab-portal.png" alt-text="Screenshot showing how to configure a lifecycle management policy - Base blob tab.":::
+
+1. If you chose to limit the blobs affected by the rule with filters, you can specify a filter, either with a blob prefix or blob index match.
+1. Select the **Add** button to add the rule to the policy.
+
+After you create the lifecycle management policy, you can view the JSON for the policy on the **Lifecycle management** page by switching from **List view** to **Code view**.
+
+Here's the JSON for the simple lifecycle management policy created in the images shown above:
+
+```json
+{
+ "rules": [
+ {
+ "enabled": true,
+ "name": "sample-archive-rule",
+ "type": "Lifecycle",
+ "definition": {
+ "actions": {
+ "baseBlob": {
+ "tierToArchive": {
+ "daysAfterLastAccessTimeGreaterThan": 90
+ }
+ }
+ },
+ "filters": {
+ "blobTypes": [
+ "blockBlob"
+ ]
+ }
+ }
+ }
+ ]
+}
+```
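A policy like the JSON shown above can also be applied outside the portal. As a sketch with the Azure CLI (assuming the policy document has been saved locally as `policy.json`; the resource group and account names are placeholders):

```azurecli
# Apply the lifecycle management policy defined in policy.json
# to the storage account. <resource-group> and <storage-account> are placeholders.
az storage account management-policy create \
    --resource-group <resource-group> \
    --account-name <storage-account> \
    --policy @policy.json
```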
+ ## See also - [Hot, Cool, and Archive access tiers for blob data](access-tiers-overview.md)
storage Archive Rehydrate Handle Event https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-rehydrate-handle-event.md
Previously updated : 10/25/2021 Last updated : 02/28/2022
For more information about rehydrating blobs from the Archive tier, see [Overvie
## Prerequisites
-This article shows you how to use [Visual Studio 2019](https://visualstudio.microsoft.com/vs/) to develop an Azure Function with .NET. You can install Visual Studio Community for free. Make sure that you [configure Visual Studio for Azure Development with .NET](/dotnet/azure/configure-visual-studio).
+This article shows you how to use [Visual Studio 2019](https://visualstudio.microsoft.com/vs/) or later to develop an Azure Function with .NET. You can install Visual Studio Community for free. Make sure that you [configure Visual Studio for Azure Development with .NET](/dotnet/azure/configure-visual-studio).
To debug the Azure Function locally, you will need to use a tool that can send an HTTP request, such as Postman.
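If you prefer the command line to Postman, curl can send the same test request to the function running locally. Port 7071 is the default used by Azure Functions Core Tools; the function name and payload file below are placeholders for your own:

```shell
# POST a sample Event Grid payload to the locally running function.
# Replace <function-name> with the name of your HTTP-triggered function,
# and sample-event.json with a file containing a test event payload.
curl -X POST "http://localhost:7071/api/<function-name>" \
     -H "Content-Type: application/json" \
     -d @sample-event.json
```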
storage Archive Rehydrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-rehydrate-overview.md
Previously updated : 10/25/2021 Last updated : 03/01/2022
While a blob is in the Archive access tier, it's considered to be offline and ca
- [Copy an archived blob to an online tier](#copy-an-archived-blob-to-an-online-tier): You can rehydrate an archived blob by copying it to a new blob in the Hot or Cool tier with the [Copy Blob](/rest/api/storageservices/copy-blob) operation. Microsoft recommends this option for most scenarios.
-- [Change a blob's access tier to an online tier](#change-a-blobs-access-tier-to-an-online-tier): You can rehydrate an archived blob to the Hot or Cool tier by changing its tier using the [Set Blob Tier](/rest/api/storageservices/set-blob-tier) operation.
+- [Change an archived blob's access tier to an online tier](#change-a-blobs-access-tier-to-an-online-tier): You can rehydrate an archived blob to the Hot or Cool tier by changing its tier using the [Set Blob Tier](/rest/api/storageservices/set-blob-tier) operation.
-Rehydrating a blob from the Archive tier can take several hours to complete. Microsoft recommends rehydrating larger blobs for optimal performance. Rehydrating several small blobs concurrently may require additional time. A maximum of 10 GiB per storage account may be rehydrated per hour.
+Rehydrating a blob from the Archive tier can take several hours to complete. Microsoft recommends archiving larger blobs for optimal performance when rehydrating. Rehydrating a large number of small blobs may require additional time due to the processing overhead on each blob. A maximum of 10 GiB per storage account may be rehydrated per hour with priority retrieval.
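Given that throughput cap, a back-of-the-envelope lower bound on rehydration time for a batch of archived data is straightforward (the 10 GiB/hour figure is the per-account priority-retrieval limit quoted above; actual times also depend on blob count and service load):

```python
import math

GIB_PER_HOUR = 10  # per-account priority rehydration limit noted above

def min_rehydration_hours(total_gib):
    """Lower bound, in whole hours, to rehydrate total_gib of archived data."""
    return math.ceil(total_gib / GIB_PER_HOUR)

print(min_rehydration_hours(25))   # 25 GiB -> at least 3 hours
```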
-You can configure [Azure Event Grid](../../event-grid/overview.md) to raise an event when you rehydrate a blob from the Archive tier to an online tier and to send the event to an event handler. For more information, see [Handle an event on blob rehydration](#handle-an-event-on-blob-rehydration).
-
-For more information about access tiers in Azure Storage, see [Hot, Cool, and Archive access tiers for blob data](access-tiers-overview.md).
+To learn how to rehydrate an archived blob to an online tier, see [Rehydrate an archived blob to an online tier](archive-rehydrate-to-online-tier.md).
## Rehydration priority
During the blob rehydration operation, you can call the [Get Blob Properties](/r
## Handle an event on blob rehydration
-Rehydration of an archived blob may take up to 15 hours, and repeatedly polling **Get Blob Properties** to determine whether rehydration is complete is inefficient. Using [Azure Event Grid](../../event-grid/overview.md) to capture the event that fires when rehydration is complete offers better performance and cost optimization.
+Rehydration of an archived blob may take up to 15 hours, and repeatedly polling **Get Blob Properties** to determine whether rehydration is complete is inefficient. Microsoft recommends that you use [Azure Event Grid](../../event-grid/overview.md) to capture the event that fires when rehydration is complete for better performance and cost optimization.
Azure Event Grid raises one of the following two events on blob rehydration, depending on which operation was used to rehydrate the blob:
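A handler typically inspects the event's `eventType` field to tell the two operations apart. A minimal sketch, assuming the standard Blob storage event types (`Microsoft.Storage.BlobCreated` is raised when a Copy Blob rehydration completes, `Microsoft.Storage.BlobTierChanged` when a Set Blob Tier rehydration completes):

```python
def classify_rehydration_event(event):
    """Map an Event Grid event dict to the rehydration operation that raised it."""
    event_type = event.get("eventType", "")
    if event_type == "Microsoft.Storage.BlobCreated":
        return "copy-blob"        # blob copied from Archive to an online tier
    if event_type == "Microsoft.Storage.BlobTierChanged":
        return "set-blob-tier"    # blob's tier changed in place
    return None                   # not a rehydration-related event

sample = {"eventType": "Microsoft.Storage.BlobTierChanged",
          "subject": "/blobServices/default/containers/mycontainer/blobs/myblob"}
print(classify_rehydration_event(sample))   # set-blob-tier
```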
For more information about pricing for block blobs and data rehydration, see [Az
## See also

-- [Hot, Cool, and Archive access tiers for blob data](access-tiers-overview.md).
+- [Hot, Cool, and Archive access tiers for blob data](access-tiers-overview.md)
+- [Archive a blob](archive-blob.md)
- [Rehydrate an archived blob to an online tier](archive-rehydrate-to-online-tier.md)
- [Run an Azure Function in response to a blob rehydration event](archive-rehydrate-handle-event.md)
- [Reacting to Blob storage events](storage-blob-event-overview.md)
storage Archive Rehydrate To Online Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-rehydrate-to-online-tier.md
Previously updated : 11/01/2021 Last updated : 03/01/2022
storage Network File System Protocol Support How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/network-file-system-protocol-support-how-to.md
Create a directory on your Linux system, and then mount the container in the sto
1. On your Linux system, create a directory:

   ```
- mkdir -p /mnt/test
+ mkdir -p /nfsdata
```
-2. Mount the container by using the following command:
+2. Mount the container by using one of the following methods. In both methods, replace the `<storage-account-name>` placeholder with the name of your storage account, and replace `<container-name>` with the name of your container.
- ```
- mount -o sec=sys,vers=3,nolock,proto=tcp <storage-account-name>.blob.core.windows.net:/<storage-account-name>/<container-name> /mnt/test
- ```
+ - To have the share mounted automatically on reboot:
- - Replace the `<storage-account-name>` placeholder that appears in this command with the name of your storage account.
+ 1. Create an entry in the /etc/fstab file by adding the following line:
+
+ ```
+ <storage-account-name>.blob.core.windows.net:/<storage-account-name>/<container-name> /nfsdata nfs defaults,sec=sys,vers=3,nolock,proto=tcp,nofail 0 0
+ ```
- - Replace the `<container-name>` placeholder with the name of your container.
+ 1. Run the following command to immediately process the /etc/fstab entries and attempt to mount the preceding path:
+
+ ```
+ mount /nfsdata
+ ```
+
+ - For a temporary mount that doesn't persist across reboots, run the following command:
+
+ ```
+ mount -o sec=sys,vers=3,nolock,proto=tcp <storage-account-name>.blob.core.windows.net:/<storage-account-name>/<container-name> /nfsdata
+ ```
## Resolve common errors
Create a directory on your Linux system, and then mount the container in the sto
|||
|`Access denied by server while mounting`|Ensure that your client is running within a supported subnet. See [Supported network locations](network-file-system-protocol-support.md#supported-network-connections).|
|`No such file or directory`| Make sure to type, rather than copy and paste, the mount command and its parameters directly into the terminal. If you copy and paste any part of this command into the terminal from another application, hidden characters in the pasted information might cause this error to appear. This error also might appear if the account isn't enabled for NFS 3.0.|
-|`Permission denied`| The default mode of a newly created NFS 3.0 container is 0750. Non-root users don't have access to the volume. If access from non-root users is required, root users must change the mode to 0755. Sample command: `sudo chmod 0755 /mnt/<newcontainer>`|
+|`Permission denied`| The default mode of a newly created NFS 3.0 container is 0750. Non-root users don't have access to the volume. If access from non-root users is required, root users must change the mode to 0755. Sample command: `sudo chmod 0755 /nfsdata`|
|`EINVAL ("Invalid argument"`) |This error can appear when a client attempts to:<li>Write to a blob that was created from a blob endpoint.<li>Delete a blob that has a snapshot or is in a container that has an active WORM (write once, read many) policy.|
|`EROFS ("Read-only file system"`) |This error can appear when a client attempts to:<li>Write to a blob or delete a blob that has an active lease.<li>Write to a blob or delete a blob in a container that has an active WORM policy. |
|`NFS3ERR_IO/EIO ("Input/output error"`) |This error can appear when a client attempts to read, write, or set attributes on blobs that are stored in the archive access tier. |
|`OperationNotSupportedOnSymLink` error| This error can be returned during a write operation via a Blob Storage or Azure Data Lake Storage Gen2 API. Using these APIs to write or delete symbolic links that are created by using NFS 3.0 is not allowed. Make sure to use the NFS 3.0 endpoint to work with symbolic links. |
-|`mount: /mnt/test: bad option;`| Install the NFS helper program by using `sudo apt install nfs-common`.|
+|`mount: /nfsdata: bad option;`| Install the NFS helper program by using `sudo apt install nfs-common`.|
## See also
storage Soft Delete Blob Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/soft-delete-blob-enable.md
Blob soft delete is disabled by default for a new storage account. You can enabl
To enable blob soft delete for your storage account by using the Azure portal, follow these steps:

1. In the [Azure portal](https://portal.azure.com/), navigate to your storage account.
-1. Locate the **Data Protection** option under **Blob service**.
+1. Locate the **Data Protection** option under **Data management**.
1. In the **Recovery** section, select **Turn on soft delete for blobs**.
1. Specify a retention period between 1 and 365 days. Microsoft recommends a minimum retention period of seven days.
1. Save your changes.
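The same setting can be applied from the Azure CLI instead of the portal (the account and resource group names below are placeholders for your own):

```shell
# Enable blob soft delete with a seven-day retention period.
az storage account blob-service-properties update \
    --account-name <storage-account-name> \
    --resource-group <resource-group-name> \
    --enable-delete-retention true \
    --delete-retention-days 7
```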
storage Storage Blob Index How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-index-how-to.md
This task can be performed by a [Storage Blob Data Owner](../../role-based-acces
The following example shows how to create an append blob with tags set during creation.

```csharp
-static async Task BlobIndexTagsOnCreate()
- {
- BlobServiceClient serviceClient = new BlobServiceClient(ConnectionString);
- BlobContainerClient container = serviceClient.GetBlobContainerClient("mycontainer");
-
- try
- {
- // Create a container
- await container.CreateIfNotExistsAsync();
-
- // Create an append blob
- AppendBlobClient appendBlobWithTags = container.GetAppendBlobClient("myAppendBlob0.logs");
-
- // Blob index tags to upload
- AppendBlobCreateOptions appendOptions = new AppendBlobCreateOptions();
- appendOptions.Tags = new Dictionary<string, string>
- {
- { "Sealed", "false" },
- { "Content", "logs" },
- { "Date", "2020-04-20" }
- };
-
- // Upload data with tags set on creation
- await appendBlobWithTags.CreateAsync(appendOptions);
- }
- finally
- {
- }
- }
+static async Task BlobIndexTagsOnCreateAsync()
+{
+ var serviceClient = new BlobServiceClient(ConnectionString);
+ var container = serviceClient.GetBlobContainerClient("mycontainer");
+
+ // Create a container
+ await container.CreateIfNotExistsAsync();
+
+ // Create an append blob
+ AppendBlobClient appendBlobWithTags = container.GetAppendBlobClient("myAppendBlob0.logs");
+
+ // Blob index tags to upload
+ AppendBlobCreateOptions appendOptions = new AppendBlobCreateOptions();
+ appendOptions.Tags = new Dictionary<string, string>
+ {
+ { "Sealed", "false" },
+ { "Content", "logs" },
+ { "Date", "2020-04-20" }
+ };
+
+ // Upload data with tags set on creation
+ await appendBlobWithTags.CreateAsync(appendOptions);
+}
```
Setting and updating blob index tags can be performed by a [Storage Blob Data Ow
```csharp
static async Task BlobIndexTagsExample()
- {
- BlobServiceClient serviceClient = new BlobServiceClient(ConnectionString);
- BlobContainerClient container = serviceClient.GetBlobContainerClient("mycontainer");
-
- try
- {
- // Create a container
- await container.CreateIfNotExistsAsync();
-
- // Create a new append blob
- AppendBlobClient appendBlob = container.GetAppendBlobClient("myAppendBlob1.logs");
- await appendBlob.CreateAsync();
-
- // Set or update blob index tags on existing blob
- Dictionary<string, string> tags = new Dictionary<string, string>
- {
- { "Project", "Contoso" },
- { "Status", "Unprocessed" },
- { "Sealed", "true" }
- };
- await appendBlob.SetTagsAsync(tags);
-
- // Get blob index tags
- Response<IDictionary<string, string>> tagsResponse = await appendBlob.GetTagsAsync();
- Console.WriteLine(appendBlob.Name);
- foreach (KeyValuePair<string, string> tag in tagsResponse.Value)
- {
- Console.WriteLine($"{tag.Key}={tag.Value}");
- }
-
- // List blobs with all options returned including blob index tags
- await foreach (BlobItem blobItem in container.GetBlobsAsync(BlobTraits.All))
- {
- Console.WriteLine(Environment.NewLine + blobItem.Name);
- foreach (KeyValuePair<string, string> tag in blobItem.Tags)
- {
- Console.WriteLine($"{tag.Key}={tag.Value}");
- }
- }
-
- // Delete existing blob index tags by replacing all tags
- Dictionary<string, string> noTags = new Dictionary<string, string>();
- await appendBlob.SetTagsAsync(noTags);
-
- }
- finally
- {
- }
- }
+{
+ var serviceClient = new BlobServiceClient(ConnectionString);
+ var container = serviceClient.GetBlobContainerClient("mycontainer");
+
+ // Create a container
+ await container.CreateIfNotExistsAsync();
+
+ // Create a new append blob
+ AppendBlobClient appendBlob = container.GetAppendBlobClient("myAppendBlob1.logs");
+ await appendBlob.CreateAsync();
+
+ // Set or update blob index tags on existing blob
+ Dictionary<string, string> tags = new Dictionary<string, string>
+ {
+ { "Project", "Contoso" },
+ { "Status", "Unprocessed" },
+ { "Sealed", "true" }
+ };
+ await appendBlob.SetTagsAsync(tags);
+
+ // Get blob index tags
+ Response<IDictionary<string, string>> tagsResponse = await appendBlob.GetTagsAsync();
+ Console.WriteLine(appendBlob.Name);
+ foreach (KeyValuePair<string, string> tag in tagsResponse.Value)
+ {
+ Console.WriteLine($"{tag.Key} = {tag.Value}");
+ }
+
+ // List blobs with all options returned including blob index tags
+ await foreach (BlobItem blobItem in container.GetBlobsAsync(BlobTraits.All))
+ {
+ Console.WriteLine(Environment.NewLine + blobItem.Name);
+ foreach (KeyValuePair<string, string> tag in blobItem.Tags)
+ {
+ Console.WriteLine($"{tag.Key} = {tag.Value}");
+ }
+ }
+
+ // Delete existing blob index tags by replacing all tags
+ var noTags = new Dictionary<string, string>();
+ await appendBlob.SetTagsAsync(noTags);
+}
```
Within the Azure portal, the blob index tags filter automatically applies the `@
```csharp
static async Task FindBlobsByTagsExample()
- {
- BlobServiceClient serviceClient = new BlobServiceClient(ConnectionString);
- BlobContainerClient container1 = serviceClient.GetBlobContainerClient("mycontainer");
- BlobContainerClient container2 = serviceClient.GetBlobContainerClient("mycontainer2");
-
- // Blob index queries and selection
- String singleEqualityQuery = @"""Archive"" = 'false'";
- String andQuery = @"""Archive"" = 'false' AND ""Priority"" = '01'";
- String rangeQuery = @"""Date"" >= '2020-04-20' AND ""Date"" <= '2020-04-30'";
- String containerScopedQuery = @"@container = 'mycontainer' AND ""Archive"" = 'false'";
-
- String queryToUse = containerScopedQuery;
-
- try
- {
- // Create a container
- await container1.CreateIfNotExistsAsync();
- await container2.CreateIfNotExistsAsync();
-
- // Create append blobs
- AppendBlobClient appendBlobWithTags0 = container1.GetAppendBlobClient("myAppendBlob00.logs");
- AppendBlobClient appendBlobWithTags1 = container1.GetAppendBlobClient("myAppendBlob01.logs");
- AppendBlobClient appendBlobWithTags2 = container1.GetAppendBlobClient("myAppendBlob02.logs");
- AppendBlobClient appendBlobWithTags3 = container2.GetAppendBlobClient("myAppendBlob03.logs");
- AppendBlobClient appendBlobWithTags4 = container2.GetAppendBlobClient("myAppendBlob04.logs");
- AppendBlobClient appendBlobWithTags5 = container2.GetAppendBlobClient("myAppendBlob05.logs");
-
- // Blob index tags to upload
- CreateAppendBlobOptions appendOptions = new CreateAppendBlobOptions();
- appendOptions.Tags = new Dictionary<string, string>
- {
- { "Archive", "false" },
- { "Priority", "01" },
- { "Date", "2020-04-20" }
- };
-
- CreateAppendBlobOptions appendOptions2 = new CreateAppendBlobOptions();
- appendOptions2.Tags = new Dictionary<string, string>
- {
- { "Archive", "true" },
- { "Priority", "02" },
- { "Date", "2020-04-24" }
- };
-
- // Upload data with tags set on creation
- await appendBlobWithTags0.CreateAsync(appendOptions);
- await appendBlobWithTags1.CreateAsync(appendOptions);
- await appendBlobWithTags2.CreateAsync(appendOptions2);
- await appendBlobWithTags3.CreateAsync(appendOptions);
- await appendBlobWithTags4.CreateAsync(appendOptions2);
- await appendBlobWithTags5.CreateAsync(appendOptions2);
-
- // Find Blobs given a tags query
- Console.WriteLine("Find Blob by Tags query: " + queryToUse + Environment.NewLine);
-
- List<TaggedBlobItem> blobs = new List<TaggedBlobItem>();
- await foreach (TaggedBlobItem taggedBlobItem in serviceClient.FindBlobsByTagsAsync(queryToUse))
- {
- blobs.Add(taggedBlobItem);
- }
-
- foreach (var filteredBlob in blobs)
- {
- Console.WriteLine($"BlobIndex result: ContainerName= {filteredBlob.ContainerName}, " +
- $"BlobName= {filteredBlob.Name}");
- }
-
- }
- finally
- {
- }
- }
+{
+ var serviceClient = new BlobServiceClient(ConnectionString);
+ var container1 = serviceClient.GetBlobContainerClient("mycontainer");
+ var container2 = serviceClient.GetBlobContainerClient("mycontainer2");
+
+ // Blob index queries and selection
+ var singleEqualityQuery = @"""Archive"" = 'false'";
+ var andQuery = @"""Archive"" = 'false' AND ""Priority"" = '01'";
+ var rangeQuery = @"""Date"" >= '2020-04-20' AND ""Date"" <= '2020-04-30'";
+ var containerScopedQuery = @"@container = 'mycontainer' AND ""Archive"" = 'false'";
+
+ var queryToUse = containerScopedQuery;
+
+ // Create a container
+ await container1.CreateIfNotExistsAsync();
+ await container2.CreateIfNotExistsAsync();
+
+ // Create append blobs
+ var appendBlobWithTags0 = container1.GetAppendBlobClient("myAppendBlob00.logs");
+ var appendBlobWithTags1 = container1.GetAppendBlobClient("myAppendBlob01.logs");
+ var appendBlobWithTags2 = container1.GetAppendBlobClient("myAppendBlob02.logs");
+ var appendBlobWithTags3 = container2.GetAppendBlobClient("myAppendBlob03.logs");
+ var appendBlobWithTags4 = container2.GetAppendBlobClient("myAppendBlob04.logs");
+ var appendBlobWithTags5 = container2.GetAppendBlobClient("myAppendBlob05.logs");
+
+ // Blob index tags to upload
+ AppendBlobCreateOptions appendOptions = new AppendBlobCreateOptions();
+ appendOptions.Tags = new Dictionary<string, string>
+ {
+ { "Archive", "false" },
+ { "Priority", "01" },
+ { "Date", "2020-04-20" }
+ };
+
+ AppendBlobCreateOptions appendOptions2 = new AppendBlobCreateOptions();
+ appendOptions2.Tags = new Dictionary<string, string>
+ {
+ { "Archive", "true" },
+ { "Priority", "02" },
+ { "Date", "2020-04-24" }
+ };
+
+ // Upload data with tags set on creation
+ await appendBlobWithTags0.CreateAsync(appendOptions);
+ await appendBlobWithTags1.CreateAsync(appendOptions);
+ await appendBlobWithTags2.CreateAsync(appendOptions2);
+ await appendBlobWithTags3.CreateAsync(appendOptions);
+ await appendBlobWithTags4.CreateAsync(appendOptions2);
+ await appendBlobWithTags5.CreateAsync(appendOptions2);
+
+ // Find Blobs given a tags query
+ Console.WriteLine($"Find Blob by Tags query: {queryToUse}");
+
+ var blobs = new List<TaggedBlobItem>();
+ await foreach (TaggedBlobItem taggedBlobItem in serviceClient.FindBlobsByTagsAsync(queryToUse))
+ {
+ blobs.Add(taggedBlobItem);