Updates from: 03/20/2022 02:11:27
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Troubleshoot With Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/troubleshoot-with-application-insights.md
After you save the settings the Application insights logs appear on the **Azure
## Configure Application Insights in Production
-To improve your production environment performance and better user experience, it's important to configure your policy to ignore messages that are unimportant. Use the following configuration in production environments.
+To improve the performance of your production environment and provide a better user experience, configure your policy to ignore unimportant messages. Use the following configuration in production environments; no logs will be sent to your Application Insights.
1. Set the `DeploymentMode` attribute of the [TrustFrameworkPolicy](trustframeworkpolicy.md) to `Production`.
To improve your production environment performance and better user experience, i
- Learn how to [troubleshoot Azure AD B2C custom policies](troubleshoot.md)
active-directory How To Mfa Additional Context https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-additional-context.md
Title: Use additional context in multifactor authentication (MFA) notifications (Preview) - Azure Active Directory
+ Title: Use additional context in Microsoft Authenticator notifications (Preview) - Azure Active Directory
description: Learn how to use additional context in MFA notifications
Previously updated : 02/11/2022 Last updated : 03/18/2022
# Customer intent: As an identity administrator, I want to encourage users to use the Microsoft Authenticator app in Azure AD to improve and secure user sign-in events.
-# How to use additional context in multifactor authentication (MFA) notifications (Preview) - Authentication Methods Policy
+# How to use additional context in Microsoft Authenticator notifications (Preview) - Authentication Methods Policy
This topic covers how to improve the security of user sign-in by adding the application and its location (based on IP address) to Microsoft Authenticator push notifications.
The additional context can be combined with [number matching](how-to-mfa-number-
### Policy schema changes
+>[!NOTE]
+>In Graph Explorer, ensure you've consented to the **Policy.Read.All** and **Policy.ReadWrite.AuthenticationMethod** permissions.
+ Identify a single target group for the schema configuration. Then use the following API endpoint to change the displayAppInformationRequiredState property to **enabled**: https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator
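As a rough illustration, the body for that request could be built as follows. This is a hypothetical sketch only: the group ID is a placeholder, the includeTarget field names are an assumption based on the preview schema these articles describe, and authentication to Graph is not shown; verify against the current beta reference before use.

```python
import json

# Placeholder target group ID (assumption -- replace with your group's object ID).
GROUP_ID = "00000000-0000-0000-0000-000000000000"

# Sketch of the PATCH body for:
# https://graph.microsoft.com/beta/authenticationMethodsPolicy/
#   authenticationMethodConfigurations/MicrosoftAuthenticator
patch_body = {
    "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
    "includeTargets": [
        {
            "targetType": "group",
            "id": GROUP_ID,
            # "any" also permits passwordless sign-in; use "push" to exclude it.
            "authenticationMode": "any",
            "displayAppInformationRequiredState": "enabled",
        }
    ],
}

print(json.dumps(patch_body, indent=2))
```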
active-directory How To Mfa Number Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-number-match.md
description: Learn how to use number matching in MFA notifications
Previously updated : 02/28/2022 Last updated : 03/18/2022
Number matching is available for the following scenarios. When enabled, all scen
- [NPS extension](howto-mfa-nps-extension.md)

>[!NOTE]
->For passwordless users, enabling number matching has no impact because it's already part of the passwordless experience.
+>For passwordless users, enabling or disabling number matching has no impact because it's already part of the passwordless experience.
### Multifactor authentication
You will need to change the **numberMatchingRequiredState** from **default** to
Note that the value of Authentication Mode can be either **any** or **push**, depending on whether or not you also want to enable passwordless phone sign-in. In these examples, we will use **any**, but if you do not want to allow passwordless, use **push**.

>[!NOTE]
->For passwordless users, enabling number matching has no impact because it's already part of the passwordless experience.
+>For passwordless users, enabling or disabling number matching has no impact because it's already part of the passwordless experience.
You might need to patch the entire includeTarget to prevent overwriting any previous configuration. In that case, do a GET first, update only the relevant fields, and then PATCH. The following example only shows the update to the **numberMatchingRequiredState**.
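The GET-then-PATCH pattern described above can be sketched as follows. The fetched configuration is simulated here with a placeholder group ID; in practice it comes from a GET against the MicrosoftAuthenticator authenticationMethodConfigurations endpoint, and the field names are an assumption based on the preview schema.

```python
import copy

# Simulated GET response (in practice, fetched from the Graph beta endpoint).
fetched = {
    "includeTargets": [
        {
            "targetType": "group",
            "id": "11111111-1111-1111-1111-111111111111",  # placeholder group ID
            "authenticationMode": "any",
            "numberMatchingRequiredState": "default",
        }
    ]
}

# Update only numberMatchingRequiredState, preserving every other field on each
# includeTarget so the subsequent PATCH doesn't overwrite the configuration.
updated = copy.deepcopy(fetched)
for target in updated["includeTargets"]:
    target["numberMatchingRequiredState"] = "enabled"

# `updated` is then sent back as the PATCH body (not shown).
```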
active-directory Troubleshoot Publisher Verification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/troubleshoot-publisher-verification.md
One of these error messages is displayed: "A verified publisher can't be adde
First, verify you've met the [publisher verification requirements](publisher-verification-overview.md#requirements).
+> [!NOTE]
+> If you've met the publisher verification requirements and are still having issues, try using an existing or newly created user with similar permissions.
+ When a request to add a verified publisher is made, many signals are used to make a security risk assessment. If the request is determined to be risky, an error will be returned. For security reasons, Microsoft doesn't disclose the specific criteria used to determine whether a request is risky. If you received this error and believe the "risky" assessment is incorrect, try waiting and resubmitting the verification request. Some customers have reported success after multiple attempts.

## Next steps
active-directory Hybrid Azuread Join Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/hybrid-azuread-join-plan.md
The following table provides details on support for these on-premises AD UPNs in
| -- | -- | -- | -- |
| Routable | Federated | From 1703 release | Generally available |
| Non-routable | Federated | From 1803 release | Generally available |
-| Routable | Managed | From 1803 release | Generally available, Azure AD SSPR on Windows lock screen isn't supported. The on-premises UPN must be synced to the `onPremisesUserPrincipalName` attribute in Azure AD |
+| Routable | Managed | From 1803 release | Generally available, Azure AD SSPR on Windows lock screen isn't supported in environments where the on-premises UPN is different from the Azure AD UPN. The on-premises UPN must be synced to the `onPremisesUserPrincipalName` attribute in Azure AD |
| Non-routable | Managed | Not supported | |

## Next steps
active-directory Authentication Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/authentication-conditional-access.md
Previously updated : 02/07/2022 Last updated : 03/21/2022
# Authentication and Conditional Access for External Identities
-When an external user accesses resources in your organization, the authentication flow is determined by the user's identity provider (an external Azure AD tenant, social identity provider, etc.), Conditional Access policies, and the [cross-tenant access settings](cross-tenant-access-overview.md) configured both in the user's home tenant and the tenant hosting resources.
+When an external user accesses resources in your organization, the authentication flow is determined by the collaboration method (B2B collaboration or B2B direct connect), user's identity provider (an external Azure AD tenant, social identity provider, etc.), Conditional Access policies, and the [cross-tenant access settings](cross-tenant-access-overview.md) configured both in the user's home tenant and the tenant hosting resources.
This article describes the authentication flow for external users who are accessing resources in your organization. Organizations can enforce multiple Conditional Access policies for their external users, which can be enforced at the tenant, app, or individual user level in the same way that they're enabled for full-time employees and members of the organization.

## Authentication flow for external Azure AD users
-The following diagram illustrates the authentication flow when an Azure AD organization shares resources with users from other Azure AD organizations. This diagram shows how cross-tenant access settings work with Conditional Access policies, such as multi-factor authentication (MFA), to determine if the user can access resources.
+The following diagram illustrates the authentication flow when an Azure AD organization shares resources with users from other Azure AD organizations. This diagram shows how cross-tenant access settings work with Conditional Access policies, such as multi-factor authentication (MFA), to determine if the user can access resources. This flow applies to both B2B collaboration and B2B direct connect, except as noted in step 6.
![Diagram illustrating the cross-tenant authentication process](media/authentication-conditional-access/cross-tenant-auth.png)
The following diagram illustrates the authentication flow when an Azure AD organ
|**3** | Azure AD checks Contoso's inbound trust settings to see if Contoso trusts MFA and device claims (device compliance, hybrid Azure AD joined status) from Fabrikam. If not, skip to step 6. |
|**4** | If Contoso trusts MFA and device claims from Fabrikam, Azure AD checks the user's credentials for an indication the user has completed MFA. If Contoso trusts device information from Fabrikam, Azure AD uses the device ID to look up the device object in Fabrikam to determine its state (compliant or hybrid Azure AD joined). |
|**5** | If MFA is required but not completed or if a device ID isn't provided, Azure AD issues MFA and device challenges in the user's home tenant as needed. When MFA and device requirements are satisfied in Fabrikam, the user is allowed access to the resource in Contoso. If the checks can't be satisfied, access is blocked. |
-|**6** | When no trust settings are configured and MFA is required, B2B collaboration users are prompted for MFA, which they need to satisfy in the resource tenant. If device compliance is required, access is blocked. |
+|**6** | When no trust settings are configured and MFA is required, B2B collaboration users are prompted for MFA, which they need to satisfy in the resource tenant. Access is blocked for B2B direct connect users. If device compliance is required but can't be evaluated, access is blocked for both B2B collaboration and B2B direct connect users. |
For more information, see the [Conditional Access for external users](#conditional-access-for-external-users) section.
The following diagram illustrates the flow when email one-time passcode authenti
## Conditional Access for external users
-Organizations can enforce Conditional Access policies for external B2B collaboration users in the same way that they're enabled for full-time employees and members of the organization. This section describes important considerations for applying Conditional Access to users outside of your organization.
+Organizations can enforce Conditional Access policies for external B2B collaboration and B2B direct connect users in the same way that they're enabled for full-time employees and members of the organization. This section describes important considerations for applying Conditional Access to users outside of your organization.
### Azure AD cross-tenant trust settings for MFA and device claims
active-directory B2b Direct Connect Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-direct-connect-overview.md
+
+ Title: B2B direct connect overview - Azure AD
+description: Azure Active Directory B2B direct connect lets users from other Azure AD tenants seamlessly sign in to your shared resources via Teams shared channels. There's no need for a guest user object in your Azure AD directory.
+ Last updated : 03/21/2022
+# B2B direct connect overview
+
+Azure Active Directory (Azure AD) B2B direct connect is a feature of External Identities that lets you set up a mutual trust relationship with another Azure AD organization for seamless collaboration. With B2B direct connect, users from both organizations can work together using their home credentials and B2B direct connect-enabled apps, without having to be added to each other's organizations as guests. Use B2B direct connect to share resources with external Azure AD organizations. Or use it to share resources across multiple Azure AD tenants within your own organization.
+
+![Diagram illustrating B2B direct connect](media/b2b-direct-connect-overview/b2b-direct-connect-overview.png)
+
+B2B direct connect requires a mutual trust relationship between two Azure AD organizations to allow access to each other's resources. Both the resource organization and the external organization need to mutually enable B2B direct connect in their cross-tenant access settings. When the trust is established, the B2B direct connect user has single sign-on access to resources outside their organization using credentials from their home Azure AD organization.
+
+Currently, B2B direct connect capabilities work with Teams Connect shared channels. This means that users in one organization can create a shared channel in Teams and invite an external B2B direct connect user to it. Then from within Teams, the B2B direct connect user can seamlessly access the shared channel in their home tenant Teams instance, without having to manually sign in to the organization hosting the shared channel.
+
+For licensing and pricing information related to B2B direct connect users, refer to [Azure Active Directory pricing](https://azure.microsoft.com/pricing/details/active-directory/).
+
+## Managing cross-tenant access for B2B direct connect
+
+Azure AD organizations can manage their trust relationships with other Azure AD organizations by defining inbound and outbound [cross-tenant access settings](cross-tenant-access-settings-b2b-collaboration.md). Cross-tenant access settings give you granular control over how other organizations collaborate with you (inbound access) and how your users collaborate with other organizations (outbound access).
+
+- **Inbound access settings** control whether users from external organizations can access resources in your organization. You can apply these settings to everyone, or you can specify individual users, groups, and applications.
+
+- **Outbound access settings** control whether your users can access resources in an external organization. You can apply these settings to everyone, or you can specify individual users, groups, and applications.
+
+- **Tenant restrictions** determine how your users can access an external organization when they're using your devices and network, but they're signed in using an account that was issued to them by the external organization.
+
+- **Trust settings** determine whether your Conditional Access policies will trust the multi-factor authentication (MFA), compliant device, and hybrid Azure AD joined device claims from an external organization when their users access your resources.
+
+> [!IMPORTANT]
+> B2B direct connect is possible only when both organizations allow access to and from the other organization. For example, Contoso can allow inbound B2B direct connect from Fabrikam, but sharing isn't possible until Fabrikam also enables outbound B2B direct connect with Contoso. Therefore, you'll need to coordinate with the external organization's admin to make sure their cross-tenant access settings allow sharing with you. This mutual agreement is important because B2B direct connect enables limited sharing of data for the users you enable for B2B direct connect.
+
+### Default settings
+
+The default cross-tenant access settings apply to all external Azure AD organizations, except organizations for which you've configured individual settings. Initially, Azure AD blocks all inbound and outbound B2B direct connect capabilities by default for all external Azure AD tenants. You can change these default settings, but typically you'll leave them as-is and enable B2B direct connect access with individual organizations.
+
+### Organization-specific settings
+
+You can configure organization-specific settings by adding the organization and modifying the cross-tenant access settings. These settings will then take precedence over the default settings for this organization.
+
+### Example 1: Allow B2B direct connect with Fabrikam and block all others
+
+In this example, Contoso wants to block B2B direct connect with all external organizations by default, but allow B2B direct connect for all users, groups, and apps in Fabrikam.
+
+![Example of blocking B2B direct connect by default but allowing an org](media/b2b-direct-connect-overview/b2b-direct-connect-example.png)
+
+Contoso sets the following **Default settings** for cross-tenant access:
+
+- Block inbound access to B2B direct connect for all external users and groups.
+- Block outbound access to B2B direct connect for all Contoso users and groups.
+
+Then Contoso adds the Fabrikam organization and configures the following **Organizational settings** for Fabrikam:
+
+- Allow inbound access to B2B direct connect for all Fabrikam users and groups.
+- Allow inbound access to all internal Contoso applications by Fabrikam B2B direct connect users.
+- Allow all Contoso users and groups to have outbound access to Fabrikam using B2B direct connect.
+- Allow Contoso B2B direct connect users to have outbound access to all Fabrikam applications.
+
+For this scenario to work, Fabrikam also needs to allow B2B direct connect with Contoso by configuring these same cross-tenant access settings for Contoso and for their own users and applications. Contoso users who manage Teams shared channels in their organization will be able to add Fabrikam users by searching for their full Fabrikam email addresses.
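The Azure portal is the documented way to configure these settings. Purely as an illustration, Contoso's Fabrikam-specific settings from this example could be expressed as a Microsoft Graph cross-tenant access partner configuration roughly like the sketch below. The tenant ID is a placeholder and the field names are an assumption based on the beta `crossTenantAccessPolicy` schema; verify against the current Graph reference before relying on them.

```python
import json

FABRIKAM_TENANT_ID = "22222222-2222-2222-2222-222222222222"  # placeholder

# Partner-specific settings: allow inbound and outbound B2B direct connect for
# all users, groups, and applications with Fabrikam (Example 1 above).
partner_config = {
    "tenantId": FABRIKAM_TENANT_ID,
    "b2bDirectConnectInbound": {
        "usersAndGroups": {
            "accessType": "allowed",
            "targets": [{"target": "AllUsers", "targetType": "user"}],
        },
        "applications": {
            "accessType": "allowed",
            "targets": [{"target": "AllApplications", "targetType": "application"}],
        },
    },
    "b2bDirectConnectOutbound": {
        "usersAndGroups": {
            "accessType": "allowed",
            "targets": [{"target": "AllUsers", "targetType": "user"}],
        },
        "applications": {
            "accessType": "allowed",
            "targets": [{"target": "AllApplications", "targetType": "application"}],
        },
    },
}

# This body would be POSTed to the partners collection under
# /policies/crossTenantAccessPolicy (authentication not shown).
print(json.dumps(partner_config, indent=2))
```

Fabrikam would need a mirror-image configuration for Contoso before sharing works.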
+
+### Example 2: Enable B2B direct connect with Fabrikam's Marketing group only
+
+Starting from the example above, Contoso could also choose to allow only the Fabrikam Marketing group to collaborate with Contoso's users through B2B direct connect. In this case, Contoso will need to obtain the Marketing group's object ID from Fabrikam. Then, instead of allowing inbound access to all Fabrikam's users, they'll configure their Fabrikam-specific access settings as follows:
+
+- Allow inbound access to B2B direct connect for Fabrikam's Marketing group only. Contoso specifies Fabrikam's Marketing group object ID in the allowed users and groups list.
+- Allow inbound access to all internal Contoso applications by Fabrikam B2B direct connect users.
+- Allow all Contoso users and groups to have outbound access to Fabrikam using B2B direct connect.
+- Allow Contoso B2B direct connect users to have outbound access to all Fabrikam applications.
+
+Fabrikam will also need to configure their outbound cross-tenant access settings so that their Marketing group is allowed to collaborate with Contoso through B2B direct connect. Contoso users who manage Teams shared channels in their organization will be able to add only Fabrikam Marketing group users by searching for their full Fabrikam email addresses.
+
+## Authentication
+
+In a B2B direct connect scenario, authentication involves a user from an Azure AD organization (the user's home tenant) attempting to sign in to a file or app in another Azure AD organization (the resource tenant). The user signs in with Azure AD credentials from their home tenant. The sign-in attempt is evaluated against cross-tenant access settings in both the user's home tenant and the resource tenant. If all access requirements are met, a token is issued to the user that allows the user to access the resource. This token is valid for 1 hour.
+
+For details about how authentication works in a cross-tenant scenario with Conditional Access policies, see [Authentication and Conditional Access in cross-tenant scenarios](authentication-conditional-access.md).
++
+## Multi-factor authentication (MFA)
+
+If you want to allow B2B direct connect with an external organization and your Conditional Access policies require MFA, you ***must*** configure your inbound [trust settings](cross-tenant-access-settings-b2b-direct-connect.md#to-change-inbound-trust-settings-for-mfa-and-device-state) so that your Conditional Access policies will accept MFA claims from the external organization. This configuration ensures that B2B direct connect users from the external organization are compliant with your Conditional Access policies, and it provides a more seamless user experience.
+
+For example, say Contoso (the resource tenant) trusts MFA claims from Fabrikam. Contoso has a Conditional Access policy requiring MFA. This policy is scoped to all guests, external users, and SharePoint Online. As a prerequisite for B2B direct connect, Contoso must configure trust settings in their cross-tenant access settings to accept MFA claims from Fabrikam. When a Fabrikam user accesses a B2B direct connect-enabled app (for example, a Teams Connect shared channel), the user is subject to the MFA requirement enforced by Contoso:
+
+- If the Fabrikam user has already performed MFA in their home tenant, they'll be able to access the resource within the shared channel.
+- If the Fabrikam user hasn't completed MFA, they'll be blocked from accessing the resource.
+
+For information about Conditional Access and Teams, see [Overview of security and compliance](/microsoftteams/security-compliance-overview) in the Microsoft Teams documentation.
+
+## B2B direct connect user experience
+
+Currently, B2B direct connect enables the Teams Connect shared channels feature. B2B direct connect users can access an external organization's Teams shared channel without having to switch tenants or sign in with a different account. The B2B direct connect user's access is determined by the shared channel's policies.
+
+In the resource organization, the Teams shared channel owner can search within Teams for users from an external organization and add them to the shared channel. After they're added, the B2B direct connect users can access the shared channel from within their home instance of Teams, where they collaborate using features such as chat, calls, file-sharing, and app-sharing. For details, see [Overview of teams and channels in Microsoft Teams](/microsoftteams/teams-channels-overview). For details about the resources, files, and applications that are available to the B2B direct connect user via the Teams shared channel, refer to [Chat, teams, channels, & apps in Microsoft Teams](/microsoftteams/deploy-chat-teams-channels-microsoft-teams-landing-page).
+
+## B2B direct connect vs. B2B collaboration
+
+B2B collaboration and B2B direct connect are two different approaches to sharing with users outside of your organization. You'll find a [feature-to-feature comparison](external-identities-overview.md#comparing-external-identities-feature-sets) in the External Identities overview. Here, we'll discuss some key differences in how users are managed and how they access resources.
+
+### User access and management
+
+B2B direct connect users collaborate via a mutual connection between two organizations, whereas B2B collaboration users are invited to an organization and managed via a user object.
+
+- B2B direct connect offers a way to collaborate with users from another Azure AD organization through a mutual, two-way connection configured by admins from both organizations. Users have single sign-on access to B2B direct connect-enabled Microsoft applications. Currently, B2B direct connect supports Teams Connect shared channels.
+
+- B2B collaboration lets you invite external partners to access your Microsoft, SaaS, or custom-developed apps. B2B collaboration is especially useful when the external partner doesn't use Azure AD or it's not practical or possible to set up B2B direct connect. B2B collaboration allows external users to sign in using their preferred identity, including their Azure AD account, consumer Microsoft account, or a social identity you enable such as Google. With B2B collaboration, you can let external users sign in to your Microsoft applications, SaaS apps, custom-developed apps, and so on.
+
+### Using Teams with B2B direct connect vs. B2B collaboration
+
+Within the context of Teams, there are differences in how resources can be shared depending on whether you're collaborating with someone using B2B direct connect or B2B collaboration.
+
+- With B2B direct connect, you add the external user to a shared channel within a team. This user can access the resources within the shared channel, but they don't have access to the entire team or any other resources outside the shared channel. For example, they don't have access to the Azure AD admin portal. They do, however, have access to the My Apps portal. B2B direct connect users don't have a presence in your Azure AD organization, so these users are managed in the Teams client by the shared channel owner. For details, see [Assign team owners and members in Microsoft Teams](/microsoftteams/assign-roles-permissions).
+
+- With B2B collaboration, you can invite the guest user to a team. The B2B collaboration guest user signs in to the resource tenant using the email address that was used to invite them. Their access is determined by the permissions assigned to guest users in the resource tenant. Guest users can't see or participate in any shared channels in the team.
+
+For more information about differences between B2B collaboration and B2B direct connect in Teams, see [Guest access in Microsoft Teams](/microsoftteams/guest-access).
+
+## Monitoring and auditing
+
+Reporting for monitoring and auditing B2B direct connect activity is available in both the Azure portal and the Microsoft Teams admin center.
+
+### Azure AD monitoring and audit logs
+
+Azure AD includes information about cross-tenant access and B2B direct connect in the organization's Audit logs and Sign-in logs. These logs can be viewed in the Azure portal under **Monitoring**.
+
+- **Azure AD audit logs**: Azure AD Audit logs show when inbound and outbound policies are created, updated, or deleted.
+
+ ![Screenshot showing an audit log](media/b2b-direct-connect-overview/audit-log.png)
+
+- **Azure AD sign-in logs**: Azure AD sign-in logs are available in both the home organization and the resource organization. Once B2B direct connect is enabled, sign-in logs will begin including user object IDs for B2B direct connect users from other tenants. The information reported in each organization varies, for example:
+
+ - In both organizations, B2B direct connect sign-ins are labeled with a cross-tenant access type of B2B direct connect. A sign-in event is recorded when a B2B direct connect user first accesses a resource organization, and again when a refresh token is issued for the user. Users can access their own sign-in logs. Admins can view sign-ins for their entire organization to see how B2B direct connect users are accessing resources in their tenant.
+
+ - In the home organization, the logs include client application information.
+
+ - In the resource organization, the logs include conditionalAccessPolicies in the Conditional Access tab.
+
+ [ ![Screenshot showing a sign-in log](media/b2b-direct-connect-overview/sign-in-logs.png) ](media/b2b-direct-connect-overview/sign-in-logs.png#lightbox)
+
+- **Azure AD access reviews**: With Azure Active Directory (Azure AD) access reviews, a tenant admin can ensure that external guest users don't have access to your apps and resources longer than is necessary by configuring a one-time or recurring access review of the external users. [Learn more about access reviews](../governance/access-reviews-overview.md).
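As an illustrative sketch, B2B direct connect sign-ins could be pulled programmatically from the beta sign-in logs with a filtered query like the one below. This assumes the beta signIn resource exposes a `crossTenantAccessType` property with a `b2bDirectConnect` value; confirm the property name and value in the current Graph reference, and note that authentication is not shown.

```python
from urllib.parse import quote

# OData filter for sign-ins labeled with the B2B direct connect access type
# (property name and value are assumptions -- verify in the Graph beta docs).
filter_expr = "crossTenantAccessType eq 'b2bDirectConnect'"

url = (
    "https://graph.microsoft.com/beta/auditLogs/signIns"
    "?$filter=" + quote(filter_expr)
)

print(url)  # issue this GET with an authenticated Graph client
```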
+
+### Microsoft Teams monitoring and audit logs
+
+The Microsoft Teams admin center displays reporting for shared channels, including external B2B direct connect members for each team.
+
+- **Teams audit logs**: Teams supports the following auditing events in the tenant that hosts the shared channel: shared channel lifecycle (create/delete channel) and in-tenant/cross-tenant member lifecycle (add/remove/promote/demote member). These audit logs are available in the resource tenant so that admins can determine who has access to the Teams shared channel. There are no audit logs in the external user's home tenant related to their activity in an external shared channel.
+
+- **Teams access reviews**: Access reviews of Groups that are Teams can now detect B2B direct connect users who are using Teams shared channels. When creating an access review, you can scope the review to all internal users, guest users, and external B2B direct connect users who have been added directly to a shared channel. The reviewer is then presented with users who have direct access to the shared channel.
+
+- **Current limitations**: An access review can detect internal users and external B2B direct connect users, but not other teams, that have been added to a shared channel. To view and remove teams that have been added to a shared channel, the shared channel owner can manage membership from within Teams.
+
+For more information about Microsoft Teams audit logs, see the [Microsoft Teams auditing documentation](/microsoftteams/audit-log-events).
+
+## Privacy and data handling
+
+B2B direct connect lets your users and groups access apps and resources that are hosted by an external organization. To establish a connection, an admin from the external organization must also enable B2B direct connect.
+
+By enabling B2B direct connect with an external organization, you allow the external organizations that you've enabled outbound settings with to access limited contact data about your users. Microsoft shares this data with those organizations to help them send a request to connect with your users. Data collected by external organizations, including limited contact data, is subject to the privacy policies and practices of those organizations.
+
+### Outbound access
+
+When B2B direct connect is enabled with an external organization, users in the external organization will be able to search for your users by full email address. Matching search results will return limited data about your users, including first name and last name. Your users will need to consent to the external organization's privacy policies before more of their data is shared. We recommend you review the privacy information that will be provided by the organization and presented to your users.
+
+### Inbound access
+
+We strongly recommend you add both your global privacy contact and your organization's privacy statement so your internal employees and external guests can review your policies. Follow the steps to [add your organization's privacy info](../fundamentals/active-directory-properties-area.md).
+
+### Restricting access to users and groups
+
+You might want to consider using cross-tenant access settings to restrict B2B direct connect to specific users and groups within your organization and the external organization.
+
+## Next steps
+
+- [Configure cross-tenant access settings](cross-tenant-access-settings-b2b-collaboration.md)
+- See the Microsoft Teams documentation for details about [data loss prevention](/microsoft-365/compliance), [retention policies](/microsoftteams/retention-policies), and [eDiscovery](/microsoftteams/ediscovery-investigation).
active-directory B2b Fundamentals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-fundamentals.md
Previously updated : 02/14/2022 Last updated : 03/21/2022
This article contains recommendations and best practices for business-to-busines
| Recommendation | Comments |
| | |
| Consult Azure AD guidance for securing your collaboration with external partners | Learn how to take a holistic governance approach to your organization's collaboration with external partners by following the recommendations in [Securing external collaboration in Azure Active Directory and Microsoft 365](../fundamentals/secure-external-access-resources.md). |
-| Carefully plan your cross-tenant access and external collaboration settings | Azure AD gives you a flexible set of controls for managing collaboration with external users and organizations. You can allow or block all collaboration, or configure collaboration only for specific organizations, users, and apps. Before configuring settings for cross-tenant access and external collaboration, take a careful inventory of the organizations you work and partner with. Then determine if you want to enable [B2B collaboration](what-is-b2b.md) with other Azure AD tenants, and how you want to manage [B2B collaboration invitations](external-collaboration-settings-configure.md). |
+| Carefully plan your cross-tenant access and external collaboration settings | Azure AD gives you a flexible set of controls for managing collaboration with external users and organizations. You can allow or block all collaboration, or configure collaboration only for specific organizations, users, and apps. Before configuring settings for cross-tenant access and external collaboration, take a careful inventory of the organizations you work and partner with. Then determine if you want to enable [B2B direct connect](b2b-direct-connect-overview.md) or [B2B collaboration](what-is-b2b.md) with other Azure AD tenants, and how you want to manage [B2B collaboration invitations](external-collaboration-settings-configure.md). |
| For an optimal sign-in experience, federate with identity providers | Whenever possible, federate directly with identity providers to allow invited users to sign in to your shared apps and resources without having to create Microsoft Accounts (MSAs) or Azure AD accounts. You can use the [Google federation feature](google-federation.md) to allow B2B guest users to sign in with their Google accounts. Or, you can use the [SAML/WS-Fed identity provider (preview) feature](direct-federation.md) to set up federation with any organization whose identity provider (IdP) supports the SAML 2.0 or WS-Fed protocol. |
| Use the Email one-time passcode feature for B2B guests who can't authenticate by other means | The [Email one-time passcode](one-time-passcode.md) feature authenticates B2B guest users when they can't be authenticated through other means like Azure AD, a Microsoft account (MSA), or Google federation. When the guest user redeems an invitation or accesses a shared resource, they can request a temporary code, which is sent to their email address. Then they enter this code to continue signing in. |
| Add company branding to your sign-in page | You can customize your sign-in page so it's more intuitive for your B2B guest users. See how to [add company branding to sign in and Access Panel pages](../fundamentals/customize-branding.md). |
active-directory Cross Tenant Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-overview.md
Previously updated : 02/23/2022 Last updated : 03/21/2022
> [!NOTE]
> Cross-tenant access settings are preview features of Azure Active Directory. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-Azure AD organizations can use External Identities cross-tenant access settings to manage how they collaborate with other Azure AD organizations through B2B collaboration. [Cross-tenant access settings](cross-tenant-access-settings-b2b-collaboration.md) give you granular control over how external Azure AD organizations collaborate with you (inbound access) and how your users collaborate with external Azure AD organizations (outbound access). These settings also let you trust multi-factor authentication (MFA) and device claims ([compliant claims and hybrid Azure AD joined claims](../conditional-access/howto-conditional-access-policy-compliant-device.md)) from other Azure AD organizations.
+Azure AD organizations can use External Identities cross-tenant access settings to manage how they collaborate with other Azure AD organizations through [B2B collaboration](cross-tenant-access-settings-b2b-collaboration.md) and [B2B direct connect](cross-tenant-access-settings-b2b-direct-connect.md). Cross-tenant access settings give you granular control over how external Azure AD organizations collaborate with you (inbound access) and how your users collaborate with external Azure AD organizations (outbound access). These settings also let you trust multi-factor authentication (MFA) and device claims ([compliant claims and hybrid Azure AD joined claims](../conditional-access/howto-conditional-access-policy-compliant-device.md)) from other Azure AD organizations.
-This article describes cross-tenant access settings that are used to manage B2B collaboration with external Azure AD organizations. For B2B collaboration with non-Azure AD identities (for example, social identities or non-IT managed external accounts), use external collaboration settings. External collaboration settings include options for restricting guest user access, specifying who can invite guests, and allowing or blocking domains.
+This article describes cross-tenant access settings, which are used to manage B2B collaboration and B2B direct connect with external Azure AD organizations. Additional settings are available for B2B collaboration with non-Azure AD identities (for example, social identities or non-IT managed external accounts). These [external collaboration settings](external-collaboration-settings-configure.md) include options for restricting guest user access, specifying who can invite guests, and allowing or blocking domains.
![Overview diagram of cross-tenant access settings](media/cross-tenant-access-overview/cross-tenant-access-settings-overview.png)

## Manage external access with inbound and outbound settings
-B2B collaboration is enabled by default, but comprehensive admin settings let you control your B2B collaboration with external partners and organizations
+By default, B2B collaboration with other Azure AD organizations is enabled, and B2B direct connect is blocked. But the following comprehensive admin settings let you manage both of these features.
-- **Outbound access settings** control whether your users can access resources in an external organization. You can apply these settings to everyone, or you can specify individual users, groups, and applications.
+- **Outbound access settings** control whether your users can access resources in an external organization. You can apply these settings to everyone, or specify individual users, groups, and applications.
-- **Inbound access settings** control whether users from external Azure AD organizations can access resources in your organization. You can apply these settings to everyone, or you can specify individual users, groups, and applications.
+- **Inbound access settings** control whether users from external Azure AD organizations can access resources in your organization. You can apply these settings to everyone, or specify individual users, groups, and applications.
- **Trust settings** (inbound) determine whether your Conditional Access policies will trust the multi-factor authentication (MFA), compliant device, and [hybrid Azure AD joined device](../devices/concept-azure-ad-join-hybrid.md) claims from an external organization if their users have already satisfied these requirements in their home tenants. For example, when you configure your trust settings to trust MFA, your MFA policies are still applied to external users, but users who have already completed MFA in their home tenants won't have to complete MFA again in your tenant.

## Default settings
-The default cross-tenant access settings apply to all Azure AD organizations external to your tenant, except those for which you've configured organizational settings. You can change your default settings, but the initial default settings for B2B collaboration are as follows:
+The default cross-tenant access settings apply to all Azure AD organizations external to your tenant, except those for which you've configured organizational settings. You can change your default settings, but the initial default settings for B2B collaboration and B2B direct connect are as follows:
-- All your internal users are enabled for B2B collaboration by default. This means your users can invite external guests to access your resources and they can be invited to external organizations as guests. MFA and device claims from other Azure AD organizations aren't trusted.
+- **B2B collaboration**: All your internal users are enabled for B2B collaboration by default. This means your users can invite external guests to access your resources and they can be invited to external organizations as guests. MFA and device claims from other Azure AD organizations aren't trusted.
-- No organizations are added to your Organizational settings by default. This means all external Azure AD organizations are enabled for B2B collaboration with your organization.
+- **B2B direct connect**: No B2B direct connect trust relationships are established by default. Azure AD blocks all inbound and outbound B2B direct connect capabilities for all external Azure AD tenants.
+
+- **Organizational settings**: No organizations are added to your Organizational settings by default. This means all external Azure AD organizations are enabled for B2B collaboration with your organization.
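To make the precedence between the defaults and organizational settings concrete, here's a minimal sketch. This is a hypothetical model, not an Azure AD API; the setting names are illustrative only:

```python
# Hypothetical model of how cross-tenant access settings resolve (not an Azure API).
# Organization-specific settings take precedence over the tenant-wide defaults.

DEFAULTS = {
    "b2b_collaboration_inbound": "allow",
    "b2b_collaboration_outbound": "allow",
    "b2b_direct_connect_inbound": "block",
    "b2b_direct_connect_outbound": "block",
}

def effective_setting(setting, org_settings=None):
    """Return the partner-specific value if one was configured, else the default."""
    if org_settings and setting in org_settings:
        return org_settings[setting]
    return DEFAULTS[setting]

# An organization with no entry under Organizational settings inherits every default:
assert effective_setting("b2b_direct_connect_inbound") == "block"

# A partner for which inbound B2B direct connect was explicitly enabled:
partner = {"b2b_direct_connect_inbound": "allow"}
assert effective_setting("b2b_direct_connect_inbound", partner) == "allow"
assert effective_setting("b2b_collaboration_inbound", partner) == "allow"  # inherited
```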
## Organizational settings

You can configure organization-specific settings by adding an organization and modifying the inbound and outbound settings for that organization. Organizational settings take precedence over default settings.

-- For B2B collaboration with other Azure AD organizations, you can use cross-tenant access settings to manage inbound and outbound B2B collaboration and scope access to specific users, groups, and applications. You can set a default configuration that applies to all external organizations, and then create individual, organization-specific settings as needed. Using cross-tenant access settings, you can also trust multi-factor (MFA) and device claims (compliant claims and hybrid Azure AD joined claims) from other Azure AD organizations.
+- For B2B collaboration with other Azure AD organizations, use cross-tenant access settings to manage inbound and outbound B2B collaboration and scope access to specific users, groups, and applications. You can set a default configuration that applies to all external organizations, and then create individual, organization-specific settings as needed. Using cross-tenant access settings, you can also trust multi-factor (MFA) and device claims (compliant claims and hybrid Azure AD joined claims) from other Azure AD organizations.
+
+- For B2B direct connect, use organizational settings to set up a mutual trust relationship with another Azure AD organization. Both your organization and the external organization need to mutually enable B2B direct connect by configuring inbound and outbound cross-tenant access settings.
- You can use external collaboration settings to limit who can invite external users, allow or block B2B specific domains, and set restrictions on guest user access to your directory.
You can configure organization-specific settings by adding an organization and modifying the inbound and outbound settings for that organization.
> [!IMPORTANT]
> Changing the default inbound or outbound settings to block access could block existing business-critical access to apps in your organization or partner organizations. Be sure to use the tools described in this article and consult with your business stakeholders to identify the required access.

-- Cross-tenant access settings are used to manage B2B collaboration with other Azure AD organizations. For non-Azure AD identities (for example, social identities or non-IT managed external accounts), use [external collaboration settings](external-collaboration-settings-configure.md). External collaboration settings include options for restricting guest user access, specifying who can invite guests, and allowing or blocking domains.
+- To configure cross-tenant access settings in the Azure portal, you'll need an account with a Global administrator or Security administrator role.
+
+- To configure trust settings or apply access settings to specific users, groups, or applications, you'll need an Azure AD Premium P1 license.
+
+- Cross-tenant access settings are used to manage B2B collaboration and B2B direct connect with other Azure AD organizations. For B2B collaboration with non-Azure AD identities (for example, social identities or non-IT managed external accounts), use [external collaboration settings](external-collaboration-settings-configure.md). External collaboration settings include B2B collaboration options for restricting guest user access, specifying who can invite guests, and allowing or blocking domains.
- If you want to apply access settings to specific users, groups, or applications in an external organization, you'll need to contact the organization for information before configuring your settings. Obtain their user object IDs, group object IDs, or application IDs (*client app IDs* or *resource app IDs*) so you can target your settings correctly.
You can configure organization-specific settings by adding an organization and modifying the inbound and outbound settings for that organization.
- The access settings you configure for users and groups must match the access settings for applications. Conflicting settings aren't allowed, and you'll see warning messages if you try to configure them.
- - **Example 1**: If you block inbound B2B collaboration for all external users and groups, access to all your applications must also be blocked.
+ - **Example 1**: If you block inbound access for all external users and groups, access to all your applications must also be blocked.
- - **Example 2**: If you allow outbound B2B collaboration for all your users (or specific users or groups), youΓÇÖll be prevented from blocking all access to external applications; access to at least one application must be allowed.
+ - **Example 2**: If you allow outbound access for all your users (or specific users or groups), you'll be prevented from blocking all access to external applications; access to at least one application must be allowed.
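The two examples describe a consistency rule that can be expressed as a small check. This is an illustrative sketch only; Azure AD enforces this validation itself:

```python
# Illustrative check of the rule that user/group settings and application
# settings must not conflict (Azure AD performs this validation for you).

def settings_are_consistent(users_access, apps_access):
    # Example 1: if all users and groups are blocked, all applications must be blocked.
    if users_access == "block" and apps_access != "block":
        return False
    # Example 2: if any users are allowed, blocking all applications is not
    # permitted; access to at least one application must be allowed.
    if users_access == "allow" and apps_access == "block":
        return False
    return True

assert settings_are_consistent("block", "block")
assert not settings_are_consistent("block", "allow")   # Example 1 conflict
assert not settings_are_consistent("allow", "block")   # Example 2 conflict
assert settings_are_consistent("allow", "allow")
```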
-- If you block access to all apps by default, users will be unable to read emails encrypted with Microsoft Rights Management Service (also known as Office 365 Message Encryption or OME). To avoid this issue, we recommend configuring your outbound settings to allow your users to access this app ID: 00000012-0000-0000-c000-000000000000. If this is the only application you allow, access to all other apps will be blocked by default.

-- To configure cross-tenant access settings in the Azure portal, you'll need an account with a Global administrator or Security administrator role.
+- If you want to allow B2B direct connect with an external organization and your Conditional Access policies require MFA, you must configure your trust settings so that your Conditional Access policies will accept MFA claims from the external organization.
-- To configure trust settings or apply access settings to specific users, groups, or applications, you'll need an Azure AD Premium P1 license.
+- If you block access to all apps by default, users will be unable to read emails encrypted with Microsoft Rights Management Service (also known as Office 365 Message Encryption or OME). To avoid this issue, we recommend configuring your outbound settings to allow your users to access this app ID: 00000012-0000-0000-c000-000000000000. If this is the only application you allow, access to all other apps will be blocked by default.
## Identify inbound and outbound sign-ins
Several tools are available to help you identify the access your users and partners need before you set inbound and outbound access settings. To ensure you don't remove access that your users and partners need, you can examine current sign-in behavior. Taking this preliminary step will help prevent loss of desired access for your end users and partner users. However, in some cases these logs are only retained for 30 days, so we strongly recommend you speak with your business stakeholders to ensure required access isn't lost.
+Several tools are available to help you identify the access your users and partners need before you set inbound and outbound access settings. To ensure you don't remove access that your users and partners need, you should examine current sign-in behavior. Taking this preliminary step will help prevent loss of desired access for your end users and partner users. However, in some cases these logs are only retained for 30 days, so we strongly recommend you speak with your business stakeholders to ensure required access isn't lost.
### Cross-tenant sign-in activity PowerShell script
-To review user sign-in activity associated with external tenants, you can use the [cross-tenant user sign-in activity](https://aka.ms/cross-tenant-signins-ps) PowerShell script. For example, to view all available sign-in events for inbound activity (external users accessing resources in the local tenant) and outbound activity (local users accessing resources in an external tenant), run the following command:
+To review user sign-in activity associated with external tenants, use the [cross-tenant user sign-in activity](https://aka.ms/cross-tenant-signins-ps) PowerShell script. For example, to view all available sign-in events for inbound activity (external users accessing resources in the local tenant) and outbound activity (local users accessing resources in an external tenant), run the following command:
```powershell
Get-MSIDCrossTenantAccessActivity -SummaryStats -ResolveTenantId
```
The output is a summary of all available sign-in events for inbound and outbound activity.
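Conceptually, the summary groups sign-in events by direction and external tenant. The following is a toy model of that aggregation, not the cmdlet's implementation:

```python
# Toy model of the summary statistics: count sign-in events per direction and
# external tenant (illustrative data; the real events come from Azure AD sign-in logs).
from collections import Counter

events = [
    {"direction": "inbound",  "external_tenant": "contoso.com"},
    {"direction": "inbound",  "external_tenant": "contoso.com"},
    {"direction": "outbound", "external_tenant": "fabrikam.com"},
]

summary = Counter((e["direction"], e["external_tenant"]) for e in events)
assert summary[("inbound", "contoso.com")] == 2
assert summary[("outbound", "fabrikam.com")] == 1
```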
### Sign-in logs PowerShell script
-To determine your users' access to external Azure AD organizations, you can use the [Get-MgAuditLogSignIn](/powershell/module/microsoft.graph.reports/get-mgauditlogsignin) cmdlet in the Microsoft Graph PowerShell SDK to view data from your sign-in logs for the last 30 days. For example, run the following command:
+To determine your users' access to external Azure AD organizations, use the [Get-MgAuditLogSignIn](/powershell/module/microsoft.graph.reports/get-mgauditlogsignin) cmdlet in the Microsoft Graph PowerShell SDK to view data from your sign-in logs for the last 30 days. For example, run the following command:
```powershell
# Initial connection (the AuditLog.Read.All scope is required to read sign-in logs)
Connect-MgGraph -Scopes 'AuditLog.Read.All'
```
The output is a list of outbound sign-ins initiated by your users to apps in external tenants.
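The underlying idea is to keep only sign-ins whose resource tenant differs from your own. A minimal sketch of that filter follows, with placeholder tenant IDs rather than real log data:

```python
# Minimal sketch: outbound cross-tenant sign-ins are events where the resource
# tenant is not the home tenant (placeholder IDs; real data comes from the logs).
HOME_TENANT = "11111111-1111-1111-1111-111111111111"  # placeholder tenant ID

signins = [
    {"user": "alice", "resource_tenant": HOME_TENANT},
    {"user": "bob",   "resource_tenant": "22222222-2222-2222-2222-222222222222"},
]

outbound = [s for s in signins if s["resource_tenant"] != HOME_TENANT]
assert [s["user"] for s in outbound] == ["bob"]
```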
### Azure Monitor
-If your organization subscribes to the Azure Monitor service, you can use the [Cross-tenant access activity workbook](../reports-monitoring/workbook-cross-tenant-access-activity.md) (available in the Monitoring workbooks gallery in the Azure portal) to visually explore inbound and outbound sign-ins for longer time periods.
+If your organization subscribes to the Azure Monitor service, use the [Cross-tenant access activity workbook](../reports-monitoring/workbook-cross-tenant-access-activity.md) (available in the Monitoring workbooks gallery in the Azure portal) to visually explore inbound and outbound sign-ins for longer time periods.
### Security Information and Event Management (SIEM) Systems
-If your organization exports sign-in logs to a Security Information and Event Management (SIEM) system, you can retrieve required information from your SIEM system.
+If your organization exports sign-in logs to a Security Information and Event Management (SIEM) system, you can retrieve the required information from your SIEM system.
## Identify changes to cross-tenant access settings
The Azure AD audit logs capture all activity around cross-tenant access setting changes.
## Next steps

[Configure cross-tenant access settings for B2B collaboration](cross-tenant-access-settings-b2b-collaboration.md)
+[Configure cross-tenant access settings for B2B direct connect](cross-tenant-access-settings-b2b-direct-connect.md)
+
active-directory Cross Tenant Access Settings B2b Collaboration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-settings-b2b-collaboration.md
Previously updated : 01/31/2022 Last updated : 03/21/2022
With inbound settings, you select which external users and groups will be able to access the internal applications you choose.
1. Under **Access status**, select one of the following:
- - **Allow access**: Allows the users and groups specified under **Target** to be invited for B2B collaboration.
- - **Block access**: Blocks the users and groups specified under **Target** from being invited to B2B collaboration.
+ - **Allow access**: Allows the users and groups specified under **Applies to** to be invited for B2B collaboration.
+ - **Block access**: Blocks the users and groups specified under **Applies to** from being invited to B2B collaboration.
![Screenshot showing selecting the user access status for B2B collaboration](media/cross-tenant-access-settings-b2b-collaboration/generic-inbound-external-users-groups-access.png)
-1. Under **Target**, select one of the following:
+1. Under **Applies to**, select one of the following:
   - **All external users and groups**: Applies the action you chose under **Access status** to all users and groups from external Azure AD organizations.
   - **Select external users and groups** (requires an Azure AD premium subscription): Lets you apply the action you chose under **Access status** to specific users and groups within the external organization.
With inbound settings, you select which external users and groups will be able to access the internal applications you choose.
1. Under **Access status**, select one of the following:
- - **Allow access**: Allows the applications specified under **Target** to be accessed by B2B collaboration users.
- - **Block access**: Blocks the applications specified under **Target** from being accessed by B2B collaboration users.
+ - **Allow access**: Allows the applications specified under **Applies to** to be accessed by B2B collaboration users.
+ - **Block access**: Blocks the applications specified under **Applies to** from being accessed by B2B collaboration users.
![Screenshot showing applications access status](media/cross-tenant-access-settings-b2b-collaboration/generic-inbound-applications-access.png)
-1. Under **Target**, select one of the following:
+1. Under **Applies to**, select one of the following:
   - **All applications**: Applies the action you chose under **Access status** to all of your applications.
   - **Select applications** (requires an Azure AD premium subscription): Lets you apply the action you chose under **Access status** to specific applications in your organization.
With outbound settings, you select which of your users and groups will be able to access the external applications you choose.
1. Under **Access status**, select one of the following:
- - **Allow access**: Allows your users and groups specified under **Target** to be invited to external organizations for B2B collaboration.
- - **Block access**: Blocks your users and groups specified under **Target** from being invited to B2B collaboration. If you block access for all users and groups, this will also block all external applications from being accessed via B2B collaboration.
+ - **Allow access**: Allows your users and groups specified under **Applies to** to be invited to external organizations for B2B collaboration.
+ - **Block access**: Blocks your users and groups specified under **Applies to** from being invited to B2B collaboration. If you block access for all users and groups, this will also block all external applications from being accessed via B2B collaboration.
![Screenshot showing users and groups access status for b2b collaboration](media/cross-tenant-access-settings-b2b-collaboration/generic-outbound-external-users-groups-access.png)
-1. Under **Target**, select one of the following:
+1. Under **Applies to**, select one of the following:
   - **All \<your organization\> users**: Applies the action you chose under **Access status** to all your users and groups.
   - **Select \<your organization\> users and groups** (requires an Azure AD premium subscription): Lets you apply the action you chose under **Access status** to specific users and groups.
With outbound settings, you select which of your users and groups will be able to access the external applications you choose.
1. Under **Access status**, select one of the following:
- - **Allow access**: Allows the external applications specified under **Target** to be accessed by your users via B2B collaboration.
- - **Block access**: Blocks the external applications specified under **Target** from being accessed by your users via B2B collaboration.
+ - **Allow access**: Allows the external applications specified under **Applies to** to be accessed by your users via B2B collaboration.
+ - **Block access**: Blocks the external applications specified under **Applies to** from being accessed by your users via B2B collaboration.
![Screenshot showing applications access status for b2b collaboration](media/cross-tenant-access-settings-b2b-collaboration/generic-outbound-applications-access.png)
-1. Under **Target**, select one of the following:
+1. Under **Applies to**, select one of the following:
   - **All external applications**: Applies the action you chose under **Access status** to all external applications.
   - **Select external applications**: Applies the action you chose under **Access status** to specific external applications.
With outbound settings, you select which of your users and groups will be able to access the external applications you choose.
## Next steps
-See [Configure external collaboration settings](external-collaboration-settings-configure.md) for B2B collaboration with non-Azure AD identities, social identities, and non-IT managed external accounts.
+- See [Configure external collaboration settings](external-collaboration-settings-configure.md) for B2B collaboration with non-Azure AD identities, social identities, and non-IT managed external accounts.
+- [Configure cross-tenant access settings for B2B direct connect](cross-tenant-access-settings-b2b-direct-connect.md)
active-directory Cross Tenant Access Settings B2b Direct Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-settings-b2b-direct-connect.md
+
+ Title: Configure B2B direct connect cross-tenant access - Azure AD
+description: Use cross-tenant access settings to manage how you collaborate with other Azure AD organizations. Learn how to configure outbound access to external organizations and inbound access from external Azure AD for B2B direct connect.
+ Last updated : 03/21/2022
+# Configure cross-tenant access settings for B2B direct connect
+
+> [!NOTE]
+> Cross-tenant access settings are preview features of Azure Active Directory. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Use cross-tenant access settings to manage how you collaborate with other Azure AD organizations through [B2B direct connect](b2b-direct-connect-overview.md). These settings let you determine the level of outbound access your users have to external organizations. They also let you control the level of inbound access that users in external Azure AD organizations will have to your internal resources.
+
+- **Default settings**: The default cross-tenant access settings apply to all external Azure AD organizations, except organizations for which you've configured individual settings. You can change these default settings. For B2B direct connect, you'll typically leave the default settings as-is and enable B2B direct connect access with organization-specific settings. Initially, your default values are as follows:
+
+ - **B2B direct connect initial default settings** - By default, outbound B2B direct connect is blocked for your entire tenant, and inbound B2B direct connect is blocked for all external Azure AD organizations.
+ - **Organizational settings** - No organizations are added by default.
+
+- **Organization-specific settings**: You can configure organization-specific settings by adding an organization and modifying the inbound and outbound settings for that organization. Organizational settings take precedence over default settings.
+
+Learn more about using cross-tenant access settings to [manage B2B direct connect](b2b-direct-connect-overview.md#managing-cross-tenant-access-for-b2b-direct-connect).
+
+## Before you begin
+
+- Review the [Important considerations](cross-tenant-access-overview.md#important-considerations) section in the [cross-tenant access overview](cross-tenant-access-overview.md) before configuring your cross-tenant access settings.
+- Decide on the default level of access you want to apply to all external Azure AD organizations.
+- Identify any Azure AD organizations that will need customized settings.
+- Contact organizations with which you want to set up B2B direct connect. Because B2B direct connect is established through mutual trust, both you and the other organization need to enable B2B direct connect with each other in your cross-tenant access settings.
+- Obtain any required information from external organizations. If you want to apply access settings to specific users, groups, or applications within an external organization, you'll need to obtain these IDs from the organization before you can configure access settings.
+- To configure cross-tenant access settings in the Azure portal, you'll need an account with a Global administrator or Security administrator role. Teams administrators can read cross-tenant access settings, but they can't update these settings.
+
+## Configure default settings
+
+ Default cross-tenant access settings apply to all external tenants for which you haven't created organization-specific customized settings. If you want to modify the Azure AD-provided default settings, follow these steps.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator or Security administrator account. Then open the **Azure Active Directory** service.
+1. Select **External Identities**, and then select **Cross-tenant access settings (Preview)**.
+1. Select the **Default settings** tab and review the summary page.
+
+ ![Screenshot showing the Cross-tenant access settings Default settings tab](media/cross-tenant-access-settings-b2b-direct-connect/cross-tenant-defaults.png)
+
+1. To change the settings, select the **Edit inbound defaults** link or the **Edit outbound defaults** link.
+
+ ![Screenshot showing edit buttons for Default settings](media/cross-tenant-access-settings-b2b-direct-connect/cross-tenant-defaults-edit.png)
++
+1. Modify the default settings by following the detailed steps in these sections:
+
+ - [Modify inbound access settings](#modify-inbound-access-settings)
+ - [Modify outbound access settings](#modify-outbound-access-settings)
+
+## Add an organization
+
+Follow these steps to configure customized settings for specific organizations.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator or Security administrator account. Then open the **Azure Active Directory** service.
2. Select **External Identities**, and then select **Cross-tenant access settings (Preview)**.
+3. Select **Organizational settings**.
+4. Select **Add organization**.
+5. On the **Add organization** pane, type the full domain name (or tenant ID) for the organization.
+
+ ![Screenshot showing adding an organization](media/cross-tenant-access-settings-b2b-direct-connect/cross-tenant-add-organization.png)
+
+1. Select the organization in the search results, and then select **Add**.
+2. The organization appears in the **Organizational settings** list. At this point, all access settings for this organization are inherited from your default settings. To change the settings for this organization, select the **Inherited from default** link under the **Inbound access** or **Outbound access** column.
+
+ ![Screenshot showing an organization added with default settings](media/cross-tenant-access-settings-b2b-direct-connect/org-specific-settings-inherited.png)
+
+1. Modify the organization's settings by following the detailed steps in these sections:
+
+ - [Modify inbound access settings](#modify-inbound-access-settings)
+ - [Modify outbound access settings](#modify-outbound-access-settings)
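If you prefer to script this step, the same partner configuration can be created through the Microsoft Graph beta cross-tenant access API referenced later in this article. A minimal Python sketch of the request body for `POST /beta/policies/crossTenantAccessPolicy/partners` (the endpoint and `tenantId` property name are assumptions taken from the beta Graph reference; verify them there before use):

```python
import re

# GUID pattern for an Azure AD tenant ID. (The portal also accepts a
# domain name; this sketch only covers the tenant-ID form.)
GUID_RE = re.compile(
    r"^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-"
    r"[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$"
)

def build_partner_config(tenant_id):
    """Request body for adding an organization (partner configuration).

    Only tenantId is required; until you customize them, all access
    settings for the new organization inherit from your defaults.
    """
    if not GUID_RE.match(tenant_id):
        raise ValueError(f"not a tenant ID (GUID): {tenant_id!r}")
    return {"tenantId": tenant_id}
```

As in the portal flow, the newly added organization starts out inheriting your default settings until you customize them.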
+
+## Modify inbound access settings
+
+With inbound settings, you select which external users and groups will be able to access the internal applications you choose. Whether you're configuring default settings or organization-specific settings, the steps for changing inbound cross-tenant access settings are the same. As described in this section, you'll navigate to either the **Default** tab or an organization on the **Organizational settings** tab, and then make your changes.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator or Security administrator account. Then open the **Azure Active Directory** service.
+
+1. Select **External Identities**, and then select **Cross-tenant access settings (Preview)**.
+
+1. Navigate to the settings you want to modify:
+ - To modify default inbound settings, select the **Default settings** tab, and then under **Inbound access settings**, select **Edit inbound defaults**.
+ - To modify settings for a specific organization, select the **Organizational settings** tab, find the organization in the list (or [add one](#add-an-organization)), and then select the link in the **Inbound access** column.
+
+1. Follow the detailed steps for the settings you want to change:
+
+ - [To change inbound B2B direct connect settings](#to-change-inbound-b2b-direct-connect-settings)
+ - [To change inbound trust settings for MFA and device state](#to-change-inbound-trust-settings-for-mfa-and-device-state)
+
+### To change inbound B2B direct connect settings
+
+1. Select the **B2B direct connect** tab.
+
+1. *If you're configuring settings for an organization,* select one of these options:
+
+ - **Default settings**: The organization will use the settings configured on the **Default** settings tab. If customized settings were already configured for this organization, you'll need to select **Yes** to confirm that you want all settings to be replaced by the default settings. Then select **Save**, and skip the rest of the steps in this procedure.
+
+ - **Customize settings**: You can customize the settings for this organization, which will be enforced for this organization instead of the default settings. Continue with the rest of the steps in this procedure.
+
+1. Select **External users and groups**.
+
+1. Under **Access status**, select one of these options:
+
+ - **Allow access**: Allows the users and groups specified under **Applies to** to access B2B direct connect.
+ - **Block access**: Blocks the users and groups specified under **Applies to** from accessing B2B direct connect. Blocking access for all external users and groups also blocks all your internal applications from being shared via B2B direct connect.
+
+ ![Screenshot showing inbound access status for b2b direct connect users](media/cross-tenant-access-settings-b2b-direct-connect/generic-inbound-external-users-groups-access.png)
+
+1. Under **Applies to**, select one of the following:
+
+ - **All external users and groups**: Applies the action you chose under **Access status** to all users and groups from external Azure AD organizations.
+ - **Select external users and groups** (requires an Azure AD premium subscription): Lets you apply the action you chose under **Access status** to specific users and groups within the external organization.
+
+ ![Screenshot showing selecting the target users for b2b direct connect](media/cross-tenant-access-settings-b2b-direct-connect/generic-inbound-external-users-groups-target.png)
+
+1. If you chose **Select external users and groups**, do the following for each user or group you want to add:
+
+ - Select **Add external users and groups**.
+ - In the **Add other users and groups** pane, type the user object ID or the group object ID in the search box.
+ - In the menu next to the search box, choose either **user** or **group**.
+ - Select **Add**.
+
+ ![Screenshot showing adding external users for inbound b2b direct connect](media/cross-tenant-access-settings-b2b-direct-connect/b2b-direct-connect-inbound-external-users-groups-add.png)
+
+1. When you're done adding users and groups, select **Submit**.
+
+1. Select the **Applications** tab.
+
+1. Under **Access status**, select one of the following:
+
+ - **Allow access**: Allows the applications specified under **Applies to** to be accessed by B2B direct connect users.
+ - **Block access**: Blocks the applications specified under **Applies to** from being accessed by B2B direct connect users.
+
+ ![Screenshot showing inbound applications access status for b2b direct connect](media/cross-tenant-access-settings-b2b-direct-connect/generic-inbound-applications-access.png)
+
+1. Under **Applies to**, select one of the following:
+
+ - **All applications**: Applies the action you chose under **Access status** to all of your applications.
+ - **Select applications** (requires an Azure AD premium subscription): Lets you apply the action you chose under **Access status** to specific applications in your organization.
+
+ ![Screenshot showing application targets for inbound access](media/cross-tenant-access-settings-b2b-direct-connect/generic-inbound-applications-target.png)
+
+1. If you chose **Select applications**, do the following for each application you want to add:
+
+ - Select **Add Microsoft applications**.
+ - In the applications pane, type the application name in the search box and select the application in the search results.
+ - When you're done selecting applications, choose **Select**.
+
+ ![Screenshot showing adding applications for inbound b2b direct connect](media/cross-tenant-access-settings-b2b-direct-connect/inbound-b2b-direct-connect-add-apps.png)
+
+1. Select **Save**.
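The allow/block and target choices in the steps above correspond to the `b2bDirectConnectInbound` object on a partner configuration in the Microsoft Graph beta API. A hedged Python sketch of that shape (property names such as `accessType` and `targets` are assumptions from the beta schema and may change; verify against the Graph reference):

```python
def b2b_direct_connect_inbound(allow_users=True, allow_apps=True):
    """Sketch of the b2bDirectConnectInbound block for a partner config.

    accessType 'allowed'/'blocked' mirrors the Allow access / Block
    access choice; the 'All...' targets mirror 'All external users and
    groups' and 'All applications' in the portal.
    """
    return {
        "usersAndGroups": {
            "accessType": "allowed" if allow_users else "blocked",
            "targets": [{"target": "AllUsers", "targetType": "user"}],
        },
        "applications": {
            "accessType": "allowed" if allow_apps else "blocked",
            "targets": [
                {"target": "AllApplications", "targetType": "application"}
            ],
        },
    }
```

To scope the setting to specific users, groups, or applications instead, you would replace the `All...` target with the object IDs you selected in the portal.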
+
+### To change inbound trust settings for MFA and device state
+
+1. Select the **Trust settings** tab.
+
+1. *If you're configuring settings for an organization*, select one of these options:
+
+ - **Default settings**: The organization will use the settings configured on the **Default** settings tab. If customized settings were already configured for this organization, you'll need to select **Yes** to confirm that you want all settings to be replaced by the default settings. Then select **Save**, and skip the rest of the steps in this procedure.
+
+ - **Customize settings**: You can customize the settings for this organization, which will be enforced for this organization instead of the default settings. Continue with the rest of the steps in this procedure.
+
+1. Select one or more of the following options:
+
+ - **Trust multi-factor authentication from Azure AD tenants**: Select this checkbox if your Conditional Access policies require multi-factor authentication (MFA). This setting allows your Conditional Access policies to trust MFA claims from external organizations. During authentication, Azure AD will check a user's credentials for a claim that the user has completed MFA. If not, an MFA challenge will be initiated in the user's home tenant.
+
+ - **Trust compliant devices**: Allows your Conditional Access policies to trust compliant device claims from an external organization when their users access your resources.
+
+ - **Trust hybrid Azure AD joined devices**: Allows your Conditional Access policies to trust hybrid Azure AD joined device claims from an external organization when their users access your resources.
+
+ ![Screenshot showing inbound trust settings](media/cross-tenant-access-settings-b2b-direct-connect/inbound-trust-settings.png)
+
+1. Select **Save**.
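In the Microsoft Graph beta API, these three checkboxes map onto the `inboundTrust` object. A minimal sketch, assuming the property names from the beta schema:

```python
def inbound_trust(mfa=False, compliant_device=False, hybrid_joined=False):
    """inboundTrust object: which claims from the external organization
    your Conditional Access policies will trust."""
    return {
        "isMfaAccepted": mfa,
        "isCompliantDeviceAccepted": compliant_device,
        "isHybridAzureADJoinedDeviceAccepted": hybrid_joined,
    }
```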
+
+## Modify outbound access settings
+
+With outbound settings, you select which of your users and groups will be able to access the external applications you choose. The detailed steps for modifying outbound cross-tenant access settings are the same whether you're configuring default or organization-specific settings. As described in this section, navigate to the **Default** tab or an organization on the **Organizational settings** tab, and then make your changes.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator or Security administrator account. Then open the **Azure Active Directory** service.
+
+1. Select **External Identities** > **Cross-tenant access settings (Preview)**.
+
+1. Navigate to the settings you want to modify:
+
+ - To modify default outbound settings, select the **Default settings** tab, and then under **Outbound access settings**, select **Edit outbound defaults**.
+
+ - To modify settings for a specific organization, select the **Organizational settings** tab, find the organization in the list (or [add one](#add-an-organization)) and then select the link in the **Outbound access** column.
+
+### To change the outbound access settings
+
+1. Select the **B2B direct connect** tab.
+
+1. *If you're configuring settings for an organization,* select one of these options:
+
+ - **Default settings**: The organization will use the settings configured on the **Default** settings tab. If customized settings were already configured for this organization, you'll need to select **Yes** to confirm that you want all settings to be replaced by the default settings. Then select **Save**, and skip the rest of the steps in this procedure.
+
+ - **Customize settings**: You can customize the settings for this organization, which will be enforced for this organization instead of the default settings. Continue with the rest of the steps in this procedure.
+
+1. Select **Users and groups**.
+
+1. Under **Access status**, select one of the following:
+
+ - **Allow access**: Allows your users and groups specified under **Applies to** to access B2B direct connect.
+ - **Block access**: Blocks your users and groups specified under **Applies to** from accessing B2B direct connect. Blocking access for all your users and groups also blocks all external applications from being shared via B2B direct connect.
+
+ ![Screenshot showing users and groups access status for outbound b2b direct connect](media/cross-tenant-access-settings-b2b-direct-connect/generic-outbound-external-users-groups-access.png)
+
+1. Under **Applies to**, select one of the following:
+
+ - **All \<your organization\> users**: Applies the action you chose under **Access status** to all your users and groups.
+ - **Select \<your organization\> users and groups** (requires an Azure AD premium subscription): Lets you apply the action you chose under **Access status** to specific users and groups.
+
+ ![Screenshot showing selecting target users for b2b direct connect outbound access](media/cross-tenant-access-settings-b2b-direct-connect/generic-outbound-external-users-groups-target.png)
+
+1. If you chose **Select \<your organization\> users and groups**, do the following for each user or group you want to add:
+
+ - Select **Add \<your organization\> users and groups**.
+ - In the **Select** pane, type the user name or the group name in the search box.
+ - When you're done selecting users and groups, choose **Select**.
+
+1. Select **Save**.
+1. Select the **External applications** tab.
+1. Under **Access status**, select one of the following:
+
+ - **Allow access**: Allows the applications specified under **Applies to** to be accessed by B2B direct connect users.
+ - **Block access**: Blocks the applications specified under **Applies to** from being accessed by B2B direct connect users.
+
+ ![Screenshot showing applications access status for outbound b2b direct connect](media/cross-tenant-access-settings-b2b-direct-connect/generic-outbound-applications-access.png)
+
+1. Under **Applies to**, select one of the following:
+
+ - **All external applications**: Applies the action you chose under **Access status** to all external applications.
+ - **Select external applications** (requires an Azure AD premium subscription): Lets you apply the action you chose under **Access status** to specific external applications.
+
+ ![Screenshot showing application targets for outbound b2b direct connect](media/cross-tenant-access-settings-b2b-direct-connect/generic-outbound-applications-target.png)
+
+1. If you chose **Select external applications**, do the following for each application you want to add:
+
+ - Select **Add Microsoft applications** or **Add other applications**.
+ - In the applications pane, type the application name in the search box and select the application in the search results.
+ - When you're done selecting applications, choose **Select**.
+
+ ![Screenshot showing adding external applications for outbound b2b direct connect](media/cross-tenant-access-settings-b2b-direct-connect/outbound-b2b-direct-connect-add-apps.png)
+
+1. Select **Save**.
+
+## Remove an organization
+
+When you remove an organization from your Organizational settings, the default cross-tenant access settings will go into effect for B2B direct connect with that organization.
+
+> [!NOTE]
+> If the organization is a cloud service provider for your organization (the `isServiceProvider` property in the Microsoft Graph [partner-specific configuration](/graph/api/resources/crosstenantaccesspolicyconfigurationpartner) is `true`), you won't be able to remove the organization.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator or Security administrator account. Then open the **Azure Active Directory** service.
+
+1. Select **External Identities**, and then select **Cross-tenant access settings (Preview)**.
+
+1. Select the **Organizational settings** tab.
+
+2. Find the organization in the list, and then select the trash can icon on that row.
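Because service-provider partners can't be deleted, a cleanup script should filter on `isServiceProvider` before attempting removal. A small illustrative helper (the property name comes from the partner-specific configuration resource linked above; the helper itself is hypothetical):

```python
def removable_partners(partners):
    """Return tenant IDs of partner configurations that are safe to remove.

    Partners flagged isServiceProvider=true are cloud service providers
    for your organization and can't be deleted.
    """
    return [
        p["tenantId"] for p in partners if not p.get("isServiceProvider")
    ]
```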
+
+## Next steps
+
+[Configure cross-tenant access settings for B2B collaboration (Preview)](cross-tenant-access-settings-b2b-collaboration.md)
active-directory External Identities Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/external-identities-overview.md
Previously updated : 02/23/2022 Last updated : 03/21/2022
With External Identities, external users can "bring their own identities." Wheth
The following capabilities make up External Identities: - **B2B collaboration** - Collaborate with external users by letting them use their preferred identity to sign in to your Microsoft applications or other enterprise applications (SaaS apps, custom-developed apps, etc.). B2B collaboration users are represented in your directory, typically as guest users.
-
+
+- **B2B direct connect** - Establish a mutual, two-way trust with another Azure AD organization for seamless collaboration. B2B direct connect currently supports Teams shared channels, enabling external users to access your resources from within their home instances of Teams. B2B direct connect users aren't represented in your directory, but they're visible from within the Teams shared channel and can be monitored in Teams admin center reports.
+ - **Azure AD B2C** - Publish modern SaaS apps or custom-developed apps (excluding Microsoft apps) to consumers and customers, while using Azure AD B2C for identity and access management. Depending on how you want to interact with external organizations and the types of resources you need to share, you can use a combination of these capabilities.
Depending on how you want to interact with external organizations and the types
## B2B collaboration
-With B2B collaboration, you can invite anyone to sign in to your Azure AD organization using their own credentials so they can access the apps and resources you want to share with them. Use B2B collaboration when you need to let external users access your Office 365 apps, software-as-a-service (SaaS) apps, and line-of-business applications, especially when the partner doesn't use Azure AD. There are no credentials associated with B2B collaboration users. Instead, they authenticate with their home organization or identity provider, and then your organization checks the guest userΓÇÖs eligibility for B2B collaboration.
+With [B2B collaboration](what-is-b2b.md), you can invite anyone to sign in to your Azure AD organization using their own credentials so they can access the apps and resources you want to share with them. Use B2B collaboration when you need to let external users access your Office 365 apps, software-as-a-service (SaaS) apps, and line-of-business applications, especially when the partner doesn't use Azure AD or it's impractical for administrators to set up a mutual connection through B2B direct connect. There are no credentials associated with B2B collaboration users. Instead, they authenticate with their home organization or identity provider, and then your organization checks the guest user's eligibility for B2B collaboration.
There are various ways to add external users to your organization for B2B collaboration:
A user object is created for the B2B collaboration user in the same directory as
You can use [cross-tenant access settings](cross-tenant-access-overview.md) to manage B2B collaboration with other Azure AD organizations. For B2B collaboration with non-Azure AD external users and organizations, use [external collaboration settings](external-collaboration-settings-configure.md).
-Learn more about [B2B collaboration in Azure AD](what-is-b2b.md).
+## B2B direct connect
+
+B2B direct connect is a new way to collaborate with other Azure AD organizations. With B2B direct connect, you create two-way trust relationships with other Azure AD organizations to allow users to seamlessly sign in to your shared resources and vice versa. B2B direct connect users aren't added as guests to your Azure AD directory. When two organizations mutually enable B2B direct connect, users authenticate in their home organization and receive a token from the resource organization for access. Learn more about [B2B direct connect in Azure AD](b2b-direct-connect-overview.md).
+
+Currently, B2B direct connect enables the Teams Connect shared channels feature, which lets your users collaborate with external users from multiple organizations with a Teams shared channel for chat, calls, file-sharing, and app-sharing. Once you've set up B2B direct connect with an external organization, the following Teams shared channels capabilities become available:
+
+- Within Teams, a shared channel owner can search for allowed users from the external organization and add them to the shared channel.
+
+- External users can access the Teams shared channel without having to switch organizations or sign in with a different account. From within Teams, the external user can access files and apps through the Files tab. The user's access is determined by the shared channel's policies.
+
+You use [cross-tenant access settings](cross-tenant-access-settings-b2b-collaboration.md) to manage trust relationships with other Azure AD organizations and define inbound and outbound policies for B2B direct connect.
+
+For details about the resources, files, and applications that are available to the B2B direct connect user via the Teams shared channel, see [Chat, teams, channels, & apps in Microsoft Teams](/microsoftteams/deploy-chat-teams-channels-microsoft-teams-landing-page).
## Azure AD B2C
Although Azure AD B2C is built on the same technology as Azure AD, it's a separa
## Comparing External Identities feature sets
-The following table gives a detailed comparison of the scenarios you can enable with Azure AD External Identities. In the B2B scenarios, an external user is anyone who is not homed in your Azure AD organization.
+The following table gives a detailed comparison of the scenarios you can enable with Azure AD External Identities. In the B2B scenarios, an external user is anyone who isn't homed in your Azure AD organization.
-| | B2B collaboration | Azure AD B2C |
-| - | | |
-| **Primary scenario** | Collaborate with external users by letting them use their preferred identity to sign in to resources in your Azure AD organization. Provides access to Microsoft applications or your own applications (SaaS apps, custom-developed apps, etc.). <br><br> *Example:* Invite an external user to sign in to your Microsoft apps or become a guest member in Teams. | Publish apps to consumers and customers using Azure AD B2C for identity experiences. Provides identity and access management for modern SaaS or custom-developed applications (not first-party Microsoft apps). |
-| **Intended for** | Collaborating with business partners from external organizations like suppliers, partners, vendors. These users may or may not have Azure AD or managed IT. | Customers of your product. These users are managed in a separate Azure AD directory. |
-| **User management** | B2B collaboration users are managed in the same directory as employees but are typically annotated as guest users. Guest users can be managed the same way as employees, added to the same groups, and so on. Cross-tenant access settings can be used to determine which users have access to B2B collaboration. | User objects are created for consumer users in your Azure AD B2C directory. They're managed separately from the organization's employee and partner directory (if any). |
-| **Identity providers supported** | External users can collaborate using work accounts, school accounts, any email address, SAML and WS-Fed based identity providers, and social identity providers like Gmail and Facebook. | Consumer users with local application accounts (any email address, user name, or phone number), Azure AD, various supported social identities, and users with corporate and government-issued identities via SAML/WS-Fed-based identity provider federation. |
-| **Single sign-on (SSO)** | SSO to all Azure AD-connected apps is supported. For example, you can provide access to Microsoft 365 or on-premises apps, and to other SaaS apps such as Salesforce or Workday. | SSO to customer owned apps within the Azure AD B2C tenants is supported. SSO to Microsoft 365 or to other Microsoft SaaS apps isn't supported. |
-| **Licensing and billing** | Based on monthly active users (MAU), including B2B collaboration and Azure AD B2C users. Learn more about [External Identities pricing](https://azure.microsoft.com/pricing/details/active-directory/external-identities/) and [billing setup for B2B](external-identities-pricing.md). | Based on monthly active users (MAU), including B2B collaboration and Azure AD B2C users. Learn more about [External Identities pricing](https://azure.microsoft.com/pricing/details/active-directory/external-identities/) and [billing setup for Azure AD B2C](../../active-directory-b2c/billing.md). |
-| **Security policy and compliance** | Managed by the host/inviting organization (for example, with [Conditional Access policies](authentication-conditional-access.md) and cross-tenant access settings). | Managed by the organization via Conditional Access and Identity Protection. |
-| **Branding** | Host/inviting organization's brand is used. | Fully customizable branding per application or organization. |
-| **More information** | [Blog post](https://blogs.technet.microsoft.com/enterprisemobility/2017/02/01/azure-ad-b2b-new-updates-make-cross-business-collab-easy/), [Documentation](what-is-b2b.md) | [Product page](https://azure.microsoft.com/services/active-directory-b2c/), [Documentation](../../active-directory-b2c/index.yml) |
+| | B2B collaboration | B2B direct connect | Azure AD B2C |
+| - | | | |
+| **Primary scenario** | Collaborate with external users by letting them use their preferred identity to sign in to resources in your Azure AD organization. Provides access to Microsoft applications or your own applications (SaaS apps, custom-developed apps, etc.). <br><br> *Example:* Invite an external user to sign in to your Microsoft apps or become a guest member in Teams. | Collaborate with users from other Azure AD organizations by establishing a mutual connection. Currently can be used with Teams shared channels, which external users can access from within their home instances of Teams. <br><br> *Example:* Add an external user to a Teams shared channel, which provides a space to chat, call, and share content. | Publish apps to consumers and customers using Azure AD B2C for identity experiences. Provides identity and access management for modern SaaS or custom-developed applications (not first-party Microsoft apps). |
+| **Intended for** | Collaborating with business partners from external organizations like suppliers, partners, vendors. These users may or may not have Azure AD or managed IT. | Collaborating with business partners from external organizations that use Azure AD, like suppliers, partners, vendors. | Customers of your product. These users are managed in a separate Azure AD directory. |
+| **User management** | B2B collaboration users are managed in the same directory as employees but are typically annotated as guest users. Guest users can be managed the same way as employees, added to the same groups, and so on. Cross-tenant access settings can be used to determine which users have access to B2B collaboration. | No user object is created in your Azure AD directory. Cross-tenant access settings determine which users have access to B2B direct connect. Shared channel users can be managed in Teams, and users' access is determined by the Teams shared channel's policies. | User objects are created for consumer users in your Azure AD B2C directory. They're managed separately from the organization's employee and partner directory (if any). |
+| **Identity providers supported** | External users can collaborate using work accounts, school accounts, any email address, SAML and WS-Fed based identity providers, and social identity providers like Gmail and Facebook. | External users collaborate using Azure AD work accounts or school accounts. | Consumer users with local application accounts (any email address, user name, or phone number), Azure AD, various supported social identities, and users with corporate and government-issued identities via SAML/WS-Fed-based identity provider federation. |
+| **Single sign-on (SSO)** | SSO to all Azure AD-connected apps is supported. For example, you can provide access to Microsoft 365 or on-premises apps, and to other SaaS apps such as Salesforce or Workday. | SSO to a Teams shared channel. | SSO to customer owned apps within the Azure AD B2C tenants is supported. SSO to Microsoft 365 or to other Microsoft SaaS apps isn't supported. |
+| **Licensing and billing** | Based on monthly active users (MAU), including B2B collaboration and Azure AD B2C users. Learn more about [External Identities pricing](https://azure.microsoft.com/pricing/details/active-directory/external-identities/) and [billing setup for B2B](external-identities-pricing.md). | Based on monthly active users (MAU), including B2B collaboration, B2B direct connect, and Azure AD B2C users. Learn more about [External Identities pricing](https://azure.microsoft.com/pricing/details/active-directory/external-identities/) and [billing setup for B2B](external-identities-pricing.md). | Based on monthly active users (MAU), including B2B collaboration and Azure AD B2C users. Learn more about [External Identities pricing](https://azure.microsoft.com/pricing/details/active-directory/external-identities/) and [billing setup for Azure AD B2C](../../active-directory-b2c/billing.md). |
+| **Security policy and compliance** | Managed by the host/inviting organization (for example, with [Conditional Access policies](authentication-conditional-access.md) and cross-tenant access settings). | Managed by the host/inviting organization (for example, with [Conditional Access policies](authentication-conditional-access.md) and cross-tenant access settings). See also the [Teams documentation](/microsoftteams/security-compliance-overview). | Managed by the organization via Conditional Access and Identity Protection. |
+| **Branding** | Host/inviting organization's brand is used. | For sign-in screens, the user's home organization brand is used. In the shared channel, the resource organization's brand is used. | Fully customizable branding per application or organization. |
+| **More information** | [Blog post](https://blogs.technet.microsoft.com/enterprisemobility/2017/02/01/azure-ad-b2b-new-updates-make-cross-business-collab-easy/), [Documentation](what-is-b2b.md) | [Documentation](b2b-direct-connect-overview.md) | [Product page](https://azure.microsoft.com/services/active-directory-b2c/), [Documentation](../../active-directory-b2c/index.yml) |
## Managing External Identities features
-Azure AD B2B collaboration is a feature of Azure AD, and it's managed in the Azure portal through the Azure Active Directory service. To control inbound and outbound collaboration with other Azure AD organizations, you can use *cross-tenant access settings*. To control inbound collaboration with other non-Azure AD organizations, you can use *external collaboration settings*.
+Azure AD B2B collaboration and B2B direct connect are features of Azure AD, and they're managed in the Azure portal through the Azure Active Directory service. To control inbound and outbound collaboration, you can use a combination of *cross-tenant access settings* and *external collaboration settings*.
### Cross-tenant access settings (Preview)
-Cross-tenant access settings let you manage B2B collaboration with other Azure AD organizations. You can determine how other Azure AD organizations collaborate with you (inbound access), and how your users collaborate with other Azure AD organizations (outbound access). Granular controls let you determine the people, groups, and apps, both in your organization and in external Azure AD organizations, that can participate in B2B collaboration. You can also trust multi-factor authentication (MFA) and device claims (compliant claims and hybrid Azure AD joined claims) from other Azure AD organizations.
+Cross-tenant access settings let you manage B2B collaboration and B2B direct connect with other Azure AD organizations. You can determine how other Azure AD organizations collaborate with you (inbound access), and how your users collaborate with other Azure AD organizations (outbound access). Granular controls let you determine the people, groups, and apps, both in your organization and in external Azure AD organizations, that can participate in B2B collaboration and B2B direct connect. You can also trust multi-factor authentication (MFA) and device claims (compliant claims and hybrid Azure AD joined claims) from other Azure AD organizations.
-- **Default cross-tenant access settings** determine your baseline inbound and outbound settings for B2B collaboration. Initially, your default settings are configured to allow all inbound and outbound B2B collaboration with other Azure AD organizations. You can change these initial settings to create your own default configuration.
+- **Default cross-tenant access settings** determine your baseline inbound and outbound settings for both B2B collaboration and B2B direct connect. Initially, your default settings are configured to allow all inbound and outbound B2B collaboration with other Azure AD organizations and to block B2B direct connect with all Azure AD organizations. You can change these initial settings to create your own default configuration.
-- **Organization-specific access settings** let you configure customized settings for individual Azure AD organizations. Once you add an organization and customize your cross-tenant access settings with this organization, these settings will take precedence over your defaults. For example, you could enable B2B collaboration with all external organizations by default, but disable this feature only for Fabrikam.
+- **Organization-specific access settings** let you configure customized settings for individual Azure AD organizations. Once you add an organization and customize your cross-tenant access settings with this organization, these settings will take precedence over your defaults. For example, you could disable B2B collaboration and B2B direct connect with all external organizations by default, but enable these features only for Fabrikam.
For more information, see [Cross-tenant access in Azure AD External Identities](cross-tenant-access-overview.md).
For details about configuring and managing Azure AD B2C, see the [Azure AD B2C d
## Related Azure AD technologies
-There are several Azure AD technologies that are related to collaboration with external users and organizations. As you design your External Identities collaboration model, consider these additional features.
+There are several Azure AD technologies that are related to collaboration with external users and organizations. As you design your External Identities collaboration model, consider these other features.
### Azure AD entitlement management for B2B guest user sign-up
As an inviting organization, you might not know ahead of time who the individual
Microsoft Graph APIs are available for creating and managing External Identities features.
-- **Cross-tenant access settings API**: The [Microsoft Graph cross-tenant access API](/graph/api/resources/crosstenantaccesspolicy-overview?view=graph-rest-beta) lets you programmatically create the same B2B collaboration policies that are configurable in the Azure portal. Using the API, you can set up policies for inbound and outbound collaboration to allow or block features for everyone by default and limit access to specific organizations, groups, users, and applications. The API also allows you to accept MFA and device claims (compliant claims and hybrid Azure AD joined claims) from other Azure AD organizations.
+- **Cross-tenant access settings API**: The [Microsoft Graph cross-tenant access API](/graph/api/resources/crosstenantaccesspolicy-overview?view=graph-rest-beta) lets you programmatically create the same B2B collaboration and B2B direct connect policies that are configurable in the Azure portal. Using the API, you can set up policies for inbound and outbound collaboration to allow or block features for everyone by default and limit access to specific organizations, groups, users, and applications. The API also allows you to accept MFA and device claims (compliant claims and hybrid Azure AD joined claims) from other Azure AD organizations.
- **B2B collaboration invitation manager**: The [Microsoft Graph invitation manager API](/graph/api/resources/invitation) is available for building your own onboarding experiences for B2B guest users. You can use the [create invitation API](/graph/api/invitation-post?tabs=http) to automatically send a customized invitation email directly to the B2B user, for example. Or your app can use the inviteRedeemUrl returned in the creation response to craft your own invitation (through your communication mechanism of choice) to the invited user.

### Conditional Access
-Organizations can enforce Conditional Access policies for external B2B collaboration users in the same way that they're enabled for full-time employees and members of the organization. For Azure AD cross-tenant scenarios, if your Conditional Access policies require MFA or device compliance, you can now trust MFA and device compliance claims from an external user's home organization. When trust settings are enabled, during authentication, Azure AD will check a user's credentials for an MFA claim or a device ID to determine if the policies have already been met. If so, the external user will be granted seamless sign-on to your shared resource. Otherwise, an MFA or device challenge will be initiated in the user's home tenant. Learn more about the [authentication flow and Conditional Access for external users](authentication-conditional-access.md).
+Organizations can enforce Conditional Access policies for external B2B collaboration and B2B direct connect users in the same way that they're enabled for full-time employees and members of the organization. For Azure AD cross-tenant scenarios, if your Conditional Access policies require MFA or device compliance, you can now trust MFA and device compliance claims from an external user's home organization. When trust settings are enabled, during authentication, Azure AD will check a user's credentials for an MFA claim or a device ID to determine if the policies have already been met. If so, the external user will be granted seamless sign-on to your shared resource. Otherwise, an MFA or device challenge will be initiated in the user's home tenant. Learn more about the [authentication flow and Conditional Access for external users](authentication-conditional-access.md).
### Multitenant applications
If you offer a Software as a Service (SaaS) application to many organizations, y
## Next steps

- [What is Azure AD B2B collaboration?](what-is-b2b.md)
+- [What is Azure AD B2B direct connect?](b2b-direct-connect-overview.md)
- [About Azure AD B2C](../../active-directory-b2c/overview.md)
active-directory Leave The Organization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/leave-the-organization.md
Previously updated : 09/10/2021 Last updated : 03/21/2022
-# Leave an organization as a guest user
+# Leave an organization as a B2B collaboration user
-An Azure Active Directory (Azure AD) B2B guest user can decide to leave an organization at any time if they no longer need to use apps from that organization or maintain any association. A user can leave an organization on their own, without having to contact an administrator.
+An Azure Active Directory (Azure AD) B2B collaboration user can decide to leave an organization at any time if they no longer need to use apps from that organization or maintain any association.
+
+- **B2B collaboration users** can usually leave an organization on their own without having to contact an administrator. This option won't be available if it's not allowed by the organization, or if the B2B collaboration user's account has been disabled. The user will need to contact the tenant admin, who can delete the account.
+
+- **B2B direct connect users** don't currently have the option to leave the external organization. If you're a B2B direct connect user at an organization, you can contact your IT admin to submit a Data Subject Request, which is a request to remove the personal data associated with your B2B direct connect user account from the organization.
+
-> [!NOTE]
-> A guest user can't leave an organization if their account is disabled in either the home tenant or the resource tenant. If their account is disabled, the guest user will need to contact the tenant admin, who can either delete the guest account or enable the guest account so the user can leave the organization.
## Leave an organization
To leave an organization, follow these steps.
1. Go to your **My Account** page by doing one of the following:
   - If you're using a work or school account, go to https://myaccount.microsoft.com and sign in.
-- If you're using a personal account, go to https://myapps.microsoft.com and sign in, and then click your account icon in the upper right and select **View account**. Or, use a My Account URL that includes your tenant information to go directly to your My Account page (examples are shown in the following note).
+- If you're using a personal account, go to https://myapps.microsoft.com and sign in, and then select your account icon in the upper right and select **View account**. Or, use a My Account URL that includes your tenant information to go directly to your My Account page (examples are shown in the following note).
> [!NOTE]
> If you use the email one-time passcode feature when signing in, you'll need to use a My Account URL that includes your tenant name or tenant ID, for example: `https://myaccount.microsoft.com?tenantId=wingtiptoys.onmicrosoft.com` or `https://myaccount.microsoft.com?tenantId=ab123456-cd12-ef12-gh12-ijk123456789`.
To leave an organization, follow these steps.
## Account removal
-When a user leaves an organization, the user account is "soft deleted" in the directory. By default, the user object moves to the **Deleted users** area in Azure AD but isn't permanently deleted for 30 days. This soft deletion enables the administrator to restore the user account (including groups and permissions), if the user makes a request to restore the account within the 30-day period.
+When a B2B collaboration user leaves an organization, the B2B collaboration user account is "soft deleted" in the directory. By default, the user object moves to the **Deleted users** area in Azure AD but isn't permanently deleted for 30 days. This soft deletion enables the administrator to restore the B2B collaboration user account, including groups and permissions, if the user makes a request to restore the account before it's permanently deleted.
-If desired, a tenant administrator can permanently delete the account at any time during the 30-day period. To do this:
+If desired, a tenant administrator can permanently delete the account at any time during the soft-delete period:
1. In the [Azure portal](https://portal.azure.com), select **Azure Active Directory**.
2. Under **Manage**, select **Users**.
3. Select **Deleted users**.
4. Select the check box next to a deleted user, and then select **Delete permanently**.
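The same permanent deletion can also be sketched with Microsoft Graph's deleted-items endpoints. This is a hedged illustration, not part of the article's steps; `{id}` is a placeholder for the deleted user's object ID.

```http
# List soft-deleted users (the "Deleted users" area in the portal)
GET https://graph.microsoft.com/v1.0/directory/deletedItems/microsoft.graph.user

# Permanently delete one of them ({id} is a placeholder object ID)
DELETE https://graph.microsoft.com/v1.0/directory/deletedItems/{id}
```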
-If you permanently delete a user, this action is irrevocable.
-
+If you permanently delete a B2B collaboration user account, this action is irrevocable.
## Next steps

- For an overview of Azure AD B2B, see [What is Azure AD B2B collaboration?](what-is-b2b.md)
active-directory Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/troubleshoot.md
Previously updated : 02/14/2022 Last updated : 03/21/2022 tags: active-directory
Here are some remedies for common problems with Azure Active Directory (Azure AD
> - **Starting September 30, 2021**, Google is [deprecating embedded web-view sign-in support](https://developers.googleblog.com/2016/08/modernizing-oauth-interactions-in-native-apps.html). If your apps authenticate users with an embedded web-view and you're using Google federation with [Azure AD B2C](../../active-directory-b2c/identity-provider-google.md) or Azure AD B2B for [external user invitations](google-federation.md) or [self-service sign-up](identity-providers.md), Google Gmail users won't be able to authenticate. [Learn more](google-federation.md#deprecation-of-web-view-sign-in-support). > - **Starting July 2022**, we'll begin rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. As part of this change, Microsoft will stop creating new, unmanaged ("viral") Azure AD accounts and tenants during B2B collaboration invitation redemption. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, if you don't want to allow this feature to turn on automatically, you can [disable it](one-time-passcode.md#disable-email-one-time-passcode). +
+## B2B direct connect user is unable to access a shared channel (error AADSTS90071)
+
+When a B2B direct connect user sees the following error message when trying to access another organization's Teams shared channel, it means the external organization hasn't configured multi-factor authentication trust settings:
+
+> The organization you're trying to reach needs to update their settings to let you sign in.
+>
+> AADSTS90071: An admin from *&lt;organization&gt;* must update their access settings to accept inbound multifactor authentication.
+
+The organization hosting the Teams shared channel must enable the trust setting for multi-factor authentication to allow access to B2B direct connect users. Trust settings are configurable in an organization's [cross-tenant access settings](cross-tenant-access-settings-b2b-direct-connect.md).
+ ## An error similar to "Failure to update policy due to object limit" appears while configuring cross-tenant access settings
-While configuring [cross-tenant access settings](cross-tenant-access-settings-b2b-collaboration.md), if you receive an error that says "Failure to update policy due to object limit" you have reached the policy object limit of 25 KB. We're working toward increasing this limit. If you need to be able to calculate how close the current policy is to this limit, do the following:
+As you configure [cross-tenant access settings](cross-tenant-access-settings-b2b-collaboration.md), if you receive an error that says "Failure to update policy due to object limit," you've reached the policy object limit of 25 KB. We're working toward increasing this limit. If you need to be able to calculate how close the current policy is to this limit, do the following:
1. Open Microsoft Graph Explorer and run the following:
if ($size -le $maxSize) { return "valid" } else { return "invalid" }
## Users can no longer read email encrypted with Microsoft Rights Management Service (OME)
-When [configuring cross-tenant access settings](cross-tenant-access-settings-b2b-collaboration.md), if you block access to all apps by default, users will be unable to read emails encrypted with Microsoft Rights Management Service (also known as OME). To avoid this issue, we recommend configuring your outbound settings to allow your users to access this app ID: 00000012-0000-0000-c000-000000000000. If this is the only application you allow, access to all other apps will be blocked by default.
+As you configure [cross-tenant access settings](cross-tenant-access-settings-b2b-collaboration.md), if you block access to all apps by default, users will be unable to read emails encrypted with Microsoft Rights Management Service (also known as OME). To avoid this issue, we recommend configuring your outbound settings to allow your users to access this app ID: 00000012-0000-0000-c000-000000000000. If this is the only application you allow, access to all other apps will be blocked by default.
-## I've added an external user but do not see them in my Global Address Book or in the people picker
+## I've added an external user but don't see them in my Global Address Book or in the people picker
-In cases where external users are not populated in the list, the object might take a few minutes to replicate.
+In cases where external users aren't populated in the list, the object might take a few minutes to replicate.
-## A B2B guest user is not showing up in SharePoint Online/OneDrive people picker
+## A B2B guest user isn't showing up in SharePoint Online/OneDrive people picker
The ability to search for existing guest users in the SharePoint Online (SPO) people picker is OFF by default to match legacy behavior.
-You can enable this feature by using the setting 'ShowPeoplePickerSuggestionsForGuestUsers' at the tenant and site collection level. You can set the feature using the Set-SPOTenant and Set-SPOSite cmdlets, which allow members to search all existing guest users in the directory. Changes in the tenant scope do not affect already provisioned SPO sites.
+You can enable this feature by using the setting 'ShowPeoplePickerSuggestionsForGuestUsers' at the tenant and site collection level. You can set the feature using the Set-SPOTenant and Set-SPOSite cmdlets, which allow members to search all existing guest users in the directory. Changes in the tenant scope don't affect already provisioned SPO sites.
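As a hedged sketch of the cmdlets named above (it requires the SharePoint Online Management Shell and a connected session; the site URL is a placeholder, not a value from this article):

```powershell
# Sketch only: enable guest-user suggestions in the SPO people picker.
# Assumes Connect-SPOService has already been run against your tenant.

# Tenant level:
Set-SPOTenant -ShowPeoplePickerSuggestionsForGuestUsers $true

# Site collection level (URL is a placeholder):
Set-SPOSite -Identity https://contoso.sharepoint.com/sites/example `
    -ShowPeoplePickerSuggestionsForGuestUsers $true
```

As the text notes, changing the tenant-level setting doesn't affect already provisioned SPO sites, which is why the site-level cmdlet is shown as well.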
## My guest invite settings and domain restrictions aren't being respected by SharePoint Online/OneDrive
-By default, SharePoint Online and OneDrive have their own set of external user options and do not use the settings from Azure AD. You need to enable [SharePoint and OneDrive integration with Azure AD B2B](/sharepoint/sharepoint-azureb2b-integration-preview) to ensure the options are consistent among those applications.
+By default, SharePoint Online and OneDrive have their own set of external user options and don't use the settings from Azure AD. You need to enable [SharePoint and OneDrive integration with Azure AD B2B](/sharepoint/sharepoint-azureb2b-integration-preview) to ensure the options are consistent among those applications.
## Invitations have been disabled for directory
-If you are notified that you do not have permissions to invite users, verify that your user account is authorized to invite external users under Azure Active Directory > User settings > External users > Manage external collaboration settings:
+If you're notified that you don't have permissions to invite users, verify that your user account is authorized to invite external users under Azure Active Directory > User settings > External users > Manage external collaboration settings:
![Screenshot showing the External Users settings](media/troubleshoot/external-user-settings.png)
-If you have recently modified these settings or assigned the Guest Inviter role to a user, there might be a 15-60 minute delay before the changes take effect.
+If you've recently modified these settings or assigned the Guest Inviter role to a user, there might be a 15-60 minute delay before the changes take effect.
## The user that I invited is receiving an error during redemption
Common errors include:
### Invitee's Admin has disallowed EmailVerified Users from being created in their tenant
-When inviting users whose organization is using Azure Active Directory, but where the specific user's account does not exist (for example, the user does not exist in Azure AD contoso.com). The administrator of contoso.com may have a policy in place preventing users from being created. The user must check with their admin to determine if external users are allowed. The external user's admin may need to allow Email Verified users in their domain (see this [article](/powershell/module/msonline/set-msolcompanysettings) on allowing Email Verified Users).
+When inviting users whose organization is using Azure Active Directory, but where the specific user's account doesn't exist (for example, the user doesn't exist in Azure AD contoso.com). The administrator of contoso.com may have a policy in place preventing users from being created. The user must check with their admin to determine if external users are allowed. The external user's admin may need to allow Email Verified users in their domain (see this [article](/powershell/module/msonline/set-msolcompanysettings) on allowing Email Verified Users).
-![Error stating the tenant does not allow email verified users](media/troubleshoot/allow-email-verified-users.png)
+![Error stating the tenant doesn't allow email verified users](media/troubleshoot/allow-email-verified-users.png)
-### External user does not exist already in a federated domain
+### External user doesn't exist already in a federated domain
-If you are using federation authentication and the user does not already exist in Azure Active Directory, the user cannot be invited.
+If you're using federation authentication and the user doesn't already exist in Azure Active Directory, the user can't be invited.
To resolve this issue, the external user's admin must synchronize the user's account to Azure Active Directory.

### External user has a proxyAddress that conflicts with a proxyAddress of an existing local user
-When we check whether a user is able to be invited to your tenant, one of the things we check for is for a collision in the proxyAddress. This includes any proxyAddresses for the user in their home tenant and any proxyAddress for local users in your tenant. For external users, we will add the email to the proxyAddress of the existing B2B user. For local users, you can ask them to sign in using the account they already have.
+When we check whether a user is able to be invited to your tenant, one of the things we check for is for a collision in the proxyAddress. This includes any proxyAddresses for the user in their home tenant and any proxyAddress for local users in your tenant. For external users, we'll add the email to the proxyAddress of the existing B2B user. For local users, you can ask them to sign in using the account they already have.
## I can't invite an email address because of a conflict in proxyAddresses
This happens when another object in the directory has the same invited email add
## The guest user object doesn't have a proxyAddress
-When inviting an external guest user, sometimes this will conflict with an existing [Contact object](/graph/api/resources/contact). When this occurs, the guest user is created without a proxyAddress. This means that the user will not be able to redeem this account using [just-in-time redemption](redemption-experience.md#redemption-through-a-direct-link) or [email one-time passcode authentication](one-time-passcode.md#user-experience-for-one-time-passcode-guest-users).
+Sometimes, the external guest user you're inviting conflicts with an existing [Contact object](/graph/api/resources/contact). When this occurs, the guest user is created without a proxyAddress. This means that the user won't be able to redeem this account using [just-in-time redemption](redemption-experience.md#redemption-through-a-direct-link) or [email one-time passcode authentication](one-time-passcode.md#user-experience-for-one-time-passcode-guest-users).
-## How does '\#', which is not normally a valid character, sync with Azure AD?
+## How does '\#', which isn't normally a valid character, sync with Azure AD?
"\#" is a reserved character in UPNs for Azure AD B2B collaboration or external users, because the invited account user@contoso.com becomes user_contoso.com#EXT#@fabrikam.onmicrosoft.com. Therefore, \# in UPNs coming from on-premises aren't allowed to sign in to the Azure portal.
When inviting an external guest user, sometimes this will conflict with an exist
External users can be added only to "assigned" or "Security" groups and not to groups that are mastered on-premises.
-## My external user did not receive an email to redeem
+## My external user didn't receive an email to redeem
The invitee should check with their ISP or spam filter to ensure that the following address is allowed: Invites@microsoft.com
The invitee should check with their ISP or spam filter to ensure that the follow
> - For the Azure service operated by 21Vianet in China, the sender address is Invites@oe.21vianet.com.
> - For the Azure AD Government cloud, the sender address is invites@azuread.us.
-## I notice that the custom message does not get included with invitation messages at times
+## I notice that the custom message doesn't get included with invitation messages at times
-To comply with privacy laws, our APIs do not include custom messages in the email invitation when:
+To comply with privacy laws, our APIs don't include custom messages in the email invitation when:
- The inviter doesn't have an email address in the inviting tenant
- When an app service principal sends the invitation

If this scenario is important to you, you can suppress our API invitation email, and send it through the email mechanism of your choice. Consult your organization's legal counsel to make sure any email you send this way also complies with privacy laws.
-## You receive an "AADSTS65005" error when you try to log in to an Azure resource
+## You receive an "AADSTS65005" error when you try to sign in to an Azure resource
-A user who has a guest account cannot log on, and is receiving the following error message:
+A user who has a guest account can't sign in, and is receiving the following error message:
```plaintext
AADSTS65005: Using application 'AppName' is currently not supported for your organization contoso.com because it is in an unmanaged state. An administrator needs to claim ownership of the company by DNS validation of contoso.com before the application AppName can be provisioned.
```
As of November 18, 2019, guest users in your directory (defined as user accounts
Within the Azure US Government cloud, B2B collaboration is currently only supported between tenants that are both within Azure US Government cloud and that both support B2B collaboration. If you invite a user in a tenant that isn't part of the Azure US Government cloud or that doesn't yet support B2B collaboration, you'll get an error. For details and limitations, see [Azure Active Directory Premium P1 and P2 Variations](../../azure-government/compare-azure-government-global-azure.md#azure-active-directory-premium-p1-and-p2).
-## I receive the error that Azure AD cannot find the aad-extensions-app in my tenant
+## I receive the error that Azure AD can't find the aad-extensions-app in my tenant
-When using self-service sign-up features, like custom user attributes or user flows, an app called `aad-extensions-app. Do not modify. Used by AAD for storing user data.` is automatically created. It's used by Azure AD External Identities to store information about users who sign up and custom attributes collected.
+When you're using self-service sign-up features, like custom user attributes or user flows, an app called `aad-extensions-app. Do not modify. Used by AAD for storing user data.` is automatically created. It's used by Azure AD External Identities to store information about users who sign up and custom attributes collected.
If you accidentally deleted the `aad-extensions-app`, you have 30 days to recover it. You can restore the app using the Azure AD PowerShell module.
If you accidentally deleted the `aad-extensions-app`, you have 30 days to recove
You should now see the restored app in the Azure portal.
-## A guest user was invited successfully but the email attribute is not populating
+## A guest user was invited successfully but the email attribute isn't populating
Let's say you inadvertently invite a guest user with an email address that matches a user object already in your directory. The guest user object is created, but the email address is added to the `otherMail` property instead of to the `mail` or `proxyAddresses` properties. To avoid this issue, you can search for conflicting user objects in your Azure AD directory by using these PowerShell steps:
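The article's actual PowerShell steps are elided here; as a minimal sketch of the kind of search involved (assuming the AzureAD module with a connected session, and using a hypothetical conflicting address), you might filter existing users like this:

```powershell
# Sketch only: find existing user objects whose mail, proxyAddresses, or
# otherMails already contain the invited address. Assumes Connect-AzureAD
# has been run; the address below is a hypothetical example.
$address = "user@contoso.com"

Get-AzureADUser -All $true | Where-Object {
    $_.Mail -eq $address -or
    $_.ProxyAddresses -contains "SMTP:$address" -or
    $_.OtherMails -contains $address
}
```

Any object this returns is a candidate for the conflict described above, since the invited email lands in `otherMail` instead of `mail` or `proxyAddresses` when a match exists.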
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md
Previously updated : 02/18/2022 Last updated : 03/17/2022
Users with this role have access to all administrative features in Azure Active
> | microsoft.dynamics365/allEntities/allTasks | Manage all aspects of Dynamics 365 |
> | microsoft.edge/allEntities/allProperties/allTasks | Manage all aspects of Microsoft Edge |
> | microsoft.flow/allEntities/allTasks | Manage all aspects of Microsoft Power Automate |
+> | microsoft.insights/allEntities/allProperties/allTasks | Manage all aspects of Insights app |
> | microsoft.intune/allEntities/allTasks | Manage all aspects of Microsoft Intune |
> | microsoft.office365.complianceManager/allEntities/allTasks | Manage all aspects of Office 365 Compliance Manager |
> | microsoft.office365.desktopAnalytics/allEntities/allTasks | Manage all aspects of Desktop Analytics |
Users with this role have access to all administrative features in Azure Active
> | microsoft.powerApps/allEntities/allTasks | Manage all aspects of Power Apps |
> | microsoft.powerApps.powerBI/allEntities/allTasks | Manage all aspects of Power BI |
> | microsoft.teams/allEntities/allProperties/allTasks | Manage all resources in Teams |
+> | microsoft.virtualVisits/allEntities/allProperties/allTasks | Manage and share Virtual Visits information and metrics from admin centers or the Virtual Visits app |
> | microsoft.windows.defenderAdvancedThreatProtection/allEntities/allTasks | Manage all aspects of Microsoft Defender for Endpoint |
> | microsoft.windows.updatesDeployments/allEntities/allProperties/allTasks | Read and configure all aspects of Windows Update Service |
Users in this role can read settings and administrative information across Micro
> | microsoft.cloudPC/allEntities/allProperties/read | Read all aspects of Windows 365 |
> | microsoft.commerce.billing/allEntities/read | Read all resources of Office 365 billing |
> | microsoft.edge/allEntities/allProperties/read | Read all aspects of Microsoft Edge |
+> | microsoft.insights/allEntities/allProperties/read | Read all aspects of Viva Insights |
> | microsoft.office365.exchange/allEntities/standard/read | Read all resources of Exchange Online |
> | microsoft.office365.messageCenter/messages/read | Read messages in Message Center in the Microsoft 365 admin center, excluding security messages |
> | microsoft.office365.messageCenter/securityMessages/read | Read security messages in Message Center in the Microsoft 365 admin center |
Users in this role can read settings and administrative information across Micro
> | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center |
> | microsoft.office365.yammer/allEntities/allProperties/read | Read all aspects of Yammer |
> | microsoft.teams/allEntities/allProperties/read | Read all properties of Microsoft Teams |
+> | microsoft.virtualVisits/allEntities/allProperties/read | Read all aspects of Virtual Visits |
> | microsoft.windows.updatesDeployments/allEntities/allProperties/read | Read all aspects of Windows Update Service |

## Groups Administrator
Users in this role can access the full set of administrative capabilities in the
> | | |
> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health |
> | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets |
-> | microsoft.insights/allEntities/allTasks | Manage all aspects of Insights app |
+> | microsoft.insights/allEntities/allProperties/allTasks | Manage all aspects of Insights app |
> | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Service Health in the Microsoft 365 admin center |
> | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Microsoft 365 service requests |
> | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center |
Users in this role can access a set of dashboards and insights via the [Microsof
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
-> | microsoft.insights/reports/read | View reports and dashboard in Insights app |
-> | microsoft.insights/programs/update | Deploy and manage programs in Insights app |
+> | microsoft.insights/reports/allProperties/read | View reports and dashboard in Insights app |
+> | microsoft.insights/programs/allProperties/update | Deploy and manage programs in Insights app |
## Intune Administrator
api-management How To Self Hosted Gateway On Kubernetes In Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-self-hosted-gateway-on-kubernetes-in-production.md
The minimum number of replicas suitable for production is three, preferably comb
By default, a self-hosted gateway is deployed with a **RollingUpdate** deployment [strategy](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy). Review the default values and consider explicitly setting the [maxUnavailable](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-unavailable) and [maxSurge](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-surge) fields, especially when you're using a high replica count.
+## Autoscaling
+
+While we provide [guidance on the minimum number of replicas](#number-of-replicas) for the self-hosted gateway, we recommend that you use autoscaling for the self-hosted gateway to meet the demand of your traffic more proactively.
+
+There are two ways to autoscale the self-hosted gateway horizontally:
+
+- Autoscale based on resource usage (CPU and memory)
+- Autoscale based on the number of requests per second
+
+This is possible through native Kubernetes functionality, or by using [Kubernetes Event-driven Autoscaling (KEDA)](https://keda.sh/). KEDA is a CNCF incubation project that strives to make application autoscaling simple.
+
+> [!NOTE]
+> KEDA is an open-source technology that isn't covered by Azure support and must be operated by customers.
+
+### Resource-based autoscaling
+
+Kubernetes allows you to autoscale the self-hosted gateway based on resource usage by using a [Horizontal Pod Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/). It allows you to [define CPU and memory thresholds](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-resource-metrics), and the number of replicas to scale out or in.
+
+An alternative is to use Kubernetes Event-driven Autoscaling (KEDA), which allows you to scale workloads based on a [variety of scalers](https://keda.sh/docs/latest/scalers/), including CPU and memory.
+
+> [!TIP]
+> If you're already using KEDA to scale other workloads, we recommend using KEDA as a unified app autoscaler. Otherwise, we strongly suggest relying on the native Kubernetes functionality through the Horizontal Pod Autoscaler.
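As a sketch, a minimal Horizontal Pod Autoscaler for the gateway could look like the following. The target Deployment name, replica bounds, and CPU threshold are illustrative assumptions, and older clusters may need the `autoscaling/v2beta2` API version instead:

```yaml
# Illustrative sketch: scale the self-hosted gateway on CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: self-hosted-gateway-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: self-hosted-gateway   # assumed Deployment name
  minReplicas: 3                # matches the production minimum above
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70  # example threshold
```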
+
+### Traffic-based autoscaling
+
+Kubernetes does not provide an out-of-the-box mechanism for traffic-based autoscaling.
+
+Kubernetes Event-driven Autoscaling (KEDA) provides a few ways that can help with traffic-based autoscaling:
+
+- You can scale based on metrics from a Kubernetes ingress, if they're available in [Prometheus](https://keda.sh/docs/latest/scalers/prometheus/) or [Azure Monitor](https://keda.sh/docs/latest/scalers/azure-monitor/), by using an out-of-the-box scaler.
+- You can install the [HTTP add-on](https://github.com/kedacore/http-add-on), which is available in beta and scales based on the number of requests per second.
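As an illustrative sketch, a KEDA `ScaledObject` using the Prometheus scaler to scale on request rate might look like the following. The Deployment name, Prometheus address, metric name, and query are all assumptions for illustration and depend on what your monitoring stack exposes:

```yaml
# Illustrative sketch: scale the gateway on requests per second via KEDA.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: self-hosted-gateway-scaler
spec:
  scaleTargetRef:
    name: self-hosted-gateway   # assumed Deployment name
  minReplicaCount: 3
  maxReplicaCount: 10
  triggers:
  - type: prometheus
    metadata:
      serverAddress: http://prometheus.monitoring.svc:9090   # assumed address
      metricName: gateway_requests_per_second                # illustrative name
      threshold: "100"                                       # target RPS per replica
      query: sum(rate(http_requests_total[1m]))              # example query
```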
+
## Container resources

By default, the YAML file provided in the Azure portal doesn't specify container resource requests.
api-management How To Server Sent Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-server-sent-events.md
Follow these guidelines when using API Management to reach a backend API that im
## Next steps
-* Learn more about [configuring policies](/api-management-howto-policies.md) in API Management.
+* Learn more about [configuring policies](/azure/api-management/api-management-howto-policies) in API Management.
* Learn about API Management [capacity](api-management-capacity.md).
api-management Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/private-endpoint.md
With a private endpoint and Private Link, you can:
- Limit incoming traffic only to private endpoints, preventing data exfiltration.

> [!IMPORTANT]
-> * API Management support for private endpoints is currently in preview.
+> * API Management support for private endpoints is currently in **preview**.
> * To enable private endpoints, the API Management instance can't already be configured with an external or internal [virtual network](virtual-network-concepts.md).
> * A private endpoint connection supports only incoming traffic to the API Management instance.
To connect to 'Microsoft.ApiManagement/service/my-apim-service', please use the
* Use [policy expressions](api-management-policy-expressions.md#ref-context-request) with the `context.request` variable to identify traffic from the private endpoint.
* Learn more about [private endpoints](../private-link/private-endpoint-overview.md) and [Private Link](../private-link/private-link-overview.md).
* Learn more about [managing private endpoint connections](../private-link/manage-private-endpoint.md).
+* [Troubleshoot Azure private endpoint connectivity problems](../private-link/troubleshoot-private-endpoint-connectivity.md).
* Use a [Resource Manager template](https://azure.microsoft.com/resources/templates/api-management-private-endpoint/) to create an API Management instance and a private endpoint with private DNS integration.
application-gateway Application Gateway Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-metrics.md
For Application Gateway, the following metrics are available:
The average number of requests received by each healthy member in a backend pool in a minute. You must specify the backend pool using the *BackendPool HttpSettings* dimension.
+### Web Application Firewall (WAF) metrics
+
+For information on WAF monitoring, see [WAF v2 metrics](../../articles/web-application-firewall/ag/application-gateway-waf-metrics.md#application-gateway-waf-v2-metrics).
## Metrics supported by Application Gateway V1 SKU
For Application Gateway, the following metrics are available:
Count of successful requests that Application Gateway has served. The request count can be further filtered to show count per each/specific backend pool-http setting combination.

-- **Web Application Firewall Blocked Requests Count**
-- **Web Application Firewall Blocked Requests Distribution**
-- **Web Application Firewall Total Rule Distribution**

### Backend metrics
For Application Gateway, the following metrics are available:
The number of backends that are determined unhealthy by the health probe. You can filter on a per backend pool basis to show the number of unhealthy hosts in a specific backend pool.
+### Web Application Firewall (WAF) metrics
+
+For information on WAF monitoring, see [WAF v1 metrics](../../articles/web-application-firewall/ag/application-gateway-waf-metrics.md#application-gateway-waf-v1-metrics).
+ ## Metrics visualization

Browse to an application gateway. Under **Monitoring**, select **Metrics**. To view the available values, select the **METRIC** drop-down list.
application-gateway Ingress Controller Annotations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-annotations.md
Previously updated : 11/4/2019 Last updated : 3/18/2022
For an Ingress resource to be observed by AGIC, it **must be annotated** with `k
| [appgw.ingress.kubernetes.io/request-timeout](#request-timeout) | `int32` (seconds) | `30` | |
| [appgw.ingress.kubernetes.io/use-private-ip](#use-private-ip) | `bool` | `false` | |
| [appgw.ingress.kubernetes.io/backend-protocol](#backend-protocol) | `string` | `http` | `http`, `https` |
+| [appgw.ingress.kubernetes.io/rewrite-rule-set](#rewrite-rule-set) | `string` | `nil` | |
## Backend Path Prefix
spec:
  backend:
    serviceName: go-server-service
    servicePort: 443
-```
+```
+
+## Rewrite Rule Set
+
+This annotation allows you to assign an existing rewrite rule set to the corresponding request routing rule.
+
+### Usage
+
+```yaml
+appgw.ingress.kubernetes.io/rewrite-rule-set: <rewrite rule set name>
+```
+
+### Example
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ name: go-server-ingress-bkprefix
+ namespace: test-ag
+ annotations:
+ kubernetes.io/ingress.class: azure/application-gateway
+ appgw.ingress.kubernetes.io/rewrite-rule-set: add-custom-response-header
+spec:
+ rules:
+ - http:
+ paths:
+ - path: /
+ pathType: Exact
+ backend:
+ service:
+ name: go-server-service
+ port:
+ number: 8080
+```
application-gateway Key Vault Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/key-vault-certs.md
Application Gateway uses a secret identifier in Key Vault to reference the certi
The Azure portal supports only Key Vault certificates, not secrets. Application Gateway still supports referencing secrets from Key Vault, but only through non-portal resources like PowerShell, the Azure CLI, APIs, and Azure Resource Manager templates (ARM templates).
-> [!WARNING]
-> Azure Application Gateway currently supports only Key Vault accounts in the same subscription as the Application Gateway resource. Choosing a Key Vault under a different subscription than your Application Gateway will result in a failure.
+References to key vaults in other Azure subscriptions are supported, but must be configured via an ARM template, Azure PowerShell, the Azure CLI, Bicep, and so on. Cross-subscription key vault configuration isn't currently supported by Application Gateway in the Azure portal.
## Certificate settings in Key Vault
$appgw = Get-AzApplicationGateway -Name MyApplicationGateway -ResourceGroupName
Set-AzApplicationGatewayIdentity -ApplicationGateway $appgw -UserAssignedIdentityId "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/MyResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/MyManagedIdentity"

# Get the secret ID from Key Vault
$secret = Get-AzKeyVaultSecret -VaultName "MyKeyVault" -Name "CertificateName"
-$secretId = $secret.Id # https://<keyvaultname>.vault.azure.net/secrets/<hash>
+$secretId = $secret.Id.Replace($secret.Version, "") # Remove the secret version so AppGW will use the latest version in future syncs
# Specify the secret ID from Key Vault
Add-AzApplicationGatewaySslCertificate -KeyVaultSecretId $secretId -ApplicationGateway $appgw -Name $secret.Name

# Commit the changes to the Application Gateway
Set-AzApplicationGateway -ApplicationGateway $appgw
```
-> [!NOTE]
-> If you require Application Gateway to sync the last version of the certificate with the key vault, provide the versionless `secretId` value (no hash). To do this, in the preceding example, replace the following line:
->
-> ```
-> $secretId = $secret.Id # https://<keyvaultname>.vault.azure.net/secrets/<hash>
-> ```
->
-> With this line:
->
-> ```
-> $secretId = $secret.Id.Replace($secret.Version, "") # https://<keyvaultname>.vault.azure.net/secrets/
-> ```
-
Once the commands have been executed, you can navigate to your Application Gateway in the Azure portal and select the Listeners tab. Click Add Listener (or select an existing) and specify the Protocol to HTTPS. Under **Choose a certificate**, select the certificate named in the previous steps. Once selected, select *Add* (if creating) or *Save* (if editing) to apply the referenced Key Vault certificate to the listener.
applied-ai-services Concept Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-model-overview.md
Previously updated : 03/09/2022 Last updated : 03/16/2022 recommendations: false
Azure Form Recognizer prebuilt models enable you to add intelligent document pro
| **Model** | **Description** |
| | |
-| 🆕[Read (preview)](#read-preview) | Extract text lines, words, their locations, detected languages, and handwritten style if detected. |
+| 🆕[Read (preview)](#read-preview) | Extract printed and handwritten text lines, words, locations, and detected languages.|
| 🆕[W-2 (preview)](#w-2-preview) | Extract employee, employer, wage information, etc. from US W-2 forms. |
| 🆕[General document (preview)](#general-document-preview) | Extract text, tables, structure, key-value pairs, and named entities. |
| [Layout](#layout) | Extracts text and layout information from documents. |
The Read API analyzes and extracts ext lines, words, their locations, detected l
[:::image type="icon" source="media/studio/w2.png":::](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)
-The W-2 model analyzes and extracts key information reported in each box on a W-2 form. The model supports standard and customized forms from 2018 to the present, including both single form and multiple forms (copy A, B, C, D, 1, 2) on one page.
+The W-2 model analyzes and extracts key information reported in each box on a W-2 form. The model supports standard and customized forms from 2018 to the present, including single and multiple forms on one page.
***Sample W-2 document processed using [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)***:
The W-2 model analyzes and extracts key information reported in each box on a W-
The Layout API analyzes and extracts text, tables and headers, selection marks, and structure information from forms and documents.
-***Sample form processed using the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/layout)***:
+***Sample document processed using the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/layout)***:
:::image type="content" source="media/studio/analyze-layout.png" alt-text="Screenshot: Screenshot of sample document processed using Form Recognizer studio":::
The invoice model analyzes and extracts key information from sales invoices. The
[:::image type="icon" source="media/studio/receipt.png":::](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)
-The receipt model analyzes and extracts key information from sales receipts. The API analyzes printed and handwritten receipts and extracts key information such as merchant name, merchant phone number, transaction date, tax, and transaction total.
+The receipt model analyzes and extracts key information from printed and handwritten receipts.
***Sample receipt processed using [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)***:
The receipt model analyzes and extracts key information from sales receipts. The
[:::image type="icon" source="media/studio/id-document.png":::](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)
-The ID document model analyzes and extracts key information from U.S. Driver's Licenses (all 50 states and District of Columbia) and biographical pages from international passports (excluding visa and other travel documents). The API analyzes identity documents and extracts key information such as first name, last name, address, and date of birth.
+ The ID document model analyzes and extracts key information from the following documents:
+
+* U.S. Driver's Licenses (all 50 states and District of Columbia)
+
+* Biographical pages from international passports (excluding visa and other travel documents). The API analyzes identity documents and extracts key information such as first name, last name, address, and date of birth.
***Sample U.S. Driver's License processed using [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)***:
The ID document model analyzes and extracts key information from U.S. Driver's L
[:::image type="icon" source="media/studio/business-card.png":::](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)
-The business card model analyzes and extracts key information from business card images. The API analyzes printed business card images and extracts key information such as first name, last name, company name, email address, and phone number.
+The business card model analyzes and extracts key information from business card images.
***Sample business card processed using [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)***:
applied-ai-services Try V3 Csharp Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-csharp-sdk.md
This version of the client library defaults to the 2021-09-30-preview version of
1. Choose the **Include prerelease** checkbox, select version **4.0.0-beta.3** from the dropdown menu, and install the package in your project.

<!-- -->
+
## Build your application
-To interact with the Form Recognizer service, you'll need to create an instance of the `DocumentAnalysisClient` class. To do so, you'll create an `AzureKeyCredential` with your key from the Azure portal and a `DocumentAnalysisClient` instance with the `AzureKeyCredential` and your Form Recognizer `endpoint`.
+To interact with the Form Recognizer service, you'll need to create an instance of the `DocumentAnalysisClient` class. To do so, you'll create an `AzureKeyCredential` with your `key` from the Azure portal and a `DocumentAnalysisClient` instance with the `AzureKeyCredential` and your Form Recognizer `endpoint`.
> [!NOTE]
>
To interact with the Form Recognizer service, you'll need to create an instance
> > * Remember to remove the key from your code when you're done, and never post it publicly. For production, use secure methods to store and access your credentials. For more information, *see* Cognitive Services [security](../../../cognitive-services/cognitive-services-security.md).
+## Run your application
+
+Once you've added a code sample to your application, choose the green **Start** button next to formRecognizer_quickstart to build and run your program, or press **F5**.
+
+ :::image type="content" source="../media/quickstarts/run-visual-studio.png" alt-text="Screenshot: run your Visual Studio program.":::
+
+<!-- ### [.NET Command-line interface (CLI)](#tab/cli)
+
+Open your command prompt and go to the directory that contains your project and type the following:
+
+```console
+dotnet run formrecognizer-quickstart.dll
+```
+
+### [Visual Studio](#tab/vs) -->
+ ## General document model

Analyze and extract text, tables, structure, key-value pairs, and named entities.
for (int i = 0; i < result.Tables.Count; i++)
### General document model output
-Visit the Azure samples repository on GitHub to view the [general document model output](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/dotnet/FormRecognizer/v3-csharp-sdk-general-document-output.md).
-___
+Here's a snippet of the expected output:
+
+```console
+ Detected key-value pairs:
+ Found key with no value: '?'
+ Found key-value pair: 'QUARTERLY REPORT PURSUANT TO SECTION 13 OR 15(d) OF THE SECURITIES EXCHANGE ACT OF 1934' and ':selected:'
+ Found key-value pair: 'For the Quarterly Period Ended March 31, 2020' and 'OR'
+ Found key with no value: '?'
+ Found key-value pair: 'TRANSITION REPORT PURSUANT TO SECTION 13 OR 15(d) OF THE SECURITIES EXCHANGE ACT OF 1934' and ':unselected:'
+ Found key with no value: 'For the Transition Period From'
+ Found key-value pair: 'to Commission File Number' and '001-37845'
+```
+
+To view the entire output, visit the Azure samples repository on GitHub to view the [general document model output](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/dotnet/FormRecognizer/v3-csharp-sdk-general-document-output.md).
## Layout model
for (int i = 0; i < result.Tables.Count; i++)
### Layout model output
-Visit the Azure samples repository on GitHub to view the [layout model output](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/dotnet/FormRecognizer/v3-csharp-sdk-layout-output.md).
+Here's a snippet of the expected output:
+
+```console
+ Document Page 1 has 69 line(s), 425 word(s), and 15 selection mark(s).
+ Line 0 has content: 'UNITED STATES'.
+ Its bounding box is:
+ Upper left => X: 3.4915, Y= 0.6828
+ Upper right => X: 5.0116, Y= 0.6828
+ Lower right => X: 5.0116, Y= 0.8265
+ Lower left => X: 3.4915, Y= 0.8265
+ Line 1 has content: 'SECURITIES AND EXCHANGE COMMISSION'.
+ Its bounding box is:
+ Upper left => X: 2.1937, Y= 0.9061
+ Upper right => X: 6.297, Y= 0.9061
+ Lower right => X: 6.297, Y= 1.0498
+ Lower left => X: 2.1937, Y= 1.0498
+```
+
+To view the entire output, visit the Azure samples repository on GitHub to view the [layout model output](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/dotnet/FormRecognizer/v3-csharp-sdk-layout-output.md).
## Prebuilt model
for (int i = 0; i < result.Documents.Count; i++)
### Prebuilt model output
-Visit the Azure samples repository on GitHub to view the [prebuilt invoice model output](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/dotnet/FormRecognizer/v3-csharp-sdk-prebuilt-invoice-output.md).
--
-## Run your application
-
-<!-- ### [.NET Command-line interface (CLI)](#tab/cli)
-
-Open your command prompt and go to the directory that contains your project and type the following:
+Here's a snippet of the expected output:
```console
-dotnet run formrecognizer-quickstart.dll
+ Document 0:
+ Vendor Name: 'CONTOSO LTD.', with confidence 0.962
+ Customer Name: 'MICROSOFT CORPORATION', with confidence 0.951
+ Item:
+ Description: 'Test for 23 fields', with confidence 0.899
+ Amount: '100', with confidence 0.902
+ Sub Total: '100', with confidence 0.979
```
-### [Visual Studio](#tab/vs) -->
-
-Choose the green **Start** button next to formRecognizer_quickstart to build and run your program, or press **F5**.
-
- :::image type="content" source="../media/quickstarts/run-visual-studio.png" alt-text="Screenshot: run your Visual Studio program.":::
-
-<!-- -->
+To view the entire output, visit the Azure samples repository on GitHub to view the [prebuilt invoice model output](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/dotnet/FormRecognizer/v3-csharp-sdk-prebuilt-invoice-output.md).
That's it, congratulations!
applied-ai-services Try V3 Java Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-java-sdk.md
Last updated 03/16/2022
recommendations: false
-<!-- markdownlint-disable MD025 -->
+<!-- markdownlint-disable MD025 -->
# Get started: Form Recognizer Java SDK v3.0 | Preview
This quickstart uses the Gradle dependency manager. You can find the client libr
} ```
-### Create a Java application
+## Create a Java application
-To interact with the Form Recognizer service, you'll need to create an instance of the `DocumentAnalysisClient` class. To do so, you'll create an `AzureKeyCredential` with your key from the Azure portal and a `DocumentAnalysisClient` instance with the `AzureKeyCredential` and your Form Recognizer `endpoint`.
+To interact with the Form Recognizer service, you'll need to create an instance of the `DocumentAnalysisClient` class. To do so, you'll create an `AzureKeyCredential` with your `key` from the Azure portal and a `DocumentAnalysisClient` instance with the `AzureKeyCredential` and your Form Recognizer `endpoint`.
1. From the form-recognizer-app directory, run the following command:
To interact with the Form Recognizer service, you'll need to create an instance
> > Remember to remove the key from your code when you're done, and never post it publicly. For production, use secure methods to store and access your credentials. For more information, see* the Cognitive Services [security](../../../cognitive-services/cognitive-services-security.md).
+## Build and run your application
+
+Once you've added a code sample to your application, navigate back to your main project directory, **form-recognizer-app**.
+
+1. Build your application with the `build` command:
+
+ ```console
+ gradle build
+ ```
+
+1. Run your application with the `run` command:
+
+ ```console
+ gradle run
+ ```
+ ## General document model

Extract text, tables, structure, key-value pairs, and named entities from documents.
Extract text, tables, structure, key-value pairs, and named entities from docume
### General document model output
-Visit the Azure samples repository on GitHub to view the [general document model output](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/jav).
+Here's a snippet of the expected output:
+
+```console
+Key content: For the Transition Period From
+Key content bounding region: [com.azure.ai.formrecognizer.models.BoundingRegion@14c053c6]
+Key content: to Commission File Number
+Key content bounding region: [com.azure.ai.formrecognizer.models.BoundingRegion@6c2d4cc6]
+Value content: 001-37845
+Value content bounding region: [com.azure.ai.formrecognizer.models.BoundingRegion@30865a90]
+Key content: (I.R.S. ID)
+Key content bounding region: [com.azure.ai.formrecognizer.models.BoundingRegion@6134ac4a]
+Value content: 91-1144442
+Value content bounding region: [com.azure.ai.formrecognizer.models.BoundingRegion@777c9dc9]
+Key content: Securities registered pursuant to Section 12(g) of the Act:
+Key content bounding region: [com.azure.ai.formrecognizer.models.BoundingRegion@71b1a49c]
+Value content: NONE
+```
+
+To view the entire output, visit the Azure samples repository on GitHub to view the [general document model output](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/jav).
## Layout model
Extract text, selection marks, text styles, table structures, and bounding regio
    .endpoint(endpoint)
    .buildClient();
- // sample document
+ // sample document
String documentUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf";
+
String modelId = "prebuilt-layout";
SyncPoller<DocumentOperationResult, AnalyzeResult> analyzeLayoutResultPoller =
Extract text, selection marks, text styles, table structures, and bounding regio
### Layout model output
-Visit the Azure samples repository on GitHub to view the [layout model output](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/jav).
+Here's a snippet of the expected output:
+
+```console
+ Table 0 has 5 rows and 3 columns.
+ Cell 'Title of each class', has row index 0 and column index 0.
+ Cell 'Trading Symbol', has row index 0 and column index 1.
+ Cell 'Name of exchange on which registered', has row index 0 and column index 2.
+ Cell 'Common stock, $0.00000625 par value per share', has row index 1 and column index 0.
+ Cell 'MSFT', has row index 1 and column index 1.
+ Cell 'NASDAQ', has row index 1 and column index 2.
+ Cell '2.125% Notes due 2021', has row index 2 and column index 0.
+ Cell 'MSFT', has row index 2 and column index 1.
+ Cell 'NASDAQ', has row index 2 and column index 2.
+ Cell '3.125% Notes due 2028', has row index 3 and column index 0.
+ Cell 'MSFT', has row index 3 and column index 1.
+ Cell 'NASDAQ', has row index 3 and column index 2.
+ Cell '2.625% Notes due 2033', has row index 4 and column index 0.
+ Cell 'MSFT', has row index 4 and column index 1.
+ Cell 'NASDAQ', has row index 4 and column index 2.
+```
+
+To view the entire output, visit the Azure samples repository on GitHub to view the [layout model output](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/jav).
## Prebuilt model
Analyze and extract common fields from specific document types using a prebuilt
> [!TIP]
> You aren't limited to invoices; there are several prebuilt models to choose from, each of which has its own set of supported fields. The model to use for the analyze operation depends on the type of document to be analyzed. See [**model data extraction**](../concept-model-overview.md#model-data-extraction).
-#### Try the prebuilt invoice model
> [!div class="checklist"]
>
> * Analyze an invoice using the prebuilt-invoice model. You can use our [sample invoice document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf) for this quickstart.
Analyze and extract common fields from specific document types using a prebuilt
private static final String key = "<your-key>"; public static void main(final String[] args) throws IOException {
-
+ // create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
DocumentAnalysisClient client = new DocumentAnalysisClientBuilder()
    .credential(new AzureKeyCredential(key))
    .endpoint(endpoint)
    .buildClient();
-
+ // sample document
String invoiceUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf";
String modelId = "prebuilt-invoice";
Analyze and extract common fields from specific document types using a prebuilt
AnalyzedDocument analyzedInvoice = analyzeInvoiceResult.getDocuments().get(i);
Map<String, DocumentField> invoiceFields = analyzedInvoice.getFields();
System.out.printf("-- Analyzing invoice %d --%n", i);
- System.out.printf("Analyzed document has doc type %s with confidence : %.2f%n.",
+ System.out.printf("Analyzed document has doc type %s with confidence : %.2f%n",
analyzedInvoice.getDocType(), analyzedInvoice.getConfidence()); DocumentField vendorNameField = invoiceFields.get("VendorName");
Analyze and extract common fields from specific document types using a prebuilt
### Prebuilt model output
-Visit the Azure samples repository on GitHub to view the [prebuilt invoice model output](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/jav)
-
-## Build and run your application
-
-Navigate back to your main project directoryΓÇö**form-recognizer-app**.
-
-1. Build your application with the `build` command:
+Here's a snippet of the expected output:
```console
-gradle build
+ -- Analyzing invoice 0 --
+ Analyzed document has doc type invoice with confidence : 1.00
+ Vendor Name: CONTOSO LTD., confidence: 0.92
+ Vendor address: 123 456th St New York, NY, 10001, confidence: 0.91
+ Customer Name: MICROSOFT CORPORATION, confidence: 0.84
+ Customer Address Recipient: Microsoft Corp, confidence: 0.92
+ Invoice ID: INV-100, confidence: 0.97
+ Invoice Date: 2019-11-15, confidence: 0.97
```
-1. Run your application with the `run` command:
-
-```console
-gradle run
-```
+To view the entire output, visit the Azure samples repository on GitHub to view the [prebuilt invoice model output](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/jav).
That's it, congratulations!
applied-ai-services Try V3 Javascript Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-javascript-sdk.md
Previously updated : 03/08/2022 Last updated : 03/16/2022 recommendations: false
>[!NOTE] > Form Recognizer v3.0 is currently in public preview. Some features may not be supported or have limited capabilities.
-[Reference documentation](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-ai-form-recognizer/4.0.0-beta.3/https://docsupdatetracker.net/index.html) | [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/@azure/ai-form-recognizer_4.0.0-beta.3/sdk/formrecognizer/ai-form-recognizer/) | [Package (npm)](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.0.0-beta.3) | [Samples](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v4-bet)
+[Reference documentation](/javascript/api/@azure/ai-form-recognizer/?view=azure-node-preview&preserve-view=true) | [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/@azure/ai-form-recognizer_4.0.0-beta.3/sdk/formrecognizer/ai-form-recognizer/) | [Package (npm)](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.0.0-beta.3) | [Samples](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v4-bet)
Get started with Azure Form Recognizer using the JavaScript programming language. Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine learning to extract key-value pairs, text, and tables from your documents. You can easily call Form Recognizer models by integrating our client library SDKs into your workflows and applications. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
In this quickstart you'll use following features to analyze and extract data and
1. Specify your project's attributes using the prompts presented in the terminal.

   * The most important attributes are name, version number, and entry point.
- * We recommend keeping `index.js` for the entry point name. Description, test command, GitHub repository, keywords, author, and license information are optional attributes that you can choose to skip for this project.
+ * We recommend keeping `index.js` for the entry point name. The description, test command, GitHub repository, keywords, author, and license information are optional attributes; they can be skipped for this project.
* Accept the suggestions in parentheses by selecting **Return** or **Enter**.
- * After completing the prompts, a `package.json` file will be created in your form-recognizer-app directory.
+ * After you've completed the prompts, a `package.json` file will be created in your form-recognizer-app directory.
1. Install the `ai-form-recognizer` client library and `@azure/identity` npm packages: ```console
- npm install @azure/ai-form-recognizer@4.0.0-beta.2 @azure/identity
+ npm install @azure/ai-form-recognizer@4.0.0-beta.3 @azure/identity
``` * Your app's `package.json` file will be updated with the dependencies.
In this quickstart, you'll use the following features to analyze and extract data and
> * Open a PowerShell window in your project directory by holding down the Shift key and right-clicking the folder. > * Type the following command **New-Item index.js**.
-1. Open the `index.js` file in Visual Studio Code or your favorite IDE. First, we'll add the necessary libraries. Copy the following and paste it at the top of the file:
+## Build your application
- ```javascript
- const { AzureKeyCredential, DocumentAnalysisClient } = require("@azure/ai-form-recognizer");
- ```
-
-1. Next, we'll create variables for your Azure Form Recognizer resource endpoint and API key. Copy the following and paste it below the library declaration:
+To interact with the Form Recognizer service, you'll need to create an instance of the `DocumentAnalysisClient` class. To do so, you'll create an `AzureKeyCredential` with your `key` from the Azure portal and a `DocumentAnalysisClient` instance with the `AzureKeyCredential` and your Form Recognizer `endpoint`.
- ```javascript
- const endpoint = "PASTE_YOUR_FORM_RECOGNIZER_ENDPOINT_HERE";
- const apiKey = "PASTE_YOUR_FORM_RECOGNIZER_SUBSCRIPTION_KEY_HERE";
- ```
+1. Open the `index.js` file in Visual Studio Code or your favorite IDE and select one of the following code samples to copy and paste into your application:
-At this point, your JavaScript application should contain the following lines of code:
+ * [**General document**](#general-document-model)
-```javascript
-const { AzureKeyCredential, DocumentAnalysisClient } = require("@azure/ai-form-recognizer");
+ * [**Layout**](#layout-model)
-const endpoint = "PASTE_YOUR_FORM_RECOGNIZER_ENDPOINT_HERE";
-const apiKey = "PASTE_YOUR_FORM_RECOGNIZER_SUBSCRIPTION_KEY_HERE";
-```
+ * [**Prebuilt Invoice**](#prebuilt-model)
-> [!TIP]
-> If you would like to try more than one code sample:
+> [!IMPORTANT]
>
-> * Select one of the sample code blocks below to copy and paste into your application.
-> * [**Run your application**](#run-your-application).
-> * Comment out that sample code block but keep the set-up code and library directives.
-> * Select another sample code block to copy and paste into your application.
-> * [**Build and run your application**](#run-your-application).
-> * You can continue to comment out, copy/paste, and run the sample blocks of code.
+> Remember to remove the key from your code when you're done, and never post it publicly. For production, use secure methods to store and access your credentials. For more information, *see* the Cognitive Services [security](../../../cognitive-services/cognitive-services-security.md) article.
-### Select a code sample to copy and paste into your application:
+## Run your application
-* [**General document**](#general-document-model)
+Once you've added a code sample to your application, build and run it:
-* [**Layout**](#layout-model)
+1. Navigate to the folder that contains your Form Recognizer application (form-recognizer-app).
-* [**Prebuilt Invoice**](#prebuilt-model)
+1. Type the following command in your terminal:
-> [!IMPORTANT]
->
-> Remember to remove the key from your code when you're done, and never post it publicly. For production, use secure methods to store and access your credentials. See our Cognitive Services [security](../../../cognitive-services/cognitive-services-security.md) article for more information.
+ ```console
+ node index.js
+ ```
## General document model
Extract text, tables, structure, key-value pairs, and named entities from docume
> * We've added the file URL value to the `formUrl` variable near the top of the file. > * To see the list of all supported fields and corresponding types, see our [General document](../concept-general-document.md#named-entity-recognition-ner-categories) concept page.
-#### Add the following code to your general document application on the line below the `apiKey` variable
+**Add the following code sample to the `index.js` file. Make sure you update the key and endpoint variables with values from your Form Recognizer instance in the Azure portal:**
```javascript
-async function main() {
- const client = new DocumentAnalysisClient(endpoint, new AzureKeyCredential(apiKey));
-
- const poller = await client.beginAnalyzeDocuments("prebuilt-document", formUrl);
-
- const {
- keyValuePairs,
- entities
- } = await poller.pollUntilDone();
-
- if (keyValuePairs.length <= 0) {
- console.log("No key-value pairs were extracted from the document.");
- } else {
- console.log("Key-Value Pairs:");
- for (const {
- key,
- value,
- confidence
- } of keyValuePairs) {
- console.log("- Key :", `"${key.content}"`);
- console.log(" Value:", `"${value?.content ?? "<undefined>"}" (${confidence})`);
- }
- }
-
- if (entities.length <= 0) {
- console.log("No entities were extracted from the document.");
- } else {
- console.log("Entities:");
- for (const entity of entities) {
- console.log(
- `- "${entity.content}" ${entity.category} - ${entity.subCategory ?? "<none>"} (${
- entity.confidence
- })`
- );
- }
- }
-}
-
-main().catch((error) => {
- console.error("An error occurred:", error);
- process.exit(1);
-});
+ const { AzureKeyCredential, DocumentAnalysisClient } = require("@azure/ai-form-recognizer");
+
+ // set `<your-endpoint>` and `<your-key>` variables with the values from the Azure portal
+  const key = "<your-key>";
+  const endpoint = "<your-endpoint>";
+
+ // sample document
+  const formUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf";
+
+ async function main() {
+ // create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
+    const client = new DocumentAnalysisClient(endpoint, new AzureKeyCredential(key));
+
+ const poller = await client.beginAnalyzeDocuments("prebuilt-document", formUrl);
+
+ const {
+ keyValuePairs,
+ entities
+ } = await poller.pollUntilDone();
+
+ if (keyValuePairs.length <= 0) {
+ console.log("No key-value pairs were extracted from the document.");
+ } else {
+ console.log("Key-Value Pairs:");
+ for (const {
+ key,
+ value,
+ confidence
+ } of keyValuePairs) {
+ console.log("- Key :", `"${key.content}"`);
+ console.log(" Value:", `"${value?.content ?? "<undefined>"}" (${confidence})`);
+ }
+ }
+
+ if (entities.length <= 0) {
+ console.log("No entities were extracted from the document.");
+ } else {
+ console.log("Entities:");
+ for (const entity of entities) {
+ console.log(
+ `- "${entity.content}" ${entity.category} - ${entity.subCategory ?? "<none>"} (${
+ entity.confidence
+ })`
+ );
+ }
+ }
+ }
+
+ main().catch((error) => {
+ console.error("An error occurred:", error);
+ process.exit(1);
+ });
+```
+
+### General document model output
+
+Here's a snippet of the expected output:
+
+```console
+Key-Value Pairs:
+- Key : "For the Quarterly Period Ended"
+ Value: "March 31, 2020" (0.35)
+- Key : "From"
+ Value: "1934" (0.119)
+- Key : "to"
+ Value: "<undefined>" (0.317)
+- Key : "Commission File Number"
+ Value: "001-37845" (0.87)
+- Key : "(I.R.S. ID)"
+ Value: "91-1144442" (0.87)
+- Key : "Class"
+ Value: "Common Stock, $0.00000625 par value per share" (0.748)
+- Key : "Outstanding as of April 24, 2020"
+ Value: "7,583,440,247 shares" (0.838)
+Entities:
+- "$0.00000625" Quantity - Currency (0.8)
+- "MSFT" Organization - <none> (0.99)
+- "NASDAQ" Organization - StockExchange (0.99)
+- "2.125%" Quantity - Percentage (0.8)
+- "2021" DateTime - DateRange (0.8)
```
+To view the entire output, visit the [general document model output](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/javascript/FormRecognizer/v3-javascript-sdk-general-document-output.md) page in the Azure samples repository on GitHub.
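Notice that many of the extracted pairs carry low confidence scores. In your own application, you might keep only high-confidence pairs before acting on them. Here's a minimal, self-contained sketch — the pair shape mirrors the `keyValuePairs` objects printed above, but the sample data and the 0.5 threshold are illustrative:

```javascript
// Sketch: keep only the key-value pairs whose confidence meets a threshold.
function filterPairs(keyValuePairs, minConfidence) {
  return keyValuePairs.filter((pair) => pair.confidence >= minConfidence);
}

// Illustrative data, shaped like the output above.
const samplePairs = [
  { key: { content: "Commission File Number" }, value: { content: "001-37845" }, confidence: 0.87 },
  { key: { content: "From" }, value: { content: "1934" }, confidence: 0.119 },
];

console.log(filterPairs(samplePairs, 0.5).length); // 1
```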
+ ## Layout model Extract text, selection marks, text styles, table structures, and bounding region coordinates from documents.
Extract text, selection marks, text styles, table structures, and bounding regio
> * We've added the file URL value to the `formUrl` variable near the top of the file. > * To analyze a given file from a URL, you'll use the `beginAnalyzeDocuments` method and pass in `prebuilt-layout` as the model Id.
-#### Add the following code to your layout application on the line below the `apiKey` variable
+**Add the following code sample to the `index.js` file. Make sure you update the key and endpoint variables with values from your Form Recognizer instance in the Azure portal:**
```javascript
-const formUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf"
-
-async function main() {
- const client = new DocumentAnalysisClient(endpoint, new AzureKeyCredential(apiKey));
-
- const poller = await client.beginAnalyzeDocuments("prebuilt-layout", formUrl);
-
- const {
- pages,
- tables
- } = await poller.pollUntilDone();
-
- if (pages.length <= 0) {
- console.log("No pages were extracted from the document.");
- } else {
- console.log("Pages:");
- for (const page of pages) {
- console.log("- Page", page.pageNumber, `(unit: ${page.unit})`);
- console.log(` ${page.width}x${page.height}, angle: ${page.angle}`);
- console.log(` ${page.lines.length} lines, ${page.words.length} words`);
- }
- }
-
- if (tables.length <= 0) {
- console.log("No tables were extracted from the document.");
- } else {
- console.log("Tables:");
- for (const table of tables) {
- console.log(
- `- Extracted table: ${table.columnCount} columns, ${table.rowCount} rows (${table.cells.length} cells)`
- );
- }
- }
-}
-
-main().catch((error) => {
- console.error("An error occurred:", error);
- process.exit(1);
-});
+ const { AzureKeyCredential, DocumentAnalysisClient } = require("@azure/ai-form-recognizer");
+
+ // set `<your-endpoint>` and `<your-key>` variables with the values from the Azure portal
+  const key = "<your-key>";
+  const endpoint = "<your-endpoint>";
+
+ // sample document
+  const formUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf";
+
+ async function main() {
+    const client = new DocumentAnalysisClient(endpoint, new AzureKeyCredential(key));
+
+ const poller = await client.beginAnalyzeDocuments("prebuilt-layout", formUrl);
+
+ const {
+ pages,
+ tables
+ } = await poller.pollUntilDone();
+
+ if (pages.length <= 0) {
+ console.log("No pages were extracted from the document.");
+ } else {
+ console.log("Pages:");
+ for (const page of pages) {
+ console.log("- Page", page.pageNumber, `(unit: ${page.unit})`);
+ console.log(` ${page.width}x${page.height}, angle: ${page.angle}`);
+ console.log(` ${page.lines.length} lines, ${page.words.length} words`);
+ }
+ }
+
+ if (tables.length <= 0) {
+ console.log("No tables were extracted from the document.");
+ } else {
+ console.log("Tables:");
+ for (const table of tables) {
+ console.log(
+ `- Extracted table: ${table.columnCount} columns, ${table.rowCount} rows (${table.cells.length} cells)`
+ );
+ }
+ }
+ }
+
+ main().catch((error) => {
+ console.error("An error occurred:", error);
+ process.exit(1);
+ });
+
+```
+
+### Layout model output
+
+Here's a snippet of the expected output:
+```console
+Pages:
+- Page 1 (unit: inch)
+ 8.5x11, angle: 0
+ 69 lines, 425 words
+Tables:
+- Extracted table: 3 columns, 5 rows (15 cells)
```
+To view the entire output, visit the [layout model output](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/javascript/FormRecognizer/v3-javascript-sdk-layout-output.md) page in the Azure samples repository on GitHub.
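Layout tables come back as a flat `cells` array alongside `rowCount` and `columnCount`. As a sketch of how you might reshape that result into rows and columns — the cell fields used here (`rowIndex`, `columnIndex`, `content`) follow the shape of the SDK's table result, but the sample data is illustrative:

```javascript
// Sketch: reshape a layout table's flat `cells` array into a 2D grid.
function toGrid(table) {
  const grid = Array.from({ length: table.rowCount }, () =>
    new Array(table.columnCount).fill("")
  );
  for (const cell of table.cells) {
    grid[cell.rowIndex][cell.columnIndex] = cell.content;
  }
  return grid;
}

// Illustrative data, shaped like an extracted table.
const sampleTable = {
  rowCount: 2,
  columnCount: 2,
  cells: [
    { rowIndex: 0, columnIndex: 0, content: "Item" },
    { rowIndex: 0, columnIndex: 1, content: "Amount" },
    { rowIndex: 1, columnIndex: 0, content: "Subtotal" },
    { rowIndex: 1, columnIndex: 1, content: "$100.00" },
  ],
};

// toGrid(sampleTable) returns [["Item", "Amount"], ["Subtotal", "$100.00"]]
console.log(toGrid(sampleTable));
```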
+ ## Prebuilt model In this example, we'll analyze an invoice using the **prebuilt-invoice** model.
In this example, we'll analyze an invoice using the **prebuilt-invoice** model.
> [!TIP] > You aren't limited to invoices—there are several prebuilt models to choose from, each of which has its own set of supported fields. The model to use for the analyze operation depends on the type of document to be analyzed. See [**model data extraction**](../concept-model-overview.md#model-data-extraction).
-#### Try the prebuilt invoice model
- > [!div class="checklist"] > > * Analyze an invoice using the prebuilt-invoice model. You can use our [sample invoice document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf) for this quickstart.
In this example, we'll analyze an invoice using the **prebuilt-invoice** model.
```javascript
-const { PrebuiltModels } = require("@azure/ai-form-recognizer");
-// Using the PrebuiltModels object, rather than the raw model ID, adds strong typing to the model's output.
-
-const invoiceUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf";
-
-async function main() {
-
- const client = new DocumentAnalysisClient(endpoint, new AzureKeyCredential(apiKey));
-
- const poller = await client.beginAnalyzeDocuments(PrebuiltModels.Invoice, invoiceUrl);
-
- const {
- documents: [result]
- } = await poller.pollUntilDone();
-
- if (result) {
- const invoice = result.fields;
-
- console.log("Vendor Name:", invoice.vendorName?.value);
- console.log("Customer Name:", invoice.customerName?.value);
- console.log("Invoice Date:", invoice.invoiceDate?.value);
- console.log("Due Date:", invoice.dueDate?.value);
-
- console.log("Items:");
- for (const {
- properties: item
- } of invoice.items?.values ?? []) {
- console.log("-", item.productCode?.value ?? "<no product code>");
- console.log(" Description:", item.description?.value);
- console.log(" Quantity:", item.quantity?.value);
- console.log(" Date:", item.date?.value);
- console.log(" Unit:", item.unit?.value);
- console.log(" Unit Price:", item.unitPrice?.value);
- console.log(" Tax:", item.tax?.value);
- console.log(" Amount:", item.amount?.value);
- }
-
- console.log("Subtotal:", invoice.subTotal?.value);
- console.log("Previous Unpaid Balance:", invoice.previousUnpaidBalance?.value);
- console.log("Tax:", invoice.totalTax?.value);
- console.log("Amount Due:", invoice.amountDue?.value);
- } else {
- throw new Error("Expected at least one receipt in the result.");
- }
-}
-
-main().catch((error) => {
- console.error("An error occurred:", error);
- process.exit(1);
-});
+ // Using the PrebuiltModels object, rather than the raw model ID, adds strong typing to the model's output.
+  const { AzureKeyCredential, DocumentAnalysisClient, PrebuiltModels } = require("@azure/ai-form-recognizer");
+
+ // set `<your-endpoint>` and `<your-key>` variables with the values from the Azure portal
+  const key = "<your-key>";
+  const endpoint = "<your-endpoint>";
+
+ // sample document
+ const invoiceUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf";
+
+ async function main() {
+
+    const client = new DocumentAnalysisClient(endpoint, new AzureKeyCredential(key));
+
+ const poller = await client.beginAnalyzeDocuments(PrebuiltModels.Invoice, invoiceUrl);
+
+ const {
+ documents: [result]
+ } = await poller.pollUntilDone();
+
+ if (result) {
+ const invoice = result.fields;
+
+ console.log("Vendor Name:", invoice.vendorName?.value);
+ console.log("Customer Name:", invoice.customerName?.value);
+ console.log("Invoice Date:", invoice.invoiceDate?.value);
+ console.log("Due Date:", invoice.dueDate?.value);
+
+ console.log("Items:");
+ for (const {
+ properties: item
+ } of invoice.items?.values ?? []) {
+ console.log("-", item.productCode?.value ?? "<no product code>");
+ console.log(" Description:", item.description?.value);
+ console.log(" Quantity:", item.quantity?.value);
+ console.log(" Date:", item.date?.value);
+ console.log(" Unit:", item.unit?.value);
+ console.log(" Unit Price:", item.unitPrice?.value);
+ console.log(" Tax:", item.tax?.value);
+ console.log(" Amount:", item.amount?.value);
+ }
+
+ console.log("Subtotal:", invoice.subTotal?.value);
+ console.log("Previous Unpaid Balance:", invoice.previousUnpaidBalance?.value);
+ console.log("Tax:", invoice.totalTax?.value);
+ console.log("Amount Due:", invoice.amountDue?.value);
+ } else {
+ throw new Error("Expected at least one receipt in the result.");
+ }
+ }
+
+ main().catch((error) => {
+ console.error("An error occurred:", error);
+ process.exit(1);
+ });
```
-## Run your application
-
-1. Navigate to the folder where you have your form recognizer application (form-recognizer-app).
+### Prebuilt model output
-1. Type the following command in your terminal:
+Here's a snippet of the expected output:
```console
-node index.js
+ Vendor Name: CONTOSO LTD.
+ Customer Name: MICROSOFT CORPORATION
+ Invoice Date: 2019-11-15T00:00:00.000Z
+ Due Date: 2019-12-15T00:00:00.000Z
+ Items:
+ - <no product code>
+ Description: Test for 23 fields
+ Quantity: 1
+ Date: undefined
+ Unit: undefined
+ Unit Price: 1
+ Tax: undefined
+ Amount: 100
```
+To view the entire output, visit the [prebuilt invoice model output](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/javascript/FormRecognizer/v3-javascript-sdk-prebuilt-invoice-output.md) page in the Azure samples repository on GitHub.
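Because every invoice field is optional, a common follow-up step is to total the extracted line items and compare the sum against the invoice's total fields. A minimal sketch — the item shape mirrors the `invoice.items` values in the sample above, and the data is illustrative:

```javascript
// Sketch: total the extracted line-item amounts, treating missing amounts as 0.
function sumItemAmounts(items) {
  return items.reduce((total, { properties: item }) => total + (item.amount?.value ?? 0), 0);
}

// Illustrative data, shaped like the invoice items above.
const sampleItems = [
  { properties: { amount: { value: 100 } } },
  { properties: { amount: { value: 10 } } },
];

console.log(sumItemAmounts(sampleItems)); // 110
```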
+ That's it, congratulations! In this quickstart, you used the Form Recognizer JavaScript SDK to analyze various forms in different ways. Next, explore the reference documentation to learn more about the Form Recognizer v3.0 API.
applied-ai-services Try V3 Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-python-sdk.md
pip install azure-ai-formrecognizer==3.2.0b3
```
-### Create a new Python application
+## Create your Python application
-To interact with the Form Recognizer service, you'll need to create an instance of the `DocumentAnalysisClient` class. To do so, you'll create an `AzureKeyCredential` with your key from the Azure portal and a `DocumentAnalysisClient` instance with the `AzureKeyCredential` and your Form Recognizer `endpoint`.
+To interact with the Form Recognizer service, you'll need to create an instance of the `DocumentAnalysisClient` class. To do so, you'll create an `AzureKeyCredential` with your `key` from the Azure portal and a `DocumentAnalysisClient` instance with the `AzureKeyCredential` and your Form Recognizer `endpoint`.
1. Create a new Python file called **form_recognizer_quickstart.py** in your preferred editor or IDE.
To interact with the Form Recognizer service, you'll need to create an instance
> > Remember to remove the key from your code when you're done, and never post it publicly. For production, use secure methods to store and access your credentials. For more information, *see* the Cognitive Services [security](../../../cognitive-services/cognitive-services-security.md) article.
+## Run your application
+
+Once you've added a code sample to your application, build and run it:
+
+1. Navigate to the folder where you have your **form_recognizer_quickstart.py** file.
+
+1. Type the following command in your terminal:
+
+ ```console
+ python form_recognizer_quickstart.py
+ ```
+ ## General document model Extract text, tables, structure, key-value pairs, and named entities from documents.
if __name__ == "__main__":
### General document model output
-Visit the Azure samples repository on GitHub to view the [general document model output](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/v3-python-sdk-general-document-output.md)
+Here's a snippet of the expected output:
-___
+```console
+ -Key-value pairs found in document-
+  Key '☒' found within 'Page #1: [0.6694, 1.7746], [0.7764, 1.7746], [0.7764, 1.8833], [0.6694, 1.8833]' bounding regions
+ Key 'QUARTERLY REPORT PURSUANT TO SECTION 13 OR 15(d) OF THE SECURITIES EXCHANGE ACT OF 1934' found within 'Page #1: [0.996, 1.7804], [7.8449, 1.7804], [7.8449, 2.0559], [0.996, 2.0559]' bounding regions
+ Value ':selected:' found within 'Page #1: [0.6694, 1.7746], [0.7764, 1.7746], [0.7764, 1.8833], [0.6694, 1.8833]' bounding regions
+
+ Key 'For the Quarterly Period Ended March 31, 2020' found within 'Page #1: [0.9982, 2.1626], [3.4543, 2.1626], [3.4543, 2.2665], [0.9982, 2.2665]' bounding regions
+ Value 'OR' found within 'Page #1: [4.1471, 2.2972], [4.3587, 2.2972], [4.3587, 2.4049], [4.1471, 2.4049]' bounding regions
+```
+
+To view the entire output, visit the [general document model output](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/v3-python-sdk-general-document-output.md) page in the Azure samples repository on GitHub.
## Layout model
if __name__ == "__main__":
### Layout model output
-Visit the Azure samples repository on GitHub to view the [layout model output](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/v3-python-sdk-layout-output.md)
+Here's a snippet of the expected output:
+
+```console
+ -Analyzing layout from page #1-
+ Page has width: 8.5 and height: 11.0, measured with unit: inch
+ ...Line # 0 has word count 2 and text 'UNITED STATES' within bounding box '[3.4915, 0.6828], [5.0116, 0.6828], [5.0116, 0.8265], [3.4915, 0.8265]'
+ ......Word 'UNITED' has a confidence of 1.0
+ ......Word 'STATES' has a confidence of 1.0
+ ...Line # 1 has word count 4 and text 'SECURITIES AND EXCHANGE COMMISSION' within bounding box '[2.1937, 0.9061], [6.297, 0.9061], [6.297, 1.0498], [2.1937, 1.0498]'
+ ......Word 'SECURITIES' has a confidence of 1.0
+ ......Word 'AND' has a confidence of 1.0
+ ......Word 'EXCHANGE' has a confidence of 1.0
+ ......Word 'COMMISSION' has a confidence of 1.0
+ ...Line # 2 has word count 3 and text 'Washington, D.C. 20549' within bounding box '[3.4629, 1.1179], [5.031, 1.1179], [5.031, 1.2483], [3.4629, 1.2483]'
+ ......Word 'Washington,' has a confidence of 1.0
+ ......Word 'D.C.' has a confidence of 1.0
+```
+
+To view the entire output, visit the [layout model output](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/v3-python-sdk-layout-output.md) page in the Azure samples repository on GitHub.
___
if __name__ == "__main__":
### Prebuilt model output
-Visit the Azure samples repository on GitHub to view the [prebuilt invoice model output](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/v3-python-sdk-prebuilt-invoice-output.md)
-
-## Run your application
-
-1. Navigate to the folder where you have your **form_recognizer_quickstart.py** file.
-
-1. Type the following command in your terminal:
+Here's a snippet of the expected output:
```console
-python form_recognizer_quickstart.py
+ --Recognizing invoice #1--
+ Vendor Name: CONTOSO LTD. has confidence: 0.919
+ Vendor Address: 123 456th St New York, NY, 10001 has confidence: 0.907
+ Vendor Address Recipient: Contoso Headquarters has confidence: 0.919
+ Customer Name: MICROSOFT CORPORATION has confidence: 0.84
+ Customer Id: CID-12345 has confidence: 0.956
+ Customer Address: 123 Other St, Redmond WA, 98052 has confidence: 0.909
+ Customer Address Recipient: Microsoft Corp has confidence: 0.917
+ Invoice Id: INV-100 has confidence: 0.972
+ Invoice Date: 2019-11-15 has confidence: 0.971
+ Invoice Total: CurrencyValue(amount=110.0, symbol=$) has confidence: 0.97
+ Due Date: 2019-12-15 has confidence: 0.973
```
+To view the entire output, visit the [prebuilt invoice model output](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/v3-python-sdk-prebuilt-invoice-output.md) page in the Azure samples repository on GitHub.
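Since each prebuilt field carries a confidence score, a common post-processing step is to drop fields below a threshold before using them. A minimal sketch — the `(name, value, confidence)` tuples and the 0.9 threshold are illustrative, modeled on the output above:

```python
# Sketch: keep only extracted fields whose confidence meets a threshold.
def filter_fields(fields, min_confidence):
    return [(name, value) for name, value, confidence in fields if confidence >= min_confidence]

# Illustrative data, shaped like the output above.
sample_fields = [
    ("Vendor Name", "CONTOSO LTD.", 0.919),
    ("Customer Name", "MICROSOFT CORPORATION", 0.84),
    ("Invoice Id", "INV-100", 0.972),
]

print(filter_fields(sample_fields, 0.9))
# [('Vendor Name', 'CONTOSO LTD.'), ('Invoice Id', 'INV-100')]
```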
+ That's it, congratulations! In this quickstart, you used the Form Recognizer Python SDK to analyze various forms in different ways. Next, explore the reference documentation to learn more about the Form Recognizer v3.0 API.
applied-ai-services Try V3 Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-rest-api.md
Previously updated : 03/08/2022 Last updated : 03/16/2022
To learn more about Form Recognizer features and development options, visit our
The REST API supports the following models and capabilities:
+* 🆕 Read—Analyze and extract printed and handwritten text lines, words, locations, and detected languages.
* 🆕 General document—Analyze and extract text, tables, structure, key-value pairs, and named entities. * 🆕 W-2—Analyze and extract fields from W-2 tax documents, using a pre-trained W-2 model.
-* Layout—Analyze and extract tables, lines, words, and selection marks like radio buttons and check boxes in forms documents, without the need to train a model.
+* Layout—Analyze and extract tables, lines, words, and selection marks from documents, without the need to train a model.
* Custom—Analyze and extract form fields and other content from your custom forms, using models you trained with your own form types. * Invoices—Analyze and extract common fields from invoices, using a pre-trained invoice model. * Receipts—Analyze and extract common fields from receipts, using a pre-trained receipt model.
To learn more about Form Recognizer features and development options, visit our
## Analyze document
-Form Recognizer v3.0 consolidates the analyze document (POST) and get analyze results (GET) operations for layout, prebuilt models, and custom models into a single pair of operations by assigning `modelIds` to the POST and GET operations:
+Form Recognizer v3.0 consolidates the analyze document (POST) and get results (GET) operations for layout, prebuilt models, and custom models into a single pair of operations by assigning `modelIds` to the POST and GET operations:
```http POST /documentModels/{modelId}:analyze
In this quickstart, you'll use the following features to analyze and extract data and
* [cURL](https://curl.haxx.se/windows/) installed.
-* [PowerShell version 6.0+](/powershell/scripting/install/installing-powershell-core-on-windows), or a similar command-line application.
+* [PowerShell version 7.*+](/powershell/scripting/install/installing-powershell-on-windows?view=powershell-7.2&preserve-view=true), or a similar command-line application.
* A Cognitive Services or Form Recognizer resource. Once you have your Azure subscription, create a [single-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [multi-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) Form Recognizer resource in the Azure portal to get your key and endpoint. You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
In this quickstart, you'll use the following features to analyze and extract data and
> [!IMPORTANT] >
-> Remember to remove the key from your code when you're done, and never post it publicly. For production, use secure methods to store and access your credentials. See the Cognitive Services [security](../../../cognitive-services/cognitive-services-security.md) article for more information.
+> Remember to remove the key from your code when you're done, and never post it publicly. For production, use secure methods to store and access your credentials. For more information, *see* the Cognitive Services [security](../../../cognitive-services/cognitive-services-security.md) article.
## General document model
In this quickstart, you'll use the following features to analyze and extract data and
#### Request ```bash
-curl -v -i POST "https://{endpoint}/formrecognizer/documentModels/prebuilt-document:analyze?api-version=2022-01-30-preview" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{'urlSource': '{your-document-url}'}"
+curl -v -i POST "{endpoint}/formrecognizer/documentModels/prebuilt-document:analyze?api-version=2022-01-30-preview" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{'urlSource': '{your-document-url}'}"
``` #### Operation-Location
-You'll receive a `202 (Success)` response that includes an **Operation-Location** header. The value of this header contains a result ID that you can use to query the status of the asynchronous operation and get the results:
+You'll receive a `202 (Success)` response that includes an **Operation-Location** header. The value of this header contains a result ID that can be queried to get the status of the asynchronous operation:
https://{host}/formrecognizer/documentModels/{modelId}/analyzeResults/**{resultId}**?api-version=2022-01-30-preview
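If you capture the **Operation-Location** value in a shell script, the result ID can be sliced out with plain parameter expansion before the GET request. A minimal sketch (the sample header value below is illustrative):

```shell
# Sketch: extract the result ID from a captured Operation-Location header value.
operation_location="https://westus.api.cognitive.microsoft.com/formrecognizer/documentModels/prebuilt-document/analyzeResults/0e49604a-2d8e-4b15-b6b6-2ee4b9a86a0c?api-version=2022-01-30-preview"

# Keep everything after the last '/', then drop the query string.
result_id="${operation_location##*/}"
result_id="${result_id%%\?*}"

echo "$result_id"   # 0e49604a-2d8e-4b15-b6b6-2ee4b9a86a0c
```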
After you've called the **[Analyze document](https://westus.dev.cognitive.micros
#### Request ```bash
-curl -v -X GET "https://{endpoint}/formrecognizer/documentModels/prebuilt-document/analyzeResults/{resultId}?api-version=2022-01-30-preview" -H "Ocp-Apim-Subscription-Key: {subscription key}"
+curl -v -X GET "{endpoint}/formrecognizer/documentModels/prebuilt-document/analyzeResults/{resultId}?api-version=2022-01-30-preview" -H "Ocp-Apim-Subscription-Key: {subscription key}"
``` ### Examine the response
The `"analyzeResults"` node contains all of the recognized text. Text is organiz
#### Request ```bash
-curl -v -i POST "https://{endpoint}/formrecognizer/documentModels/prebuilt-layout:analyze?api-version=2022-01-30-preview" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{'urlSource': '{your-document-url}'}"
+curl -v -i POST "{endpoint}/formrecognizer/documentModels/prebuilt-layout:analyze?api-version=2022-01-30-preview" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{'urlSource': '{your-document-url}'}"
``` #### Operation-Location
-You'll receive a `202 (Success)` response that includes an **Operation-Location** header. The value of this header contains a result ID that you can use to query the status of the asynchronous operation and get the results:
+You'll receive a `202 (Success)` response that includes an **Operation-Location** header. The value of this header contains a result ID that can be queried to get the status of the asynchronous operation:
`https://{host}/formrecognizer/documentModels/{modelId}/analyzeResults/**{resultId}**?api-version=2022-01-30-preview`
After you've called the **[Analyze document](https://westus.api.cognitive.micros
#### Request ```bash
-curl -v -X GET "https://{endpoint}/formrecognizer/documentModels/prebuilt-layout/analyzeResults/{resultId}?api-version=2022-01-30-preview" -H "Ocp-Apim-Subscription-Key: {subscription key}"
+curl -v -X GET "{endpoint}/formrecognizer/documentModels/prebuilt-layout/analyzeResults/{resultId}?api-version=2022-01-30-preview" -H "Ocp-Apim-Subscription-Key: {subscription key}"
``` ### Examine the response
Before you run the command, make these changes:
#### Request ```bash
-curl -v -i POST "https://{endpoint}/formrecognizer/documentModels/prebuilt-invoice:analyze?api-version=2022-01-30-preview" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{'urlSource': '{your-document-url}'}"
+curl -v -i POST "{endpoint}/formrecognizer/documentModels/prebuilt-invoice:analyze?api-version=2022-01-30-preview" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{'urlSource': '{your-document-url}'}"
``` #### Operation-Location
-You'll receive a `202 (Success)` response that includes an **Operation-Location** header. The value of this header contains a result ID that you can use to query the status of the asynchronous operation and get the results:
+You'll receive a `202 (Success)` response that includes an **Operation-Location** header. The value of this header contains a result ID that can be queried to get the status of the asynchronous operation:
https://{host}/formrecognizer/documentModels/{modelId}/analyzeResults/**{resultId}**?api-version=2022-01-30-preview
After you've called the **[Analyze document](https://westus.api.cognitive.micros
#### Request ```bash
-curl -v -X GET "https://{endpoint}/formrecognizer/documentModels/prebuilt-invoice/analyzeResults/{resultId}?api-version=2022-01-30-preview" -H "Ocp-Apim-Subscription-Key: {subscription key}"
+curl -v -X GET "{endpoint}/formrecognizer/documentModels/prebuilt-invoice/analyzeResults/{resultId}?api-version=2022-01-30-preview" -H "Ocp-Apim-Subscription-Key: {subscription key}"
``` ### Examine the response
You'll receive a `200 (Success)` response with JSON output. The first field, `"s
The preview v3.0  [List models](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/GetModels) request returns a paged list of prebuilt models in addition to custom models. Only models with status of succeeded are included. In-progress or failed models can be enumerated via the [List Operations](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/GetOperations) request. Use the nextLink property to access the next page of models, if any. To get more information about each returned model, including the list of supported documents and their fields, pass the modelId to the [Get Model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/GetOperations)request. ```bash
-curl -v -X GET "https://{endpoint}/formrecognizer/documentModels?api-version=2022-01-30-preview"
+curl -v -X GET "{endpoint}/formrecognizer/documentModels?api-version=2022-01-30-preview"
``` ### Get a specific model
curl -v -X GET "https://{endpoint}/formrecognizer/documentModels?api-version=202
The preview v3.0 [Get model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/GetModel) retrieves information about a specific model with a status of succeeded. For failed and in-progress models, use the [Get Operation](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/GetOperation) to track the status of model creation operations and any resulting errors. ```bash
-curl -v -X GET "https://{endpoint}/formrecognizer/documentModels/{modelId}?api-version=2022-01-30-preview" -H "Ocp-Apim-Subscription-Key: {subscription key}"
+curl -v -X GET "{endpoint}/formrecognizer/documentModels/{modelId}?api-version=2022-01-30-preview" -H "Ocp-Apim-Subscription-Key: {subscription key}"
``` ### Delete a Model
curl -v -X GET "https://{endpoint}/formrecognizer/documentModels/{modelId}?api-v
The preview v3.0 [Delete model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/DeleteModel) request removes the custom model and the modelId can no longer be accessed by future operations. New models can be created using the same modelId without conflict. ```bash
-curl -v -X DELETE "https://{endpoint}/formrecognizer/documentModels/{modelId}?api-version=2022-01-30-preview" -H "Ocp-Apim-Subscription-Key: {subscription key}"
+curl -v -X DELETE "{endpoint}/formrecognizer/documentModels/{modelId}?api-version=2022-01-30-preview" -H "Ocp-Apim-Subscription-Key: {subscription key}"
``` ## Next steps
azure-arc Agent Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes-archive.md
Title: Archive for What's new with Azure Arc-enabled servers agent description: The What's new release notes in the Overview section for Azure Arc-enabled servers agent contains six months of activity. Thereafter, the items are removed from the main article and put into this article. Previously updated : 02/28/2022 Last updated : 03/17/2022
The Azure Connected Machine agent receives improvements on an ongoing basis. Thi
- Known issues - Bug fixes
+## Version 1.11 - September 2021
+
+### Fixed
+
+- The agent can now be installed on Windows systems with the [System objects: Require case insensitivity for non-Windows subsystems](/windows/security/threat-protection/security-policy-settings/system-objects-require-case-insensitivity-for-non-windows-subsystems) policy set to Disabled.
+- The guest configuration policy agent will now automatically retry if an error is encountered during service start or restart events.
+- Fixed an issue that prevented guest configuration audit policies from successfully executing on Linux machines.
+ ## Version 1.10 - August 2021 ### Fixed
azure-arc Agent Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md
Title: What's new with Azure Arc-enabled servers agent description: This article has release notes for Azure Arc-enabled servers agent. For many of the summarized issues, there are links to more details. Previously updated : 03/02/2022 Last updated : 03/17/2022
The Azure Connected Machine agent receives improvements on an ongoing basis. To
This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [archive for What's new with Azure Arc-enabled servers agent](agent-release-notes-archive.md).
+## Version 1.16 - March 2022
+
+### New features
+
+- You can now granularly control which extensions are allowed to be deployed to your server and whether or not Guest Configuration should be enabled. See [local agent controls to enable or disable capabilities](security-overview.md#local-agent-security-controls) for more information.
+
+### Fixed
+
+- The "Arc" proxy bypass keyword no longer includes Azure Active Directory endpoints on Linux. Azure Storage endpoints for extension downloads are now included with the "Arc" keyword.
+ ## Version 1.15 - February 2022 ### Known issues+ - The "Arc" proxy bypass feature on Linux includes some endpoints that belong to Azure Active Directory. As a result, if you only specify the "Arc" bypass rule, traffic destined for Azure Active Directory endpoints will not use the proxy server as expected. This issue will be fixed in an upcoming release. ### New features
This page is updated monthly, so revisit it regularly. If you're looking for ite
- Added TLS 1.2 check - Azure Arc network endpoints are now required, onboarding will abort if they are not accessible - New `--skip-network-check` flag to override the new network check behavior
+ - On-demand network check now available using `azcmagent check`
- [Proxy bypass](manage-agent.md#proxy-bypass-for-private-endpoints) is now available for customers using private endpoints. This allows you to send Azure Active Directory and Azure Resource Manager traffic through a proxy server, but skip the proxy server for traffic that should stay on the local network to reach private endpoints. - Oracle Linux 8 is now supported
This page is updated monthly, so revisit it regularly. If you're looking for ite
- `azcmagent_proxy remove` command on Linux now correctly removes environment variables on Red Hat Enterprise Linux and related distributions. - `azcmagent logs` now includes the computer name and timestamp to help disambiguate log files.
-## Version 1.11 - September 2021
-
-### Fixed
--- The agent can now be installed on Windows systems with the [System objects: Require case insensitivity for non-Windows subsystems](/windows/security/threat-protection/security-policy-settings/system-objects-require-case-insensitivity-for-non-windows-subsystems) policy set to Disabled.-- The guest configuration policy agent will now automatically retry if an error is encountered during service start or restart events.-- Fixed an issue that prevented guest configuration audit policies from successfully executing on Linux machines.- ## Next steps - Before evaluating or enabling Azure Arc-enabled servers across multiple hybrid machines, review [Connected Machine agent overview](agent-overview.md) to understand requirements, technical details about the agent, and deployment methods.
azure-arc Manage Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-agent.md
Title: Managing the Azure Arc-enabled servers agent description: This article describes the different management tasks that you will typically perform during the lifecycle of the Azure Connected Machine agent. Previously updated : 02/28/2022 Last updated : 03/17/2022
After initial deployment of the Azure Connected Machine agent, you may need to r
The azcmagent tool is used to configure the Azure Connected Machine agent during installation, or modify the initial configuration of the agent after installation. azcmagent.exe provides command-line parameters to customize the agent and view its status:
+* **check** - To troubleshoot network connectivity issues
+ * **connect** - To connect the machine to Azure Arc * **disconnect** - To disconnect the machine from Azure Arc
You can perform a **Connect** and **Disconnect** manually while logged on intera
>[!NOTE] >You must have *Administrator* permissions on Windows or *root* access permissions on Linux machines to run **azcmagent**.
+### Check
+
+This parameter allows you to run network connectivity tests to troubleshoot networking issues between the agent and Azure services. The network connectivity check includes all [required Azure Arc network endpoints](network-requirements.md#urls), but does not include endpoints accessed by extensions you install.
+
+When running a network connectivity check, you must provide the name of the Azure region (for example, eastus) that you want to test. It's also recommended to use the `--verbose` parameter to see the results of both successful and unsuccessful tests.
+
+`azcmagent check --location <regionName> --verbose`
+ ### Connect This parameter specifies that a resource representing the machine is created in Azure Resource Manager. The resource is created in the specified subscription and resource group, and data about the machine is stored in the Azure region specified by the `--location` setting. If no name is specified, the default resource name is the hostname of the machine.
azure-arc Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/security-overview.md
Title: Security overview description: Security information about Azure Arc-enabled servers. Previously updated : 08/30/2021 Last updated : 03/17/2022 # Azure Arc-enabled servers security overview
The Azure Connected Machine agent is composed of three services, which run on yo
The guest configuration and extension services run as Local System on Windows, and as root on Linux.
+## Local agent security controls
+
+Starting with agent version 1.16, you can optionally limit the extensions that can be installed on your server and disable Guest Configuration. These controls can be useful when connecting servers to Azure that need to be monitored or secured by Azure, but should not allow arbitrary management capabilities like running scripts with Custom Script Extension or configuring settings on the server with Guest Configuration.
+
+These security controls can only be configured by running a command on the server itself and cannot be modified from Azure. This approach preserves the server admin's intent when enabling remote management scenarios with Azure Arc, but also makes these settings more difficult to change if you later decide to do so. This feature is intended for particularly sensitive servers (for example, Active Directory Domain Controllers, servers that handle payment data, and servers subject to strict change control measures). In most other cases, it is not necessary to modify these settings.
+
+### Extension allowlists and blocklists
+
+To limit which [extensions](manage-vm-extensions.md) can be installed on your server, you can configure lists of the extensions you wish to allow and block on the server. The extension manager will evaluate all requests to install, update, or upgrade extensions against the allowlist and blocklist to determine if the extension can be installed on the server. Delete requests are always allowed.
+
+The most secure option is to explicitly allow the extensions you expect to be installed. Any extension not in the allowlist is automatically blocked. To configure the Azure Connected Machine agent to allow only the Log Analytics Agent for Linux and the Dependency Agent for Linux, run the following command on each server:
+
+```bash
+azcmagent config set extensions.allowlist "Microsoft.EnterpriseCloud.Monitoring/OMSAgentForLinux,Microsoft.Azure.Monitoring.DependencyAgent/DependencyAgentLinux"
+```
+
+You can block one or more extensions by adding them to the blocklist. If an extension is present in both the allowlist and blocklist, it will be blocked. To block the Custom Script extension for Linux, run the following command:
+
+```bash
+azcmagent config set extensions.blocklist "Microsoft.Azure.Extensions/CustomScript"
+```
+
+Extensions are specified by their publisher and type, separated by a forward slash. See the list of the [most common extensions](manage-vm-extensions.md) in the docs or list the VM extensions already installed on your server in the [portal](manage-vm-extensions-portal.md#list-extensions-installed), [Azure PowerShell](manage-vm-extensions-powershell.md#list-extensions-installed), or [Azure CLI](manage-vm-extensions-cli.md#list-extensions-installed).
+
+The table below describes the behavior when performing an extension operation against an agent that has the allowlist or blocklist configured.
+
+| Operation | In the allowlist | In the blocklist | In both the allowlist and blocklist | Not in any list, but an allowlist is configured |
+|--|--|--|--|--|
+| Install extension | Allowed | Blocked | Blocked | Blocked |
+| Update (reconfigure) extension | Allowed | Blocked | Blocked | Blocked |
+| Upgrade extension | Allowed | Blocked | Blocked | Blocked |
+| Delete extension | Allowed | Allowed | Allowed | Allowed |
+
+> [!IMPORTANT]
+> If an extension is already installed on your server before you configure an allowlist or blocklist, it will not automatically be removed. It is your responsibility to delete the extension from Azure to fully remove it from the machine. Delete requests are always accepted to accommodate this scenario. Once deleted, the allowlist and blocklist will determine whether or not to allow future install attempts.
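The rules in the table and note above can be condensed into a small decision function. This is an illustrative sketch of the documented behavior, not the agent's actual implementation; the extension names are examples taken from this article:

```python
def extension_operation_allowed(operation, extension, allowlist, blocklist):
    """Sketch of the documented allowlist/blocklist evaluation.

    `extension` is a "Publisher/Type" string. Per the table above:
    delete is always allowed, the blocklist wins over the allowlist,
    and once an allowlist is configured, anything not on it is blocked.
    """
    if operation == "delete":
        return True
    if extension in blocklist:
        return False
    if allowlist:  # an allowlist is configured
        return extension in allowlist
    return True  # no allowlist configured and not explicitly blocked

allow = ["Microsoft.EnterpriseCloud.Monitoring/OMSAgentForLinux"]
block = ["Microsoft.Azure.Extensions/CustomScript"]
print(extension_operation_allowed("install", "Microsoft.Azure.Extensions/CustomScript", allow, block))  # → False
print(extension_operation_allowed("delete", "Microsoft.Azure.Extensions/CustomScript", allow, block))   # → True
```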
+
+### Enable or disable Guest Configuration
+
+Azure Policy's Guest Configuration feature enables you to audit and configure settings on your server from Azure. If you don't want to allow this functionality, you can disable Guest Configuration on your server by running the following command:
+
+```bash
+azcmagent config set guestconfiguration.enabled false
+```
+
+When Guest Configuration is disabled, any Guest Configuration policies assigned to the machine in Azure will report as non-compliant. Consider [creating an exemption](../../governance/policy/concepts/exemption-structure.md) for these machines or [changing the scope](../../governance/policy/concepts/assignment-structure.md#excluded-scopes) of your policy assignments if you don't want to see these machines reported as non-compliant.
+
+### Locked down machine best practices
+
+When configuring the Azure Connected Machine agent with a reduced set of capabilities, it is important to consider the mechanisms that someone could use to remove those restrictions and implement appropriate controls. Anybody capable of running commands as an administrator or root user on the server can change the Azure Connected Machine agent configuration. Extensions and guest configuration policies execute in privileged contexts on your server, and as such may be able to change the agent configuration. If you apply these security controls to lock down the agent, Microsoft recommends the following best practices to ensure only local server admins can update the agent configuration:
+
+* Use allowlists for extensions instead of blocklists whenever possible.
+* Don't include the Custom Script Extension in the extension allowlist to prevent execution of arbitrary scripts that could change the agent configuration.
+* Disable Guest Configuration to prevent the use of custom Guest Configuration policies that could change the agent configuration.
+
+### Example configuration for monitoring and security scenarios
+
+It's common to use Azure Arc to monitor your servers with Azure Monitor and Microsoft Sentinel and secure them with Microsoft Defender for Cloud. The following configuration samples can help you configure the Azure Arc agent to only allow these scenarios.
+
+#### Azure Monitor Agent only
+
+On your Windows servers, run the following commands in an elevated command console:
+
+```powershell
+azcmagent config set extensions.allowlist "Microsoft.Azure.Monitor/AzureMonitorWindowsAgent"
+azcmagent config set guestconfiguration.enabled false
+```
+
+On your Linux servers, run the following commands:
+
+```bash
+sudo azcmagent config set extensions.allowlist "Microsoft.Azure.Monitor/AzureMonitorLinuxAgent"
+sudo azcmagent config set guestconfiguration.enabled false
+```
+
+#### Log Analytics and dependency (Azure Monitor VM Insights) only
+
+This configuration is for the legacy Log Analytics agents and the dependency agent.
+
+On your Windows servers, run the following commands in an elevated console:
+
+```powershell
+azcmagent config set extensions.allowlist "Microsoft.EnterpriseCloud.Monitoring/MicrosoftMonitoringAgent,Microsoft.Azure.Monitoring.DependencyAgent/DependencyAgentWindows"
+azcmagent config set guestconfiguration.enabled false
+```
+
+On your Linux servers, run the following commands:
+
+```bash
+sudo azcmagent config set extensions.allowlist "Microsoft.EnterpriseCloud.Monitoring/OMSAgentForLinux,Microsoft.Azure.Monitoring.DependencyAgent/DependencyAgentLinux"
+sudo azcmagent config set guestconfiguration.enabled false
+```
+
+#### Monitoring and security
+
+Microsoft Defender for Cloud enables additional extensions on your server to identify vulnerable software and enable Microsoft Defender for Endpoint (if configured). Microsoft Defender for Cloud also uses Guest Configuration for its regulatory compliance feature. Because a custom Guest Configuration assignment could be used to undo the agent limitations, you should carefully evaluate whether you need the regulatory compliance feature, and as a result Guest Configuration, enabled on the machine.
+
+On your Windows servers, run the following commands in an elevated command console:
+
+```powershell
+azcmagent config set extensions.allowlist "Microsoft.EnterpriseCloud.Monitoring/MicrosoftMonitoringAgent,Qualys/WindowsAgent.AzureSecurityCenter,Microsoft.Azure.AzureDefenderForServers/MDE.Windows,Microsoft.Azure.AzureDefenderForSQL/AdvancedThreatProtection.Windows"
+azcmagent config set guestconfiguration.enabled true
+```
+
+On your Linux servers, run the following commands:
+
+```bash
+sudo azcmagent config set extensions.allowlist "Microsoft.EnterpriseCloud.Monitoring/OMSAgentForLinux,Qualys/LinuxAgent.AzureSecurityCenter,Microsoft.Azure.AzureDefenderForServers/MDE.Linux"
+sudo azcmagent config set guestconfiguration.enabled true
+```
+ ## Using a managed identity with Azure Arc-enabled servers By default, the Azure Active Directory system assigned identity used by Arc can only be used to update the status of the Azure Arc-enabled server in Azure. For example, the *last seen* heartbeat status. You can optionally assign other roles to the identity if an application on your server uses the system assigned identity to access other Azure services. To learn more about configuring a system-assigned managed identity to access Azure resources, see [Authenticate against Azure resources with Azure Arc-enabled servers](managed-identity-authentication.md).
azure-maps Drawing Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-requirements.md
Title: Drawing package requirements in Microsoft Azure Maps Creator
+ Title: Drawing package requirements in Microsoft Azure Maps Creator
+ description: Learn about the Drawing package requirements to convert your facility design files to map data Previously updated : 07/02/2021 Last updated : 03/18/2022
The [Azure Maps Conversion service](/rest/api/maps/v2/conversion) converts the D
For easy reference, here are some terms and definitions that are important as you read this article.
-| Term | Definition |
-|:-|:|
-| Layer | An AutoCAD DWG layer from the drawing file.|
-| Entity | An AutoCAD DWG entity from the drawing file. |
+| Term | Definition |
+|:|:|
+| Layer | An AutoCAD DWG layer from the drawing file. |
+| Entity| An AutoCAD DWG entity from the drawing file. |
| Xref | A file in AutoCAD DWG file format, attached to the primary drawing as an external reference. | | Level | An area of a building at a set elevation. For example, the floor of a building. |
-| Feature | An instance of an object produced from the Conversion service that combines a geometry with metadata information. |
-| Feature classes | A common blueprint for features. For example, a *unit* is a feature class, and an *office* is a feature. |
+|Feature| An instance of an object produced from the Conversion service that combines a geometry with metadata information. |
+|Feature classes| A common blueprint for features. For example, a *unit* is a feature class, and an *office* is a feature. |
## Drawing package structure
The Drawing package must be zipped into a single archive file, with the .zip ext
The [Azure Maps Conversion service](/rest/api/maps/v2/conversion) does the following on each DWG file: - Extracts feature classes:
- - Levels
- - Units
- - Zones
- - Openings
- - Walls
- - Vertical penetrations
+ - Levels
+ - Units
+ - Zones
+ - Openings
+ - Walls
+ - Vertical penetrations
- Produces a *Facility* feature. - Produces a minimal set of default Category features to be referenced by other features:
- - room
- - structure
- - wall
- - opening.door
- - zone
- - facility
-
+ - room
+ - structure
+ - wall
+ - opening.door
+ - zone
+ - facility
+ ## DWG file requirements A single DWG file is required for each level of the facility. All data of a single level must be contained in a single DWG file. Any external references (_xrefs_) must be bound to the parent drawing. For example, a facility with three levels will have three DWG files in the Drawing package.
Although there are requirements when you use the manifest objects, not all objec
>[!NOTE] > Unless otherwise specified, all properties with a string property type allow for one thousand characters. -
-| Object | Required | Description |
-| :-- | :- | :- |
+| Object | Required | Description |
+| :-- | :- | :-- |
| `version` | true |Manifest schema version. Currently, only version 1.1 is supported.| | `directoryInfo` | true | Outlines the facility geographic and contact information. It can also be used to outline an occupant geographic and contact information. | | `buildingLevels` | true | Specifies the levels of the buildings and the files containing the design of the levels. |
The next sections detail the requirements for each object.
| Property | Type | Required | Description | |--||-|-|
-| `name` | string | true | Name of building. |
-| `streetAddress`| string | false | Address of building. |
-|`unit` | string | false | Unit in building. |
-| `locality` | string | false | Name of a city, town, area, neighborhood, or region.|
-| `adminDivisions` | JSON array of strings | false | An array containing address designations. For example: (Country, State) Use ISO 3166 country codes and ISO 3166-2 state/territory codes. |
-| `postalCode` | string | false | The mail sorting code. |
-| `hoursOfOperation` | string | false | Adheres to the [OSM Opening Hours](https://wiki.openstreetmap.org/wiki/Key:opening_hours/specification) format. |
-| `phone` | string | false | Phone number associated with the building. |
-| `website` | string | false | Website associated with the building. |
-| `nonPublic` | bool | false | Flag specifying if the building is open to the public. |
-| `anchorLatitude` | numeric | false | Latitude of a facility anchor (pushpin). |
-| `anchorLongitude` | numeric | false | Longitude of a facility anchor (pushpin). |
-| `anchorHeightAboveSeaLevel` | numeric | false | Height of the facility's ground floor above sea level, in meters. |
-| `defaultLevelVerticalExtent` | numeric | false | Default height (thickness) of a level of this facility to use when a level's `verticalExtent` is undefined. |
+| `name` |string| true | Name of building. |
+| `streetAddress`|string|false| Address of building. |
+|`unit` |string| false | Unit in building. |
+|`locality` |string | false | Name of a city, town, area, neighborhood, or region.|
+|`adminDivisions`|JSON array of strings | false| An array containing address designations. For example: (Country, State) Use ISO 3166 country codes and ISO 3166-2 state/territory codes. |
+|`postalCode`|string| false | The mail sorting code. |
+|`hoursOfOperation` |string|false| Adheres to the [OSM Opening Hours](https://wiki.openstreetmap.org/wiki/Key:opening_hours/specification) format. |
+|`phone` |string| false | Phone number associated with the building. |
+|`website` |string| false | Website associated with the building. |
+|`nonPublic`|bool| false | Flag specifying if the building is open to the public. |
+|`anchorLatitude`|numeric|false | Latitude of a facility anchor (pushpin). |
+|`anchorLongitude`|numeric |false | Longitude of a facility anchor (pushpin). |
+|`anchorHeightAboveSeaLevel`| numeric | false | Height of the facility's ground floor above sea level, in meters. |
+|`defaultLevelVerticalExtent`| numeric | false | Default height (thickness) of a level of this facility to use when a level's `verticalExtent` is undefined. |
### `buildingLevels` The `buildingLevels` object contains a JSON array of buildings levels.
-| Property | Type | Required | Description |
-|--||-|-|
-|`levelName` |string |true | Descriptive level name. For example: Floor 1, Lobby, Blue Parking, or Basement.|
-|`ordinal` | integer | true | Determines the vertical order of levels. Every facility must have a level with ordinal 0. |
-|`heightAboveFacilityAnchor` | numeric | false | Level height above the anchor in meters. |
-| `verticalExtent` | numeric | false | Floor-to-ceiling height (thickness) of the level in meters. |
-|`filename` | string | true | File system path of the CAD drawing for a building level. It must be relative to the root of the building's zip file. |
+| Property | Type | Required | Description |
+|--|-|-|-|
+|`levelName`|string |true |Descriptive level name. For example: Floor 1, Lobby, Blue Parking, or Basement.|
+|`ordinal` |integer|true | Determines the vertical order of levels. Every facility must have a level with ordinal 0. |
+|`heightAboveFacilityAnchor`| numeric | false |Level height above the anchor in meters. |
+|`verticalExtent`|numeric|false| Floor-to-ceiling height (thickness) of the level in meters. |
+|`filename` |string |true |File system path of the CAD drawing for a building level. It must be relative to the root of the building's zip file. |
### `georeference` | Property | Type | Required | Description | |--||-|-|
-|`lat` | numeric | true | Decimal representation of degrees latitude at the facility drawing's origin. The origin coordinates must be in WGS84 Web Mercator (`EPSG:3857`).|
-|`lon` |numeric| true| Decimal representation of degrees longitude at the facility drawing's origin. The origin coordinates must be in WGS84 Web Mercator (`EPSG:3857`). |
-|`angle`| numeric| true| The clockwise angle, in degrees, between true north and the drawing's vertical (Y) axis. |
+|`lat`|numeric|true|Decimal representation of degrees latitude at the facility drawing's origin. The origin coordinates must be in WGS84 Web Mercator (`EPSG:3857`).|
+|`lon`|numeric|true|Decimal representation of degrees longitude at the facility drawing's origin. The origin coordinates must be in WGS84 Web Mercator (`EPSG:3857`).|
+|`angle`|numeric|true|The clockwise angle, in degrees, between true north and the drawing's vertical (Y) axis.|
### `dwgLayers`
-| Property | Type | Required | Description |
-|--||-|-|
-|`exterior` |array of strings| true| Names of layers that define the exterior building profile.|
-|`unit`| array of strings| true| Names of layers that define units.|
-|`wall`| array of strings |false| Names of layers that define walls.|
-|`door` |array of strings| false | Names of layers that define doors.|
-|`unitLabel` |array of strings| false |Names of layers that define names of units.|
-|`zone` | array of strings | false | Names of layers that define zones.|
-|`zoneLabel` | array of strings | false | Names of layers that define names of zones.|
+| Property | Type | Required | Description |
+|--||-||
+|`exterior` |array of strings|true |Names of layers that define the exterior building profile.|
+|`unit` |array of strings|false|Names of layers that define units. |
+|`wall` |array of strings|false|Names of layers that define walls. |
+|`door` |array of strings|false|Names of layers that define doors. |
+|`unitLabel`|array of strings|false|Names of layers that define names of units. |
+|`zone` |array of strings|false|Names of layers that define zones. |
+|`zoneLabel`|array of strings|false|Names of layers that define names of zones. |
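Putting the `version`, `directoryInfo`, `buildingLevels`, `georeference`, and `dwgLayers` objects together, a minimal manifest might look like the fragment below. The nesting and all values shown (names, file path, coordinates, layer names) are an illustrative sketch only; consult the Conversion service documentation for the authoritative manifest schema.

```json
{
  "version": "1.1",
  "directoryInfo": {
    "name": "Contoso Building"
  },
  "buildingLevels": {
    "levels": [
      { "levelName": "Ground floor", "ordinal": 0, "filename": "./level0.dwg" }
    ]
  },
  "georeference": {
    "lat": 47.6367,
    "lon": -122.1333,
    "angle": 0
  },
  "dwgLayers": {
    "exterior": ["OUTLINE"],
    "unit": ["UNITS"]
  }
}
```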
### `unitProperties`
The `unitProperties` object contains a JSON array of unit properties.
| Property | Type | Required | Description | |--||-|-|
-|`unitName` |string |true |Name of unit to associate with this `unitProperty` record. This record is only valid when a label matching `unitName` is found in the `unitLabel` layers. |
-|`categoryName`| string| false |Purpose of the unit. A list of values that the provided rendering styles can make use of is available [here](https://atlas.microsoft.com/sdk/javascript/indoor/0.1/categories.json). |
-|`occupants` |array of directoryInfo objects |false |List of occupants for the unit. |
-|`nameAlt`| string| false| Alternate name of the unit. |
-|`nameSubtitle`| string |false| Subtitle of the unit. |
-|`addressRoomNumber`| string| false| Room, unit, apartment, or suite number of the unit.|
-|`verticalPenetrationCategory`| string| false| When this property is defined, the resulting feature is a vertical penetration (VRT) rather than a unit. You can use vertical penetrations to go to other vertical penetration features in the levels above or below it. Vertical penetration is a [Category](https://aka.ms/pa-indoor-spacecategories) name. If this property is defined, the `categoryName` property is overridden with `verticalPenetrationCategory`. |
+|`unitName`|string|true|Name of unit to associate with this `unitProperty` record. This record is only valid when a label matching `unitName` is found in the `unitLabel` layers. |
+|`categoryName`|string|false|Purpose of the unit. A list of values that the provided rendering styles can make use of is available [here](https://atlas.microsoft.com/sdk/javascript/indoor/0.1/categories.json).|
+|`occupants`|array of directoryInfo objects |false |List of occupants for the unit. |
+|`nameAlt`|string|false|Alternate name of the unit. |
+|`nameSubtitle`|string|false|Subtitle of the unit. |
+|`addressRoomNumber`|string|false|Room, unit, apartment, or suite number of the unit.|
+|`verticalPenetrationCategory`|string|false| When this property is defined, the resulting feature is a vertical penetration (VRT) rather than a unit. You can use vertical penetrations to go to other vertical penetration features in the levels above or below it. Vertical penetration is a [Category](https://aka.ms/pa-indoor-spacecategories) name. If this property is defined, the `categoryName` property is overridden with `verticalPenetrationCategory`. |
|`verticalPenetrationDirection`| string| false |If `verticalPenetrationCategory` is defined, optionally define the valid direction of travel. The permitted values are: `lowToHigh`, `highToLow`, `both`, and `closed`. The default value is `both`. The value is case-sensitive.|
| `nonPublic` | bool | false | Indicates if the unit is open to the public. |
| `isRoutable` | bool | false | When this property is set to `false`, you can't go to or through the unit. The default value is `true`. |
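For illustration, a `unitProperties` entry combining the fields above might look like the following sketch. The unit names, category values, and occupant entry are hypothetical, not taken from the sample package:

```json
{
  "unitProperties": [
    {
      "unitName": "UNIT101",
      "categoryName": "room.office",
      "nameAlt": "Office 101",
      "addressRoomNumber": "101",
      "occupants": [
        { "name": "Contoso Facilities" }
      ]
    },
    {
      "unitName": "STAIR01",
      "verticalPenetrationCategory": "verticalPenetration.stairs",
      "verticalPenetrationDirection": "both"
    }
  ]
}
```

The second entry becomes a vertical penetration rather than a unit because `verticalPenetrationCategory` is defined.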
The `zoneProperties` object contains a JSON array of zone properties.
| Property | Type | Required | Description |
|----------|------|----------|-------------|
-|zoneName |string |true |Name of zone to associate with `zoneProperty` record. This record is only valid when a label matching `zoneName` is found in the `zoneLabel` layer of the zone. |
-|categoryName| string| false |Purpose of the zone. A list of values that the provided rendering styles can make use of is available [here](https://atlas.microsoft.com/sdk/javascript/indoor/0.1/categories.json).|
-|zoneNameAlt| string| false |Alternate name of the zone. |
-|zoneNameSubtitle| string | false |Subtitle of the zone. |
-|zoneSetId| string | false | Set ID to establish a relationship among multiple zones so that they can be queried or selected as a group. For example, zones that span multiple levels. |
+|zoneName |string |true |Name of zone to associate with `zoneProperty` record. This record is only valid when a label matching `zoneName` is found in the `zoneLabel` layer of the zone. |
+|categoryName| string| false |Purpose of the zone. A list of values that the provided rendering styles can make use of is available [here](https://atlas.microsoft.com/sdk/javascript/indoor/0.1/categories.json).|
+|zoneNameAlt| string| false |Alternate name of the zone. |
+|zoneNameSubtitle| string | false |Subtitle of the zone. |
+|zoneSetId| string | false | Set ID to establish a relationship among multiple zones so that they can be queried or selected as a group. For example, zones that span multiple levels. |
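Similarly, a hedged sketch of a `zoneProperties` entry; the zone name, category, and set ID are illustrative:

```json
{
  "zoneProperties": [
    {
      "zoneName": "WING-A",
      "categoryName": "zone.assembly",
      "zoneNameAlt": "West Wing",
      "zoneNameSubtitle": "Building 1",
      "zoneSetId": "WING-A-ALL-LEVELS"
    }
  ]
}
```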
### Sample Drawing package manifest
-Below is the manifest file for the sample Drawing package. To download the entire package, see [sample Drawing package](https://github.com/Azure-Samples/am-creator-indoor-data-examples).
+Below is the manifest file for the sample Drawing package. Go to the [Sample Drawing package for Azure Maps Creator](https://github.com/Azure-Samples/am-creator-indoor-data-examples) on GitHub to download the entire package.
#### Manifest file
When your Drawing package meets the requirements, you can use the [Azure Maps Co
> [Drawing Package Guide](drawing-package-guide.md)

> [!div class="nextstepaction"]
->[Creator for indoor maps](creator-indoor-maps.md)
+> [Creator for indoor maps](creator-indoor-maps.md)
> [!div class="nextstepaction"]
> [Tutorial: Creating a Creator indoor map](tutorial-creator-indoor-maps.md)

> [!div class="nextstepaction"]
-> [Indoor maps dynamic styling](indoor-map-dynamic-styling.md)
+> [Indoor maps dynamic styling](indoor-map-dynamic-styling.md)
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
Previously updated : 07/22/2021 Last updated : 03/16/2022 # Overview of Azure Monitor agents
Last updated 07/22/2021
Virtual machines and other compute resources require an agent to collect monitoring data required to measure the performance and availability of their guest operating system and workloads. This article describes the agents used by Azure Monitor and helps you determine which you need to meet the requirements for your particular environment.

> [!NOTE]
-> Azure Monitor recently launched a new agent, the Azure Monitor agent, that provides all capabilities necessary to collect guest operating system monitoring data. While there are multiple legacy agents that exist due to the consolidation of Azure Monitor and Log Analytics, each with their unique capabilities with some overlap, we recommend that you use the new agent that aims to consolidate features from all existing agents, and provide additional benefits. [Learn More](./azure-monitor-agent-overview.md)
+> Azure Monitor recently launched a new agent, the [Azure Monitor agent](./azure-monitor-agent-overview.md), that provides all capabilities necessary to collect guest operating system monitoring data. **Use this new agent unless you're affected by [these current limitations](./azure-monitor-agent-overview.md#current-limitations)**, as it consolidates the features of all the legacy agents listed below and provides additional benefits. If any of these limitations affect you today, you may continue using the legacy agents listed below until **August 2024**. [Learn more](./azure-monitor-agent-overview.md)
## Summary of agents
-The following tables provide a quick comparison of the Azure Monitor agents for Windows and Linux. Further detail on each is provided in the section below.
+The following tables provide a quick comparison of the telemetry agents for Windows and Linux. Further detail on each is provided in the section below.
### Windows agents
Use the Azure Monitor agent if you need to:
- Use different [solutions](../monitor-reference.md#insights-and-curated-visualizations) to monitor a particular service or application. */ -->
-Limitations of the Azure Monitor Agent include:
-
-- Not yet supported by all features in production. See [Supported services and features](./azure-monitor-agent-overview.md#supported-services-and-features).
-- No support yet for networking scenarios involving private links.
-- No support yet collecting custom logs (files) or IIS log files.
-- No support yet for Event Hubs and Storage accounts as destinations.
-- No support for Hybrid Runbook workers.
+When compared with the legacy agents, the Azure Monitor Agent has [these limitations currently](./azure-monitor-agent-overview.md#current-limitations).
## Log Analytics agent
-The [Log Analytics agent](./log-analytics-agent.md) collects monitoring data from the guest operating system and workloads of virtual machines in Azure, other cloud providers, and on-premises machines. It sends data to a Log Analytics workspace. The Log Analytics agent is the same agent used by System Center Operations Manager, and you can multihome agent computers to communicate with your management group and Azure Monitor simultaneously. This agent is also required by certain insights in Azure Monitor and other services in Azure.
+The legacy [Log Analytics agent](./log-analytics-agent.md) collects monitoring data from the guest operating system and workloads of virtual machines in Azure, other cloud providers, and on-premises machines. It sends data to a Log Analytics workspace. The Log Analytics agent is the same agent used by System Center Operations Manager, and you can multihome agent computers to communicate with your management group and Azure Monitor simultaneously. This agent is also required by certain insights in Azure Monitor and other services in Azure.
> [!NOTE]
> The Log Analytics agent for Windows is often referred to as Microsoft Monitoring Agent (MMA). The Log Analytics agent for Linux is often referred to as OMS agent.
azure-monitor Azure Monitor Agent Data Collection Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-data-collection-endpoint.md
Title: Using data collection endpoints with Azure Monitor agent (preview)
+ Title: Using data collection endpoints with Azure Monitor agent
description: Use data collection endpoints to uniquely configure ingestion settings for your machines. Previously updated : 1/5/2022 Last updated : 3/16/2022
-# Using data collection endpoints with Azure Monitor agent (preview)
+# Using data collection endpoints with Azure Monitor agent
[Data Collection Endpoints (DCEs)](../essentials/data-collection-endpoint-overview.md) allow you to uniquely configure ingestion settings for your machines, giving you greater control over your networking requirements.

## Create data collection endpoint
-See [Data collection endpoints in Azure Monitor (preview)](../essentials/data-collection-endpoint-overview.md) for details on data collection endpoints and how to create them.
+See [Data collection endpoints in Azure Monitor](../essentials/data-collection-endpoint-overview.md) for details on data collection endpoints and how to create them.
## Create endpoint association in Azure portal

Use **Data collection rules** in the portal to associate endpoints with a resource (for example, a virtual machine) or a set of resources. Create a new rule or open an existing rule. In the **Resources** tab, click the **Data collection endpoint** drop-down to associate an existing endpoint for your resource in the same region (or select multiple resources in the same region to bulk-assign an endpoint for them). Doing this creates an association per resource that links the endpoint to the resource. The Azure Monitor agent running on these resources will then use the endpoint to upload data to Azure Monitor.
azure-monitor Azure Monitor Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-overview.md
description: Overview of the Azure Monitor agent, which collects monitoring data
Previously updated : 3/9/2022 Last updated : 3/16/2022
The Azure Monitor agent (AMA) collects monitoring data from the guest operating
Here's an **introductory video** explaining all about this new agent, including a quick demo of how to set things up using the Azure Portal: [ITOps Talk: Azure Monitor Agent](https://www.youtube.com/watch?v=f8bIrFU8tCs) ## Relationship to other agents
-The Azure Monitor agent replaces the following legacy agents that are currently used by Azure Monitor to collect guest data from virtual machines ([view known gaps](../faq.yml)):
+The Azure Monitor agent is meant to replace the following legacy monitoring agents that are currently used by Azure Monitor to collect guest data from virtual machines ([view known gaps](../faq.yml)):
- [Log Analytics agent](./log-analytics-agent.md): Sends data to a Log Analytics workspace and supports VM insights and monitoring solutions. - [Diagnostics extension](./diagnostics-extension-overview.md): Sends data to Azure Monitor Metrics (Windows only), Azure Event Hubs, and Azure Storage.
The following table shows the current support for the Azure Monitor agent with o
| Azure service | Current support | More information |
|:--------------|:----------------|:-----------------|
| [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) | Private preview | [Sign-up link](https://aka.ms/AMAgent) |
-| [Microsoft Sentinel](../../sentinel/overview.md) | <ul><li>Windows Forwarding Event (WEF): [Public preview](../../sentinel/data-connectors-reference.md#windows-forwarded-events-preview)</li><li>Windows Security Events: [Generally available](../../sentinel/connect-windows-security-events.md?tabs=AMA)</li></ul> | <ul><li>No sign-up needed </li><li>No sign-up needed</li></ul> |
+| [Microsoft Sentinel](../../sentinel/overview.md) | <ul><li>Linux Syslog CEF (Common Event Format): Private preview</li><li>Windows Forwarding Event (WEF): [Public preview](../../sentinel/data-connectors-reference.md#windows-forwarded-events-preview)</li><li>Windows Security Events: [Generally available](../../sentinel/connect-windows-security-events.md?tabs=AMA)</li></ul> | <ul><li>[Sign-up link](https://aka.ms/AMAgent)</li><li>No sign-up needed </li><li>No sign-up needed</li></ul> |
The following table shows the current support for the Azure Monitor agent with Azure Monitor features.
The following table shows the current support for the Azure Monitor agent with A
| File based logs and Windows IIS logs | Private preview | [Sign-up link](https://aka.ms/amadcr-privatepreviews) |
| Windows Client OS installer | Private preview | [Sign-up link](https://aka.ms/amadcr-privatepreviews) |
| [VM insights](../vm/vminsights-overview.md) | Private preview | [Sign-up link](https://aka.ms/amadcr-privatepreviews) |
-| [Connect using private links](azure-monitor-agent-data-collection-endpoint.md) | Public preview | No sign-up needed |
The following table shows the current support for the Azure Monitor agent with Azure solutions.
The Azure Monitor agent extensions for Windows and Linux can communicate either
![Flowchart to determine the values of settings and protectedSettings parameters when you enable the extension.](media/azure-monitor-agent-overview/proxy-flowchart.png)
-2. After the values for the *settings* and *protectedSettings* parameters are determined, provide these additional parameters when you deploy the Azure Monitor agent by using PowerShell commands. The following examples are for Azure virtual machines.
-
- | Parameter | Value |
- |:|:|
- | settingsHashtable | A JSON object from the preceding flowchart converted to a hashtable. Skip if not applicable. An example is {"proxy":{"mode":"application","address":"http://[address]:[port]","auth": false}}. |
- | protectedSettingsHashtable | A JSON object from the preceding flowchart converted to a hashtable. Skip if not applicable. An example is {"proxy":{"username": "[username]","password": "[password]"}}. |
+2. After the values for the *settings* and *protectedSettings* parameters are determined, **provide these additional parameters** when you deploy the Azure Monitor agent by using PowerShell commands. Refer to the following examples.
# [Windows VM](#tab/PowerShellWindows)

```powershell
-$settingsHashtable = @{"proxy":{"mode":"application","address":"http://[address]:[port]","auth": false}};
-$protectedSettingsHashtable = @{"proxy":{"username": "[username]","password": "[password]"}};
+$settingsString = '{"proxy":{"mode":"application","address":"http://[address]:[port]","auth": true}}';
+$protectedSettingsString = '{"proxy":{"username":"[username]","password": "[password]"}}';
-Set-AzVMExtension -ExtensionName AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion 1.0 -Settings <settingsHashtable> -ProtectedSettings <protectedSettingsHashtable>
+Set-AzVMExtension -ExtensionName AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion 1.0 -Settings $settingsString -ProtectedSettings $protectedSettingsString
```

# [Linux VM](#tab/PowerShellLinux)

```powershell
-$settingsHashtable = @{"proxy":{"mode":"application","address":"http://[address]:[port]","auth": false}};
+$settingsString = '{"proxy":{"mode":"application","address":"http://[address]:[port]","auth": true}}';
-$protectedSettingsHashtable = @{"proxy":{"username": "[username]","password": "[password]"}};
+$protectedSettingsString = '{"proxy":{"username":"[username]","password": "[password]"}}';
-Set-AzVMExtension -ExtensionName AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion 1.5 -Settings <settingsHashtable> -ProtectedSettings <protectedSettingsHashtable>
+Set-AzVMExtension -ExtensionName AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion 1.5 -Settings $settingsString -ProtectedSettings $protectedSettingsString
```

# [Windows Arc enabled server](#tab/PowerShellWindowsArc)

```powershell
-$settingsHashtable = @{"proxy":{"mode":"application","address":"http://[address]:[port]","auth": false}};
-$protectedSettingsHashtable = @{"proxy":{"username": "[username]","password": "[password]"}};
+$settingsString = '{"proxy":{"mode":"application","address":"http://[address]:[port]","auth": true}}';
+$protectedSettingsString = '{"proxy":{"username":"[username]","password": "[password]"}}';
-New-AzConnectedMachineExtension -Name AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> -Settings <settingsHashtable> -ProtectedSettings <protectedSettingsHashtable>
+New-AzConnectedMachineExtension -Name AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> -Settings $settingsString -ProtectedSettings $protectedSettingsString
```

# [Linux Arc enabled server](#tab/PowerShellLinuxArc)

```powershell
-$settingsHashtable = @{"proxy":{"mode":"application","address":"http://[address]:[port]","auth": false}};
-$protectedSettingsHashtable = @{"proxy":{"username": "[username]","password": "[password]"}};
+$settingsString = '{"proxy":{"mode":"application","address":"http://[address]:[port]","auth": true}}';
+$protectedSettingsString = '{"proxy":{"username":"[username]","password": "[password]"}}';
-New-AzConnectedMachineExtension -Name AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> -Settings <settingsHashtable> -ProtectedSettings <protectedSettingsHashtable>
+New-AzConnectedMachineExtension -Name AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> -Settings $settingsString -ProtectedSettings $protectedSettingsString
```
azure-monitor Data Collection Rule Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-rule-azure-monitor-agent.md
Title: Configure data collection for the Azure Monitor agent description: Describes how to create a data collection rule to collect data from virtual machines using the Azure Monitor agent. Previously updated : 03/1/2022 Last updated : 03/16/2022
Additionally, choose the appropriate **Platform Type** which specifies the type
In the **Resources** tab, add the resources (virtual machines, virtual machine scale sets, Arc for servers) that should have the Data Collection Rule applied. The Azure Monitor Agent will be installed on resources that don't already have it installed, and will enable Azure Managed Identity as well.
-### Private link configuration using data collection endpoints (preview)
+### Private link configuration using data collection endpoints
If you need network isolation using private links for collecting data using agents from your resources, simply select existing endpoints (or create a new endpoint) from the same region for the respective resource(s) as shown below. See [how to create data collection endpoint](../essentials/data-collection-endpoint-overview.md).

[![Data Collection Rule virtual machines](media/data-collection-rule-azure-monitor-agent/data-collection-rule-virtual-machines-with-endpoint.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-virtual-machines-with-endpoint.png#lightbox)
azure-monitor Api Custom Events Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md
To determine how long data is kept, see [Data retention and privacy](./data-rete
## Reference docs
-* [ASP.NET reference](/dotnet/api/overview/azure/insights)
+* [.NET reference](/dotnet/api/overview/azure/insights)
* [Java reference](/java/api/overview/azure/appinsights)
* [JavaScript reference](https://github.com/Microsoft/ApplicationInsights-JS/blob/master/API-reference.md)

## SDK code
-* [ASP.NET Core SDK](https://github.com/Microsoft/ApplicationInsights-dotnet)
-* [ASP.NET](https://github.com/Microsoft/ApplicationInsights-dotnet)
+* [.NET](https://github.com/Microsoft/ApplicationInsights-dotnet)
* [Windows Server packages](https://github.com/Microsoft/ApplicationInsights-dotnet)
* [Java SDK](https://github.com/Microsoft/ApplicationInsights-Java)
* [Node.js SDK](https://github.com/Microsoft/ApplicationInsights-Node.js)
azure-monitor Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-ad-authentication.md
Below are SDKs/scenarios not supported in the Public Preview:
1. Follow the configuration guidance per language below.
-### [ASP.NET and .NET](#tab/net)
+### [.NET](#tab/net)
> [!NOTE]
> Support for Azure AD in the Application Insights .NET SDK is included starting with [version 2.18-Beta3](https://www.nuget.org/packages/Microsoft.ApplicationInsights/2.18.0-beta3).
config.SetAzureTokenCredential(credential);
```
-Below is an example of configuring the `TelemetryConfiguration` using ASP.NET Core:
+Below is an example of configuring the `TelemetryConfiguration` using .NET Core:
```csharp
services.Configure<TelemetryConfiguration>(config =>
{
Next steps should be to review the Application Insights resource's access contro
### Language specific troubleshooting
-### [ASP.NET and .NET](#tab/net)
+### [.NET](#tab/net)
#### Event Source
azure-monitor Sdk Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-connection-string.md
In this example, this connection string specifies explicit overrides for every s
## How to set a connection string

Connection Strings are supported in the following SDK versions:
-- .NET and .NET Core v2.12.0
+- .NET v2.12.0
- Java v2.5.1 and Java 3.0
- JavaScript v2.3.0
- NodeJS v1.5.0
var configuration = new TelemetryConfiguration
</ApplicationInsights>
```
-NetCore Explicitly Set:
+.NET Core Explicitly Set:
```csharp
public void ConfigureServices(IServiceCollection services)
{
public void ConfigureServices(IServiceCollection services)
}
```
-NetCore config.json:
+.NET Core config.json:
```json
{
azure-monitor Data Collection Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-endpoint-overview.md
Title: Data collection endpoints in Azure Monitor (preview)
+ Title: Data collection endpoints in Azure Monitor
description: Overview of data collection endpoints (DCEs) in Azure Monitor including their contents and structure and how you can create and work with them. Previously updated : 02/21/2022 Last updated : 03/16/2022
-# Data collection endpoints in Azure Monitor (preview)
+# Data collection endpoints in Azure Monitor
Data Collection Endpoints (DCEs) allow you to uniquely configure ingestion settings for Azure Monitor. This article provides an overview of data collection endpoints including their contents and structure and how you can create and work with them.

## Workflows that use DCEs

The following workflows currently use DCEs:

-- [Azure Monitor agent](../agents/data-collection-rule-azure-monitor-agent.md))
+- [Azure Monitor agent](../agents/data-collection-rule-azure-monitor-agent.md)
- [Custom logs](../logs/custom-logs-overview.md) ## Components of a data collection endpoint
azure-monitor Service Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/service-map.md
Linux:
- Memory(*)\\% Used Memory
- Network Adapter(*)\\Bytes Sent/sec
- Network Adapter(*)\\Bytes Received/sec
-
-To get the network performance data, you must also have enabled the Wire Data 2.0 solution in your workspace.
## Security integration
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
na Previously updated : 03/08/2022 Last updated : 03/18/2022 # Guidelines for Azure NetApp Files network planning
Azure NetApp Files volumes are designed to be contained in a special purpose sub
Azure NetApp Files standard network features are supported for the following regions: * Australia Central
+* East US 2
* France Central * North Central US * South Central US
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
na Previously updated : 02/16/2022 Last updated : 03/18/2022 # Solution architectures using Azure NetApp Files
This section provides references to SAP on Azure solutions.
* [SAP System on Oracle Database on Azure - Azure Architecture Center](/azure/architecture/example-scenario/apps/sap-on-oracle) * [Oracle Azure Virtual Machines DBMS deployment for SAP workload - Azure Virtual Machines](../virtual-machines/workloads/sap/dbms_guide_oracle.md#oracle-configuration-guidelines-for-sap-installations-in-azure-vms-on-linux) * [Deploy SAP AnyDB (Oracle 19c) with Azure NetApp Files](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/deploy-sap-anydb-oracle-19c-with-azure-netapp-files/ba-p/2064043)
+* [Manual Recovery Guide for SAP Oracle 19c on Azure VMs from Azure NetApp Files snapshot with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/manual-recovery-guide-for-sap-oracle-19c-on-azure-vms-from-azure/ba-p/3242408)
* [IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload using Azure NetApp Files](../virtual-machines/workloads/sap/dbms_guide_ibm.md#using-azure-netapp-files) ### SAP IQ-NLS
azure-netapp-files Backup Requirements Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-requirements-considerations.md
na Previously updated : 10/07/2021 Last updated : 03/18/2022 # Requirements and considerations for Azure NetApp Files backup
Azure NetApp Files backup in a region can only protect an Azure NetApp Files vol
* In a cross-region replication setting, Azure NetApp Files backup can be configured on a source volume only. It is not supported on a cross-region replication *destination* volume.
+* [Reverting a volume using snapshot revert](snapshots-revert-volume.md) is not supported on Azure NetApp Files volumes that have backups.
++

## Next steps

* [Understand Azure NetApp Files backup](backup-introduction.md)
azure-netapp-files Snapshots Revert Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-revert-volume.md
na Previously updated : 09/16/2021 Last updated : 03/18/2022
You can find the Revert Volume option in the Snapshots menu of a volume. After y
> [!IMPORTANT]
> Active filesystem data and snapshots that were taken after the selected snapshot will be lost. The snapshot revert operation will replace *all* the data in the targeted volume with the data in the selected snapshot. You should pay attention to the snapshot contents and creation date when you select a snapshot. You cannot undo the snapshot revert operation.
+## Considerations
+
+* Reverting a volume using snapshot revert is not supported on [Azure NetApp Files volumes that have backups](backup-requirements-considerations.md).
++

## Steps

1. Go to the **Snapshots** menu of a volume. Right-click the snapshot you want to use for the revert operation. Select **Revert volume**.
type the name of the volume, and click **Revert**.
* [Learn more about snapshots](snapshots-introduction.md)
* [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md)
* [Azure NetApp Files Snapshots 101 video](https://www.youtube.com/watch?v=uxbTXhtXCkw)
-* [Azure NetApp Files Snapshot Overview](https://anfcommunity.com/2021/01/31/azure-netapp-files-snapshot-overview/)
+* [Azure NetApp Files Snapshot Overview](https://anfcommunity.com/2021/01/31/azure-netapp-files-snapshot-overview/)
azure-resource-manager Copy Outputs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/copy-outputs.md
This article shows you how to create more than one value for an output in your A
You can also use copy loop with [resources](copy-resources.md), [properties in a resource](copy-properties.md), and [variables](copy-variables.md).
+> [!TIP]
+> We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [loops](../bicep/loops.md).
+
## Syntax

Add the `copy` element to the output section of your template to return a number of items. The copy element has the following general format:
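For instance, a minimal sketch of an outputs copy loop (the output name and the referenced storage account names are illustrative); the `input` expression is evaluated once per iteration to build the array:

```json
"outputs": {
  "storageEndpoints": {
    "type": "array",
    "copy": {
      "count": 3,
      "input": "[reference(concat('storage', copyIndex())).primaryEndpoints.blob]"
    }
  }
}
```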
The preceding example returns an array with the following values:
- [Property iteration in ARM templates](copy-properties.md)
- [Variable iteration in ARM templates](copy-variables.md)
- If you want to learn about the sections of a template, see [Understand the structure and syntax of ARM templates](./syntax.md).
-- To learn how to deploy your template, see [Deploy resources with ARM templates and Azure PowerShell](deploy-powershell.md).
+- To learn how to deploy your template, see [Deploy resources with ARM templates and Azure PowerShell](deploy-powershell.md).
azure-resource-manager Copy Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/copy-properties.md
You can only use copy loop with top-level resources, even when applying copy loo
You can also use copy loop with [resources](copy-resources.md), [variables](copy-variables.md), and [outputs](copy-outputs.md).
+> [!TIP]
+> We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [loops](../bicep/loops.md).
+
## Syntax

Add the `copy` element to the resources section of your template to set the number of items for a property. The copy element has the following general format:
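As a hedged sketch, a property copy loop that builds three data disks on a virtual machine (the resource name, API version, and disk size are illustrative); note that inside a property copy, `copyIndex` takes the loop name as an argument:

```json
{
  "type": "Microsoft.Compute/virtualMachines",
  "apiVersion": "2021-03-01",
  "name": "examplevm",
  "properties": {
    "storageProfile": {
      "copy": [
        {
          "name": "dataDisks",
          "count": 3,
          "input": {
            "lun": "[copyIndex('dataDisks')]",
            "createOption": "Empty",
            "diskSizeGB": 1023
          }
        }
      ]
    }
  }
}
```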
azure-resource-manager Copy Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/copy-resources.md
You can also use copy loop with [properties](copy-properties.md), [variables](co
If you need to specify whether a resource is deployed at all, see [condition element](conditional-resource-deployment.md). +
+> [!TIP]
+> We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [loops](../bicep/loops.md).
+
## Syntax

Add the `copy` element to the resources section of your template to deploy multiple instances of the resource. The `copy` element has the following general format:
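A minimal sketch of a resource copy loop, assuming a storage account resource (names, SKU, and count are illustrative); `copyIndex()` makes each resource name unique:

```json
"resources": [
  {
    "type": "Microsoft.Storage/storageAccounts",
    "apiVersion": "2021-04-01",
    "name": "[concat('storage', copyIndex())]",
    "location": "[resourceGroup().location]",
    "sku": { "name": "Standard_LRS" },
    "kind": "StorageV2",
    "copy": {
      "name": "storagecopy",
      "count": 3
    }
  }
]
```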
azure-resource-manager Copy Variables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/copy-variables.md
This article shows you how to create more than one value for a variable in your
You can also use copy with [resources](copy-resources.md), [properties in a resource](copy-properties.md), and [outputs](copy-outputs.md).
+> [!TIP]
+> We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [loops](../bicep/loops.md).
+
## Syntax

The copy element has the following general format:
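A minimal sketch of a variable copy loop (the variable name and generated values are illustrative); the result is a `diskNames` variable holding an array of three generated strings:

```json
"variables": {
  "copy": [
    {
      "name": "diskNames",
      "count": 3,
      "input": "[concat('disk-', copyIndex('diskNames'))]"
    }
  ]
}
```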
azure-resource-manager Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deploy-cli.md
The deployment commands changed in Azure CLI version 2.2.0. The examples in this
If you don't have Azure CLI installed, you can use Azure Cloud Shell. For more information, see [Deploy ARM templates from Azure Cloud Shell](deploy-cloud-shell.md).
+> [!TIP]
+> We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [How to deploy resources with Bicep and Azure CLI](../bicep/deploy-cli.md).
++ [!INCLUDE [permissions](../../../includes/template-deploy-permissions.md)] ## Deployment scope
az deployment group create \
--resource-group testgroup \
  --template-file <path-to-template> \
  --parameters $params
-```
+```
However, if you're using Azure CLI with Windows Command Prompt (CMD) or PowerShell, set the variable to a JSON string. Escape the quotation marks: `$params = '{ \"prefix\": {\"value\":\"start\"}, \"suffix\": {\"value\":\"end\"} }'`.
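A minimal sketch of the two quoting styles (the `prefix`/`suffix` parameters are the hypothetical ones from the example above):

```shell
# Bash or Zsh: single quotes preserve the inner double quotes as-is
params='{ "prefix": {"value":"start"}, "suffix": {"value":"end"} }'
echo "$params"

# CMD or PowerShell would instead need the inner quotes escaped:
#   $params = '{ \"prefix\": {\"value\":\"start\"}, \"suffix\": {\"value\":\"end\"} }'
```

Either way, the string passed to `--parameters` must reach the CLI as valid JSON, which is why the escaping differs per shell.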
To deploy a template with multi-line strings or comments using Azure CLI with ve
* To roll back to a successful deployment when you get an error, see [Rollback on error to successful deployment](rollback-on-error.md). * To specify how to handle resources that exist in the resource group but aren't defined in the template, see [Azure Resource Manager deployment modes](deployment-modes.md). * To understand how to define parameters in your template, see [Understand the structure and syntax of ARM templates](./syntax.md).
-* For tips on resolving common deployment errors, see [Troubleshoot common Azure deployment errors with Azure Resource Manager](common-deployment-errors.md).
+* For tips on resolving common deployment errors, see [Troubleshoot common Azure deployment errors with Azure Resource Manager](common-deployment-errors.md).
azure-resource-manager Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deploy-powershell.md
This article explains how to use Azure PowerShell with Azure Resource Manager templates (ARM templates) to deploy your resources to Azure. If you aren't familiar with the concepts of deploying and managing your Azure solutions, see [template deployment overview](overview.md).
+> [!TIP]
+> We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [Deploy resources with Bicep and Azure PowerShell](../bicep/deploy-powershell.md).
+ ## Prerequisites You need a template to deploy. If you don't already have one, download and save an [example template](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.storage/storage-account-create/azuredeploy.json) from the Azure Quickstart templates repo. The local file name used in this article is _C:\MyTemplates\azuredeploy.json_.
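As a sketch, a resource-group deployment of the sample template above uses `New-AzResourceGroupDeployment` (the resource group name `ExampleGroup` is illustrative and must already exist):

```powershell
New-AzResourceGroupDeployment `
  -ResourceGroupName ExampleGroup `
  -TemplateFile C:\MyTemplates\azuredeploy.json
```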
azure-resource-manager Linked Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/linked-templates.md
For a tutorial, see [Tutorial: Deploy a linked template](./deployment-tutorial-l
> If the linked or nested template targets a different resource group, that deployment uses incremental mode. >
+> [!TIP]
+> We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [modules](../bicep/modules.md).
+ ## Nested template To nest a template, add a [deployments resource](/azure/templates/microsoft.resources/deployments) to your main template. In the `template` property, specify the template syntax.
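A minimal sketch of a nested deployment resource (the name `nestedTemplate1` is illustrative; the inner `resources` array is left empty here):

```json
{
  "type": "Microsoft.Resources/deployments",
  "apiVersion": "2021-04-01",
  "name": "nestedTemplate1",
  "properties": {
    "mode": "Incremental",
    "template": {
      "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
      "contentVersion": "1.0.0.0",
      "resources": []
    }
  }
}
```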
azure-resource-manager Parameter Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/parameter-files.md
Rather than passing parameters as inline values in your script, you can use a JSON file that contains the parameter values. This article shows how to create a parameter file that you use with a JSON template.
+> [!TIP]
+> We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [parameter files](../bicep/parameter-files.md).
+ ## Parameter file A parameter file uses the following format:
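As a sketch of that format, a parameter file is a small JSON document keyed by parameter name (the `storagePrefix` parameter and its value are illustrative):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storagePrefix": {
      "value": "contoso"
    }
  }
}
```

Each name under `parameters` must match a parameter declared in the template being deployed.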
If your template includes a parameter with the same name as one of the parameter
## Next steps - For more information about how to define parameters in a template, see [Parameters in ARM templates](./parameters.md).-- For more information about using values from a key vault, see [Use Azure Key Vault to pass secure parameter value during deployment](key-vault-parameter.md).
+- For more information about using values from a key vault, see [Use Azure Key Vault to pass secure parameter value during deployment](key-vault-parameter.md).
azure-resource-manager Resource Dependency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/resource-dependency.md
When deploying resources, you may need to make sure some resources exist before
Azure Resource Manager evaluates the dependencies between resources, and deploys them in their dependent order. When resources aren't dependent on each other, Resource Manager deploys them in parallel. You only need to define dependencies for resources that are deployed in the same template.
+> [!TIP]
+> We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [resource dependencies](../bicep/resource-dependencies.md).
+ ## dependsOn Within your Azure Resource Manager template (ARM template), the `dependsOn` element enables you to define one resource as a dependent on one or more resources. Its value is a JavaScript Object Notation (JSON) array of strings, each of which is a resource name or ID. The array can include resources that are [conditionally deployed](conditional-resource-deployment.md). When a conditional resource isn't deployed, Azure Resource Manager automatically removes it from the required dependencies.
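As a minimal sketch, `dependsOn` on a resource lists the resources that must exist first (the network interface name `myNic` is hypothetical):

```json
"dependsOn": [
  "[resourceId('Microsoft.Network/networkInterfaces', 'myNic')]"
]
```

Using `resourceId` rather than a bare name avoids ambiguity when multiple resources share a name across types.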
For information about assessing the deployment order and resolving dependency er
* For recommendations when setting dependencies, see [ARM template best practices](./best-practices.md). * To learn about troubleshooting dependencies during deployment, see [Troubleshoot common Azure deployment errors with Azure Resource Manager](common-deployment-errors.md). * To learn about creating Azure Resource Manager templates, see [Understand the structure and syntax of ARM templates](./syntax.md).
-* For a list of the available functions in a template, see [ARM template functions](template-functions.md).
+* For a list of the available functions in a template, see [ARM template functions](template-functions.md).
azure-resource-manager Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/syntax.md
The available properties for a parameter are:
For examples of how to use parameters, see [Parameters in ARM templates](./parameters.md).
+In Bicep, see [parameters](../bicep/file.md#parameters).
+ ## Variables In the `variables` section, you construct values that can be used throughout your template. You don't need to define variables, but they often simplify your template by reducing complex expressions. The format of each variable matches one of the [data types](data-types.md).
For information about using `copy` to create several values for a variable, see
For examples of how to use variables, see [Variables in ARM template](./variables.md).
+In Bicep, see [variables](../bicep/file.md#variables).
+ ## Functions Within your template, you can create your own functions. These functions are available for use in your template. Typically, you define complicated expressions that you don't want to repeat throughout your template. You create the user-defined functions from expressions and [functions](template-functions.md) that are supported in templates.
When defining a user function, there are some restrictions:
For examples of how to use custom functions, see [User-defined functions in ARM template](./user-defined-functions.md).
+In Bicep, user-defined functions aren't supported. Bicep does support a variety of [functions](../bicep/bicep-functions.md) and [operators](../bicep/operators.md).
+ ## Resources In the `resources` section, you define the resources that are deployed or updated.
You define resources with the following structure:
| properties |No |Resource-specific configuration settings. The values for the properties are the same as the values you provide in the request body for the REST API operation (PUT method) to create the resource. You can also specify a copy array to create several instances of a property. To determine available values, see [template reference](/azure/templates/). |
| resources |No |Child resources that depend on the resource being defined. Only provide resource types that are permitted by the schema of the parent resource. Dependency on the parent resource isn't implied. You must explicitly define that dependency. See [Set name and type for child resources](child-resource-name-type.md). |
+In Bicep, see [resources](../bicep/file.md#resources).
+ ## Outputs In the `outputs` section, you specify values that are returned from deployment. Typically, you return values from resources that were deployed.
The following example shows the structure of an output definition:
For examples of how to use outputs, see [Outputs in ARM template](./outputs.md).
+In Bicep, see [outputs](../bicep/file.md#outputs).
+ <a id="comments"></a> ## Comments and metadata
In Visual Studio Code, the [Azure Resource Manager Tools extension](quickstart-c
![Visual Studio Code Azure Resource Manager template mode](./media/template-syntax/resource-manager-template-editor-mode.png)
+In Bicep, see [comments](../bicep/file.md#comments).
+ ### Metadata You can add a `metadata` object almost anywhere in your template. Resource Manager ignores the object, but your JSON editor may warn you that the property isn't valid. In the object, define the properties you need.
You can break a string into multiple lines. For example, see the `location` prop
], ```
+In Bicep, see [multi-line strings](../bicep/file.md#multi-line-strings).
+ ## Next steps * To view complete templates for many different types of solutions, see the [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/).
azure-sql Performance Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/performance-guidance.md
Previously updated : 07/26/2021 Last updated : 03/18/2022 # Tune applications and databases for performance in Azure SQL Database and Azure SQL Managed Instance [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
Although Azure SQL Database and Azure SQL Managed Instance service tiers are des
Databases that exceed the resources of the highest Premium compute size might benefit from scaling out the workload. For more information, see [Cross-database sharding](#cross-database-sharding) and [Functional partitioning](#functional-partitioning).

-- **Applications that have sub-optimal queries**
+- **Applications that have suboptimal queries**
Applications, especially those in the data access layer, that have poorly tuned queries might not benefit from a higher compute size. This includes queries that lack a WHERE clause, have missing indexes, or have outdated statistics. These applications benefit from standard query performance-tuning techniques. For more information, see [Missing indexes](#identifying-and-adding-missing-indexes) and [Query tuning and hinting](#query-tuning-and-hinting).

-- **Applications that have sub-optimal data access design**
+- **Applications that have suboptimal data access design**
Applications that have inherent data access concurrency issues, for example deadlocking, might not benefit from a higher compute size. Consider reducing round trips against the database by caching data on the client side with the Azure Caching service or another caching technology. See [Application tier caching](#application-tier-caching).
After it's created, that same SELECT statement picks a different plan, which use
![A query plan with corrected indexes](./media/performance-guidance/query_plan_corrected_indexes.png)
-The key insight is that the IO capacity of a shared, commodity system is more limited than that of a dedicated server machine. There's a premium on minimizing unnecessary IO to take maximum advantage of the system in the resources of each compute size of the service tiers. Appropriate physical database design choices can significantly improve the latency for individual queries, improve the throughput of concurrent requests handled per scale unit, and minimize the costs required to satisfy the query. For more information about the missing index DMVs, see [sys.dm_db_missing_index_details](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-missing-index-details-transact-sql).
+The key insight is that the IO capacity of a shared, commodity system is more limited than that of a dedicated server machine. There's a premium on minimizing unnecessary IO to take maximum advantage of the system in the resources of each compute size of the service tiers. Appropriate physical database design choices can significantly improve the latency for individual queries, improve the throughput of concurrent requests handled per scale unit, and minimize the costs required to satisfy the query.
+
+For more information about tuning indexes using missing index requests, see [Tune nonclustered indexes with missing index suggestions](/sql/relational-databases/indexes/tune-nonclustered-missing-index-suggestions).
### Query tuning and hinting The query optimizer in Azure SQL Database and Azure SQL Managed Instance is similar to the traditional SQL Server query optimizer. Most of the best practices for tuning queries and understanding the reasoning model limitations for the query optimizer also apply to Azure SQL Database and Azure SQL Managed Instance. If you tune queries in Azure SQL Database and Azure SQL Managed Instance, you might get the additional benefit of reducing aggregate resource demands. Your application might be able to run at a lower cost than an un-tuned equivalent because it can run at a lower compute size.
-An example that is common in SQL Server and which also applies to Azure SQL Database and Azure SQL Managed Instance is how the query optimizer "sniffs" parameters. During compilation, the query optimizer evaluates the current value of a parameter to determine whether it can generate a more optimal query plan. Although this strategy often can lead to a query plan that is significantly faster than a plan compiled without known parameter values, currently it works imperfectly both in SQL Server, in Azure SQL Database, and Azure SQL Managed Instance. Sometimes the parameter is not sniffed, and sometimes the parameter is sniffed but the generated plan is sub-optimal for the full set of parameter values in a workload. Microsoft includes query hints (directives) so that you can specify intent more deliberately and override the default behavior of parameter sniffing. Often, if you use hints, you can fix cases in which the default SQL Server, Azure SQL Database, and Azure SQL Managed Instance behavior is imperfect for a specific customer workload.
+An example that is common in SQL Server and which also applies to Azure SQL Database and Azure SQL Managed Instance is how the query optimizer "sniffs" parameters. During compilation, the query optimizer evaluates the current value of a parameter to determine whether it can generate a more optimal query plan. Although this strategy often can lead to a query plan that is significantly faster than a plan compiled without known parameter values, currently it works imperfectly both in SQL Server, in Azure SQL Database, and Azure SQL Managed Instance. Sometimes the parameter is not sniffed, and sometimes the parameter is sniffed but the generated plan is suboptimal for the full set of parameter values in a workload. Microsoft includes query hints (directives) so that you can specify intent more deliberately and override the default behavior of parameter sniffing. Often, if you use hints, you can fix cases in which the default SQL Server, Azure SQL Database, and Azure SQL Managed Instance behavior is imperfect for a specific customer workload.
-The next example demonstrates how the query processor can generate a plan that is sub-optimal both for performance and resource requirements. This example also shows that if you use a query hint, you can reduce query run time and resource requirements for your database:
+The next example demonstrates how the query processor can generate a plan that is suboptimal both for performance and resource requirements. This example also shows that if you use a query hint, you can reduce query run time and resource requirements for your database:
```sql DROP TABLE psptest1;
CREATE TABLE t1 (col1 int primary key, col2 int, col3 binary(200));
GO ```
-The setup code creates a table that has skewed data distribution. The optimal query plan differs based on which parameter is selected. Unfortunately, the plan caching behavior doesn't always recompile the query based on the most common parameter value. So, it's possible for a sub-optimal plan to be cached and used for many values, even when a different plan might be a better plan choice on average. Then the query plan creates two stored procedures that are identical, except that one has a special query hint.
+The setup code creates a table that has skewed data distribution. The optimal query plan differs based on which parameter is selected. Unfortunately, the plan caching behavior doesn't always recompile the query based on the most common parameter value. So, it's possible for a suboptimal plan to be cached and used for many values, even when a different plan might be a better plan choice on average. Then the query plan creates two stored procedures that are identical, except that one has a special query hint.
```sql -- Prime Procedure Cache with scan plan
Each part of this example attempts to run a parameterized insert statement 1,000
![Query tuning by using a scan plan](./media/performance-guidance/query_tuning_1.png)
-Because we executed the procedure by using the value 1, the resulting plan was optimal for the value 1 but was sub-optimal for all other values in the table. The result likely isn't what you would want if you were to pick each plan randomly, because the plan performs more slowly and uses more resources.
+Because we executed the procedure by using the value 1, the resulting plan was optimal for the value 1 but was suboptimal for all other values in the table. The result likely isn't what you would want if you were to pick each plan randomly, because the plan performs more slowly and uses more resources.
If you run the test with `SET STATISTICS IO` set to `ON`, the logical scan work in this example is done behind the scenes. You can see that there are 1,148 reads done by the plan (which is inefficient, if the average case is to return just one row):
ORDER BY start_time DESC
![Query tuning example results](./media/performance-guidance/query_tuning_4.png) > [!NOTE]
-> Although the volume in this example is intentionally small, the effect of sub-optimal parameters can be substantial, especially on larger databases. The difference, in extreme cases, can be between seconds for fast cases and hours for slow cases.
+> Although the volume in this example is intentionally small, the effect of suboptimal parameters can be substantial, especially on larger databases. The difference, in extreme cases, can be between seconds for fast cases and hours for slow cases.
You can examine **sys.resource_stats** to determine whether the resource for a test uses more or fewer resources than another test. When you compare data, separate the timing of tests so that they are not in the same 5-minute window in the **sys.resource_stats** view. The goal of the exercise is to minimize the total amount of resources used, and not to minimize the peak resources. Generally, optimizing a piece of code for latency also reduces resource consumption. Make sure that the changes you make to an application are necessary, and that the changes don't negatively affect the customer experience for someone who might be using query hints in the application.
To learn more about the script and get started, visit the [wiki](https://aka.ms/
- Read [What is an Azure elastic pool?](elastic-pool-overview.md)
- Discover [When to consider an elastic pool](elastic-pool-overview.md)
- Read about [Monitoring Microsoft Azure SQL Database and Azure SQL Managed Instance performance using dynamic management views](monitoring-with-dmvs.md)
-- Learn to [Diagnose and troubleshoot high CPU on Azure SQL Database](high-cpu-diagnose-troubleshoot.md)
+- Learn to [Diagnose and troubleshoot high CPU on Azure SQL Database](high-cpu-diagnose-troubleshoot.md)
+- [Tune nonclustered indexes with missing index suggestions](/sql/relational-databases/indexes/tune-nonclustered-missing-index-suggestions)
azure-sql Identify Query Performance Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/identify-query-performance-issues.md
Title: Types of query performance issues in Azure SQL Database
-description: In this article, learn about types of query performance issues in Azure SQL Database and also learn how to identify and resolve queries with these issues
+ Title: Types of query performance issues
+
+description: Learn about types of query performance issues in Azure SQL Database and Azure SQL Managed Instance, and how to identify and resolve queries with these issues.
Previously updated : 11/04/2021 Last updated : 03/18/2022
-# Detectable types of query performance bottlenecks in Azure SQL Database
+# Detectable types of query performance bottlenecks in Azure SQL Database and Azure SQL Managed Instance
[!INCLUDE[appliesto-sqldb-sqlmi](includes/appliesto-sqldb-sqlmi.md)] When trying to resolve a performance bottleneck, start by determining whether the bottleneck is occurring while the query is in a running state or a waiting state. Different resolutions apply depending upon this determination. Use the following diagram to help understand the factors that can cause either a running-related problem or a waiting-related problem. Problems and resolutions relating to each type of problem are discussed in this article.
-You can use Azure SQL Database [Intelligent Insights](database/intelligent-insights-troubleshoot-performance.md#detectable-database-performance-patterns) or SQL Server [DMVs](database/monitoring-with-dmvs.md) to detect these types of performance bottlenecks.
+You can use [Intelligent Insights](database/intelligent-insights-troubleshoot-performance.md#detectable-database-performance-patterns) or SQL Server [DMVs](database/monitoring-with-dmvs.md) to detect these types of performance bottlenecks.
![Workload states](./media/identify-query-performance-issues/workload-states.png)
You can use Azure SQL Database [Intelligent Insights](database/intelligent-insig
- Locks (blocking)
- I/O
-- Contention related to TempDB usage
+- Contention related to tempdb usage
- Memory grant waits ## Compilation problems resulting in a suboptimal query plan
A suboptimal plan generated by the SQL Query Optimizer may be the cause of slow
- Identify any missing indexes using one of these methods: - Use [Intelligent Insights](database/intelligent-insights-troubleshoot-performance.md#missing-index).
- - [Database Advisor](database/database-advisor-implement-performance-recommendations.md) for single and pooled databases.
- - DMVs. This example shows you the impact of a missing index, how to detect a [missing indexes](database/performance-guidance.md#identifying-and-adding-missing-indexes) using DMVs, and the impact of implementing the missing index recommendation.
-- Try to apply [query hints](/sql/t-sql/queries/hints-transact-sql-query), [update statistics](/sql/t-sql/statements/update-statistics-transact-sql), or [rebuild indexes](/sql/relational-databases/indexes/reorganize-and-rebuild-indexes) to get the better plan. Enable [automatic plan correction](../azure-sql/database/automatic-tuning-overview.md) in Azure SQL Database to automatically mitigate these problems.
+ - Review recommendations in the [Database Advisor](database/database-advisor-implement-performance-recommendations.md) for single and pooled databases in Azure SQL Database. You may also choose to enable [automatic tuning options for tuning indexes](database/automatic-tuning-overview.md#automatic-tuning-options) for Azure SQL Database.
+ - Missing indexes in DMVs and query execution plans. This article shows you how to [detect and tune nonclustered indexes using missing index requests](/sql/relational-databases/indexes/tune-nonclustered-missing-index-suggestions).
+- Try to [update statistics](/sql/t-sql/statements/update-statistics-transact-sql) or [rebuild indexes](/sql/relational-databases/indexes/reorganize-and-rebuild-indexes) to get the better plan. Enable [automatic plan correction](../azure-sql/database/automatic-tuning-overview.md) in Azure SQL Database or Azure SQL Managed Instance to automatically mitigate these problems.
+- As an advanced troubleshooting step, use [Query Store hints](/sql/relational-databases/performance/query-store-hints) to apply [query hints](/sql/t-sql/queries/hints-transact-sql-query) using the Query Store, without making code changes.
This [example](database/performance-guidance.md#query-tuning-and-hinting) shows the impact of a suboptimal query plan due to a parameterized query, how to detect this condition, and how to use a query hint to resolve.
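As a sketch of the Query Store hints approach mentioned above, you look up the `query_id` in the Query Store catalog views and attach a hint to it without touching application code (the `query_id` of 42 and the search text are hypothetical):

```sql
-- Find the query_id for the statement you want to influence
SELECT q.query_id, qt.query_sql_text
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q
    ON qt.query_text_id = q.query_text_id
WHERE qt.query_sql_text LIKE N'%<your query text>%';

-- Apply a hint to that query (query_id 42 is hypothetical)
EXEC sys.sp_query_store_set_hints
    @query_id = 42,
    @query_hints = N'OPTION(RECOMPILE)';
```

The hint persists in the Query Store and is applied on future compilations of that query.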
Once you have eliminated a suboptimal plan and *Waiting-related* problems that a
- **IO problems** Queries might be waiting for the pages to be written to the data or log files. In this case, check the `INSTANCE_LOG_RATE_GOVERNOR`, `WRITE_LOG`, or `PAGEIOLATCH_*` wait statistics in the DMV. See using DMVs to [identify IO performance issues](database/monitoring-with-dmvs.md#identify-io-performance-issues).
-- **TempDB problems**
+- **Tempdb problems**
- If the workload uses temporary tables or there are TempDB spills in the plans, the queries might have a problem with TempDB throughput. See using DMVs to [identity TempDB issues](database/monitoring-with-dmvs.md#identify-tempdb-performance-issues).
+ If the workload uses temporary tables or there are `tempdb` spills in the plans, the queries might have a problem with `tempdb` throughput. To investigate further, review [identify tempdb issues](database/monitoring-with-dmvs.md#identify-tempdb-performance-issues).
- **Memory-related problems** If the workload doesn't have enough memory, the page life expectancy might drop, or the queries might get less memory than they need. In some cases, built-in intelligence in Query Optimizer will fix memory-related problems. See using DMVs to [identify memory grant issues](database/monitoring-with-dmvs.md#identify-memory-grant-wait-performance-issues). For more information and sample queries, see [Troubleshoot out of memory errors with Azure SQL Database](database/troubleshoot-memory-errors-issues.md).
DMVs that track Query Store and wait statistics show results for only successful
- [Diagnose and troubleshoot high CPU on Azure SQL Database](database/high-cpu-diagnose-troubleshoot.md) - [SQL Database monitoring and tuning overview](database/monitor-tune-overview.md) - [Monitoring Microsoft Azure SQL Database and Azure SQL Managed Instance performance using dynamic management views](database/monitoring-with-dmvs.md)
+- [Tune nonclustered indexes with missing index suggestions](/sql/relational-databases/indexes/tune-nonclustered-missing-index-suggestions)
azure-sql Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/backup-restore.md
For more information on how to configure Automated Backup for SQL VMs, see one o
- **Consolidated email alerts for failures**: Configure consolidated email notifications for any failures. - **Azure role-based access control**: Determine who can manage backup and restore operations through the portal.
-For a quick overview of how it works along with a demo, watch the following video:
-
-> [!VIDEO https://www.youtube.com/embed/wmbANpHos_E]
- This Azure Backup solution for SQL VMs is generally available. For more information, see [Back up SQL Server database to Azure](../../../backup/backup-azure-sql-database.md). ## <a id="manual"></a> Manual backup
bastion Connect Native Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/connect-native-client-windows.md
Previously updated : 03/03/2022 Last updated : 03/17/2022
-# Connect to a VM using a native client (Preview)
+# Connect to a VM using a native client
This article helps you configure your Bastion deployment, and then connect to a VM in the VNet using the native client (SSH or RDP) on your local computer. The native client feature lets you connect to your target VMs via Bastion using Azure CLI, and expands your sign-in options to include local SSH key pair and Azure Active Directory (Azure AD). Additionally with this feature, you can now also upload or download files, depending on the connection type and client.
There are two different sets of connection instructions.
* Set up concurrent VM sessions with Bastion. * [Upload files](vm-upload-download-native.md#tunnel-command) to your target VM from your local computer. File download from the target VM to the local client is currently not supported for this command.
-**Preview limitations**
- Currently, this feature has the following limitation: * Signing in using an SSH private key stored in Azure Key Vault isn't supported with this feature. Before signing in to your Linux VM using an SSH key pair, download your private key to a file on your local machine.
Use the example that corresponds to the type of target VM to which you want to c
``` **SSH:**-
- The SSH CLI extension is currently in Preview. The extension can be installed by running, ```az extension add --name ssh```. To sign in using an SSH key pair, use the following example.
+
+ The extension can be installed by running, ```az extension add --name ssh```. To sign in using an SSH key pair, use the following example.
```azurecli az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>" --auth-type "ssh-key" --username "<Username>" --ssh-key "<Filepath>"
Use the example that corresponds to the type of target VM to which you want to c
**SSH:**
- The SSH CLI extension is currently in Preview. The extension can be installed by running, ```az extension add --name ssh```. To sign in using an SSH key pair, use the following example.
+ The extension can be installed by running, ```az extension add --name ssh```. To sign in using an SSH key pair, use the following example.
```azurecli az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>" --auth-type "ssh-key" --username "<Username>" --ssh-key "<Filepath>"
bastion Vm Upload Download Native https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/vm-upload-download-native.md
Previously updated : 03/07/2022 Last updated : 03/17/2022 # Customer intent: I want to upload or download files using Bastion.
-# File upload and download to a VM using a native client (Preview)
+# File upload and download to a VM using a native client
Azure Bastion offers support for file transfer between your target VM and local computer using Bastion and a native RDP or native SSH client. To learn more about native client support, refer to [Connect to a VM using the native client](connect-native-client-windows.md). While it may be possible to use third-party clients and tools to upload or download files, this article focuses on working with supported native clients.
cognitive-services Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-to-text.md
keywords: speech to text, speech to text software
In this overview, you learn about the benefits and capabilities of the speech-to-text feature of the Speech service, which is part of Azure Cognitive Services.
-Speech-to-text, also known as speech recognition, enables real-time transcription of audio streams into text. Your applications, tools, or devices can consume, display, and take action on this text as command input.
+Speech-to-text, also known as speech recognition, enables real-time transcription of audio streams into text. Your applications, tools, or devices can consume, display, and take action on this text as command input.
-This feature uses the same recognition technology that Microsoft uses for Cortana and Office products. It seamlessly works with the <a href="./speech-translation.md" target="_blank">translation </a> and <a href="./text-to-speech.md" target="_blank">text-to-speech </a> offerings of the Speech service. For a full list of available speech-to-text languages, see [Language and voice support for the Speech service](language-support.md#speech-to-text).
+This feature uses the same recognition technology that Microsoft uses for Cortana and Office products. It seamlessly works with the <a href="./speech-translation.md">translation </a> and <a href="./text-to-speech.md">text-to-speech </a> offerings of the Speech service. For a full list of available speech-to-text languages, see [Language and voice support for the Speech service](language-support.md#speech-to-text).
-The speech-to-text feature defaults to using the Universal Language Model. This model was trained through Microsoft-owned data and is deployed in the cloud. It's optimal for conversational and dictation scenarios.
+The speech-to-text feature defaults to using the Universal Language Model. This model was trained through Microsoft-owned data and is deployed in the cloud. It's optimal for conversational and dictation scenarios.
When you're using speech-to-text for recognition and transcription in a unique environment, you can create and train custom acoustic, language, and pronunciation models. Customization is helpful for addressing ambient noise or industry-specific vocabulary.
-> [!NOTE]
-> Bing Speech was decommissioned on October 15, 2019. If your applications, tools, or products are using the Bing Speech APIs, see [Migrate from Bing Speech to the Speech service](how-to-migrate-from-bing-speech.md).
## Get started
Sample code for the Speech SDK is available on GitHub. These samples cover commo
## Customization
-In addition to the standard Speech service model, you can create custom models. Customization helps to overcome speech recognition barriers such as speaking style, vocabulary, and background noise. For more information, see [Custom Speech](./custom-speech-overview.md).
+In addition to the standard Speech service model, you can create custom models. Customization helps to overcome speech recognition barriers such as speaking style, vocabulary, and background noise. For more information, see [Custom Speech](./custom-speech-overview.md).
Customization options vary by language or locale. To verify support, see [Language and voice support for the Speech service](./language-support.md).
The [Speech SDK](speech-sdk.md) provides most of the functionalities that you ne
Use the following list to find the appropriate Speech SDK reference docs:
-- <a href="https://aka.ms/csspeech/csharpref" target="_blank" rel="noopener">C# SDK </a>
-- <a href="https://aka.ms/csspeech/cppref" target="_blank" rel="noopener">C++ SDK </a>
-- <a href="https://aka.ms/csspeech/javaref" target="_blank" rel="noopener">Java SDK </a>
-- <a href="https://aka.ms/csspeech/pythonref" target="_blank" rel="noopener">Python SDK</a>
-- <a href="https://aka.ms/csspeech/javascriptref" target="_blank" rel="noopener">JavaScript SDK</a>
-- <a href="https://aka.ms/csspeech/objectivecref" target="_blank" rel="noopener">Objective-C SDK </a>
+- <a href="https://aka.ms/csspeech/csharpref">C# SDK </a>
+- <a href="https://aka.ms/csspeech/cppref">C++ SDK </a>
+- <a href="https://aka.ms/csspeech/javaref">Java SDK </a>
+- <a href="https://aka.ms/csspeech/pythonref">Python SDK</a>
+- <a href="https://aka.ms/csspeech/javascriptref">JavaScript SDK</a>
+- <a href="https://aka.ms/csspeech/objectivecref">Objective-C SDK </a>
> [!TIP]
> The Speech service SDK is actively maintained and updated. To track changes, updates, and feature additions, see the [Speech SDK release notes](releasenotes.md).
For speech-to-text REST APIs, see the following resources:
- [REST API: Speech-to-text](rest-speech-to-text.md)
- [REST API: Pronunciation assessment](rest-speech-to-text.md#pronunciation-assessment-parameters)
-- <a href="https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0" target="_blank" rel="noopener">REST API: Batch transcription and customization </a>
+- <a href="https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0">REST API: Batch transcription and customization </a>
## Next steps
cognitive-services Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/text-to-speech.md
keywords: text to speech
# What is text-to-speech?
-In this overview, you learn about the benefits and capabilities of the text-to-speech feature of the Speech service, which is part of Azure Cognitive Services.
+In this overview, you learn about the benefits and capabilities of the text-to-speech feature of the Speech service, which is part of Azure Cognitive Services.
Text-to-speech enables your applications, tools, or devices to convert text into humanlike synthesized speech. The text-to-speech capability is also known as speech synthesis. Use humanlike prebuilt neural voices out of the box, or create a custom neural voice that's unique to your product or brand. For a full list of supported voices, languages, and locales, see [Language and voice support for the Speech service](language-support.md#text-to-speech).
-> [!NOTE]
->
-> Bing Speech was decommissioned on October 15, 2019. If your applications, tools, or products are using the Bing Speech APIs or Custom Speech, see [Migrate from Bing Speech to the Speech service](how-to-migrate-from-bing-speech.md).
- ## Core features
-Text-to-speech includes the following features:
+Text-to-speech includes the following features:
-| Feature| Summary | Demo |
-|--|-||
+| Feature | Summary | Demo |
+| | | |
| Prebuilt neural voice (called *Neural* on the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/)) | Highly natural out-of-the-box voices. Create an Azure account and Speech service subscription, and then use the [Speech SDK](./get-started-text-to-speech.md) or visit the [Speech Studio portal](https://speech.microsoft.com/portal) and select prebuilt neural voices to get started. Check the [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). | Check the [voice samples](https://azure.microsoft.com/services/cognitive-services/text-to-speech/#overview) and determine the right voice for your business needs. |
| Custom neural voice (called *Custom Neural* on the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/)) | Easy-to-use self-service for creating a natural brand voice, with limited access for responsible use. Create an Azure account and Speech service subscription (with the S0 tier), and [apply](https://aka.ms/customneural) to use the custom neural feature. After you've been granted access, visit the [Speech Studio portal](https://speech.microsoft.com/portal) and select **Custom Voice** to get started. Check the [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). | Check the [voice samples](https://aka.ms/customvoice). |

### More about neural text-to-speech features

The text-to-speech feature of the Speech service on Azure has been fully upgraded to the neural text-to-speech engine. This engine uses deep neural networks to make the voices of computers nearly indistinguishable from the recordings of people. With the clear articulation of words, neural text-to-speech significantly reduces listening fatigue when users interact with AI systems.
-The patterns of stress and intonation in spoken language are called _prosody_. Traditional text-to-speech systems break down prosody into separate linguistic analysis and acoustic prediction steps that are governed by independent models. That can result in muffled, buzzy voice synthesis.
+The patterns of stress and intonation in spoken language are called _prosody_. Traditional text-to-speech systems break down prosody into separate linguistic analysis and acoustic prediction steps that are governed by independent models. That can result in muffled, buzzy voice synthesis.
Here's more information about neural text-to-speech features in the Speech service, and how they overcome the limits of traditional text-to-speech systems:
Here's more information about neural text-to-speech features in the Speech servi
- Make interactions with chatbots and voice assistants more natural and engaging. - Convert digital texts such as e-books into audiobooks.
- - Enhance in-car navigation systems.
-
+ - Enhance in-car navigation systems.
+ For a full list of platform neural voices, see [Language and voice support for the Speech service](language-support.md#text-to-speech).
-* **Fine-tuning text-to-speech output with SSML**: Speech Synthesis Markup Language (SSML) is an XML-based markup language that's used to customize text-to-speech outputs. With SSML, you can adjust pitch, add pauses, improve pronunciation, change speaking rate, adjust volume, and attribute multiple voices to a single document.
+* **Fine-tuning text-to-speech output with SSML**: Speech Synthesis Markup Language (SSML) is an XML-based markup language that's used to customize text-to-speech outputs. With SSML, you can adjust pitch, add pauses, improve pronunciation, change speaking rate, adjust volume, and attribute multiple voices to a single document.
You can use SSML to define your own lexicons or switch to different speaking styles. With the [multilingual voices](https://techcommunity.microsoft.com/t5/azure-ai/azure-text-to-speech-updates-at-build-2021/ba-p/2382981), you can also adjust the speaking languages via SSML. To fine-tune the voice output for your scenario, see [Improve synthesis with Speech Synthesis Markup Language](speech-synthesis-markup.md).
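As a concrete illustration, a minimal SSML document might select a voice, adjust rate and pitch, and insert a pause. This is a hedged sketch: the voice name (`en-US-JennyNeural`) and the prosody values here are illustrative choices, not required settings.

```xml
<!-- Minimal SSML sketch; voice name and prosody values are illustrative. -->
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
  <voice name="en-US-JennyNeural">
    <prosody rate="-10%" pitch="+5%">
      Welcome back.
    </prosody>
    <break time="500ms"/>
    This sentence is spoken at the default rate and pitch.
  </voice>
</speak>
```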
-* **Visemes**: [Visemes](how-to-speech-synthesis-viseme.md) are the key poses in observed speech, including the position of the lips, jaw, and tongue in producing a particular phoneme. Visemes have a strong correlation with voices and phonemes.
+* **Visemes**: [Visemes](how-to-speech-synthesis-viseme.md) are the key poses in observed speech, including the position of the lips, jaw, and tongue in producing a particular phoneme. Visemes have a strong correlation with voices and phonemes.
By using viseme events in Speech SDK, you can generate facial animation data. This data can be used to animate faces in lip-reading communication, education, entertainment, and customer service. Viseme is currently supported only for the `en-US` (US English) [neural voices](language-support.md#text-to-speech).

> [!NOTE]
> We plan to retire the traditional/standard voices and non-neural custom voice in 2024. After that, we'll no longer support them.
->
-> If your applications, tools, or products are using any of the standard voices and custom voices, we've created guides to help you migrate to the neural version. For more information, see [Migrate to neural voices](migration-overview-neural-voice.md).
+>
+> If your applications, tools, or products are using any of the standard voices and custom voices, you must migrate to the neural version. For more information, see [Migrate to neural voices](migration-overview-neural-voice.md).
## Get started
cognitive-services Cognitive Services Data Loss Prevention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-data-loss-prevention.md
The following services support data loss prevention configuration:
- Content Moderator - Custom Vision - Face
+- Form Recognizer
- Speech Service - QnA Maker
cognitive-services Entity Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/concepts/entity-components.md
The prebuilt component allows you to select from a library of common types such
## Overlap methods
-When multiple components are defined for an entity, their predictions may overlap. When an overlap occurs, each entity's final prediction is determined determined by one of the following options.
+When multiple components are defined for an entity, their predictions may overlap. When an overlap occurs, each entity's final prediction is determined by one of the following options.
### Longest overlap
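One way to picture the longest-overlap option: among the overlapping component predictions for an entity, the prediction covering the longest character span wins. The sketch below is hypothetical (the `Prediction` type and component names are illustrative, not part of the service's API):

```python
from typing import NamedTuple

class Prediction(NamedTuple):
    component: str
    start: int   # character offset, inclusive
    end: int     # character offset, exclusive

def resolve_longest(predictions: list[Prediction]) -> Prediction:
    """Return the overlapping prediction with the longest span."""
    return max(predictions, key=lambda p: p.end - p.start)

overlapping = [
    Prediction("learned", 10, 18),
    Prediction("prebuilt", 10, 24),  # covers the longest span
]
```

Here `resolve_longest(overlapping)` would select the prebuilt component's prediction, since its span (14 characters) is longer than the learned component's (8 characters).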
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/overview.md
Previously updated : 03/03/2022 Last updated : 03/18/2022
The API is a part of [Azure Cognitive Services](../../index.yml), a collection o
## Reference documentation and code samples
-As you use text summarization in your applications, see the following reference documentation and samples for Azure Cognitive Services for Language:
+As you use orchestration workflow in your applications, see the following reference documentation and samples for Azure Cognitive Services for Language:
|Development option / language |Reference documentation |Samples |
||||
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/overview.md
Azure Cognitive Service for Language provides the following features:
> | [Text Summarization (preview)](text-summarization/overview.md) | This pre-configured feature extracts key sentences that collectively convey the essence of a document. | * [Language Studio](language-studio.md) <br> * [REST API and client-library](text-summarization/quickstart.md) |
> | [Conversational language understanding (preview)](conversational-language-understanding/overview.md) | Build an AI model to bring the ability to understand natural language into apps, bots, and IoT devices. | * [Language Studio](conversational-language-understanding/quickstart.md)
> | [Question answering](question-answering/overview.md) | This pre-configured feature provides answers to questions extracted from text input, using semi-structured content such as: FAQs, manuals, and documents. | * [Language Studio](language-studio.md) <br> * [REST API and client-library](question-answering/quickstart/sdk.md) |
+> | [Orchestration workflow](orchestration-workflow/overview.md) | Train language models to connect your applications to question answering, conversational language understanding, and LUIS | * [Language Studio](orchestration-workflow/quickstart.md?pivots=language-studio) <br> * [REST API](orchestration-workflow/quickstart.md?pivots=rest-api) |
+ ## Tutorials
communication-services Direct Routing Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/direct-routing-infrastructure.md
The media traffic flows via components called media processors. Media processors
## Media traffic: Codecs
-### Leg between SBC and Cloud Media Processor or Microsoft Teams client.
+### Leg between SBC and Cloud Media Processor.
The Azure direct routing interface on the leg between the Session Border Controller and Cloud Media Processor can use the following codecs:
container-instances Container Instances Dedicated Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-dedicated-hosts.md
Title: Deploy on dedicated host description: Use a dedicated host to achieve true host-level isolation for your Azure Container Instances workloads- Previously updated : 01/17/2020--+++++ Last updated : 03/18/2022 # Deploy on dedicated hosts
The dedicated sku is appropriate for container workloads that require workload isolation from a physical server perspective.
+A dedicated host for Azure Container Instances provides [double encryption at rest](../virtual-machines/windows/disk-encryption.md#double-encryption-at-rest) for your container data when it is persisted by the service to the cloud. This encryption protects your data to help meet your organization's security and compliance requirements. ACI also gives you the option to [encrypt this data with your own key](container-instances-encrypt-data.md), giving you greater control over the data related to your ACI deployments.
+ ## Prerequisites

> [!NOTE]
The dedicated sku is appropriate for container workloads that require workload i
## Use the dedicated sku > [!IMPORTANT]
-> Using the dedicated sku is only available in the latest API version (2019-12-01) that is currently rolling out. Specify this API version in your deployment template.
->
+> Using the dedicated sku is only available in **API version 2019-12-01 or later**. Specify this API version or a more recent one in your deployment template.
Starting with API version 2019-12-01, there is a `sku` property under the container group properties section of a deployment template, which is required for an ACI deployment. Currently, you can use this property as part of an Azure Resource Manager deployment template for ACI. Learn more about deploying ACI resources with a template in the [Tutorial: Deploy a multi-container group using a Resource Manager template](./container-instances-multi-container-group.md).
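As a sketch (not a complete template), the `sku` property sits alongside the other container group properties in the resource definition. The resource name, image, and size values below are placeholders:

```json
{
  "type": "Microsoft.ContainerInstance/containerGroups",
  "apiVersion": "2019-12-01",
  "name": "examplecontainergroup",
  "location": "[resourceGroup().location]",
  "properties": {
    "sku": "Dedicated",
    "containers": [
      {
        "name": "examplecontainer",
        "properties": {
          "image": "mcr.microsoft.com/azuredocs/aci-helloworld",
          "resources": {
            "requests": { "cpu": 1, "memoryInGB": 1.5 }
          }
        }
      }
    ],
    "osType": "Linux"
  }
}
```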
container-instances Container Instances Quickstart Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-quickstart-bicep.md
+
+ Title: Quickstart - create a container instance - Bicep
+description: In this quickstart, you use a Bicep file to quickly deploy a containerized web app that runs in an isolated Azure container instance.
+++ Last updated : 03/10/2022+++++
+# Quickstart: Deploy a container instance in Azure using Bicep
+
+Use Azure Container Instances to run serverless Docker containers in Azure with simplicity and speed. Deploy an application to a container instance on-demand when you don't need a full container orchestration platform like Azure Kubernetes Service. In this quickstart, you use a Bicep file to deploy an isolated Docker container and make its web application available with a public IP address.
++
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/aci-linuxcontainer-public-ip/).
++
+The following resource is defined in the Bicep file:
+
+* **[Microsoft.ContainerInstance/containerGroups](/azure/templates/microsoft.containerinstance/containergroups)**: create an Azure container group. This Bicep file defines a group consisting of a single container instance.
+
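For orientation, a minimal container group definition in Bicep might look like the following sketch. This is illustrative only (parameter names and the sample image are assumptions); the quickstart deploys the actual file from the template gallery.

```bicep
param name string = 'acilinuxpublicipcontainergroup'
param location string = resourceGroup().location

// Single-container group with a public IP, listening on port 80.
resource containerGroup 'Microsoft.ContainerInstance/containerGroups@2021-09-01' = {
  name: name
  location: location
  properties: {
    containers: [
      {
        name: name
        properties: {
          image: 'mcr.microsoft.com/azuredocs/aci-helloworld'
          ports: [
            {
              port: 80
              protocol: 'TCP'
            }
          ]
          resources: {
            requests: {
              cpu: 1
              memoryInGB: 2
            }
          }
        }
      }
    ]
    osType: 'Linux'
    ipAddress: {
      type: 'Public'
      ports: [
        {
          port: 80
          protocol: 'TCP'
        }
      ]
    }
  }
}
```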
+More Azure Container Instances template samples can be found in the [quickstart template gallery](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Containerinstance&pageNumber=1&sort=Popular).
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep
+ ```
+
+
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Review deployed resources
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+### View container logs
+
+Viewing the logs for a container instance is helpful when troubleshooting issues with your container or the application it runs. Use the Azure portal, Azure CLI, or Azure PowerShell to view the container's logs.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az container logs --resource-group exampleRG --name acilinuxpublicipcontainergroup
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzContainerInstanceLog -ResourceGroupName exampleRG -ContainerGroupName acilinuxpublicipcontainergroup -ContainerName acilinuxpublicipcontainergroup
+```
+++
+> [!NOTE]
+> It may take a few minutes for the container to start and for the application to respond to HTTP GET requests.
+
+## Clean up resources
+
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the container and all of the resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+In this quickstart, you created an Azure container instance using Bicep. If you'd like to build a container image and deploy it from a private Azure container registry, continue to the Azure Container Instances tutorial.
+
+> [!div class="nextstepaction"]
+> [Tutorial: Create a container image for deployment to Azure Container Instances](./container-instances-tutorial-prepare-app.md)
cosmos-db Choose Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/choose-api.md
If you are migrating from other databases such as Oracle, DynamoDB, HBase etc. a
This API stores data in a document structure, via BSON format. It is compatible with MongoDB wire protocol; however, it does not use any native MongoDB related code. This API is a great choice if you want to use the broader MongoDB ecosystem and skills, without compromising on using Azure Cosmos DB features such as scaling, high availability, geo-replication, multiple write locations, automatic and transparent shard management, transparent replication between operational and analytical stores, and more.
-You can use your existing MongoDB apps with API for MongoDB by just changing the connection string. You can move any existing data using native MongoDB tools such as mongodump & mongorestore or using our Azure Database Migration tool. Tools, such as the MongoDB shell, [MongoDB Compass](mongodb/connect-using-compass.md), and [Robo3T](mongodb/connect-using-robomongo.md), can run queries and work with data as they do with native MongoDB.
-
-API for MongoDB is compatible with the 4.0, 3.6, and 3.2 MongoDB server versions. Server version 4.0 is recommended as it offers the best performance and full feature support. To learn more, see [API for MongoDB](mongodb/mongodb-introduction.md) article.
+You can use your existing MongoDB apps with API for MongoDB by just changing the connection string. You can move any existing data using native MongoDB tools such as mongodump & mongorestore or using our Azure Database Migration tool. Tools, such as the MongoDB shell, [MongoDB Compass](mongodb/connect-using-compass.md), and [Robo3T](mongodb/connect-using-robomongo.md), can run queries and work with data as they do with native MongoDB. To learn more, see [API for MongoDB](mongodb/mongodb-introduction.md) article.
## Cassandra API
data-factory How To Send Notifications To Teams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-send-notifications-to-teams.md
a data factory or Synapse pipeline can invoke. Many enterprises are also increa
## Prerequisites
-Before you can send notifications to Teams from your pipelines you must create an [Incoming Webhook](/microsoftteams/platform/webhooks-and-connectors/how-to/connectors-using) for your Teams channel. If you need to create a new Teams channel for this purpose, refer to the [Teams documentation](https://support.microsoft.com/office/create-a-channel-in-teams-fda0b75e-5b90-4fb8-8857-7e102b014525).  
+Before you can send notifications to Teams from your pipelines, you must create an [Incoming Webhook](/microsoftteams/platform/webhooks-and-connectors/how-to/connectors-using) for your Teams channel. If you need to create a new Teams channel for this purpose, refer to the [Teams documentation](https://support.microsoft.com/office/create-a-channel-in-teams-fda0b75e-5b90-4fb8-8857-7e102b014525).  
1. Open Microsoft Teams and go to the Apps tab. Search for "Incoming Webhook" and select the Incoming Webhook connector.

   :::image type="content" source="media/how-to-send-notifications-to-teams/teams-incoming-webhook-connector.png" alt-text="Shows the Incoming Webhook app under the Apps tab in Teams.":::
-1. Add the connector to the Teams site where you want to send the notifications.
+1. Select the "Add to a team" button to add the connector to the team or channel where you want to send notifications.
:::image type="content" source="media/how-to-send-notifications-to-teams/teams-add-connector-to-site.png" alt-text="Highlights the &quot;Add to a team&quot; button for the Incoming Webhook app.":::
-
- :::image type="content" source="media/how-to-send-notifications-to-teams/teams-prod-notifications.png" alt-text="Shows the team selection prompt on the Incoming Webhook app configuration dialog in Teams.":::
+
+1. Type or select the team or channel name where you want to send the notifications.
+
+ :::image type="content" source="media/how-to-send-notifications-to-teams/type-a-team-or-team-channel-name.png" alt-text="Shows the team selection prompt on the Incoming Webhook app configuration dialog in Teams. Type the &quot;Team or Team channel name&quot;":::
+
+1. Select the "Set up a connector" button to set up the Incoming Webhook for the team or channel you selected in the previous step.
+
+ :::image type="content" source="media/how-to-send-notifications-to-teams/teams-prod-notifications.png" alt-text="Shows the team selection prompt on the Incoming Webhook app configuration dialog in Teams. Highlights the Team and the &quot;Set up a connector&quot; button":::
1. Name the Webhook as appropriate and optionally upload an icon to identify your messages.

   :::image type="content" source="media/how-to-send-notifications-to-teams/teams-add-icon.png" alt-text="Highlights the name property, optional image upload, and &quot;Create&quot; button in the Incoming Webhook options page.":::
-1. Copy the store of the webhook URL that is generated on creation for later use in ADF.
+1. Copy the Webhook URL that is generated on creation and save it for later use in your pipeline. Then select the "Done" button to complete the setup.
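Before wiring the URL into a pipeline, you can verify it accepts a minimal MessageCard payload. The sketch below is hypothetical: the webhook URL is a placeholder, and `build_test_card` is an illustrative helper, not part of Teams or Data Factory. The actual HTTP POST is shown commented out so the payload can be inspected first.

```python
import json

# Placeholder -- substitute the webhook URL you copied from Teams.
WEBHOOK_URL = "https://example.webhook.office.com/webhookb2/<your-webhook-id>"

def build_test_card(text: str) -> dict:
    """Build a minimal MessageCard payload accepted by an Incoming Webhook."""
    return {
        "@type": "MessageCard",
        "@context": "http://schema.org/extensions",
        "summary": "Webhook connectivity test",
        "text": text,
    }

payload = json.dumps(build_test_card("Hello from the pipeline setup test"))
print(payload)

# To actually send it (requires the real webhook URL):
# import urllib.request
# req = urllib.request.Request(WEBHOOK_URL, data=payload.encode("utf-8"),
#                              headers={"Content-Type": "application/json"})
# urllib.request.urlopen(req)
```

A successful post to a valid webhook URL makes the card appear in the selected channel.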
:::image type="content" source="media/how-to-send-notifications-to-teams/teams-copy-webhook-url.png" alt-text="Shows the new webhook URL on the Incoming Webhook options page after creation.":::
Before you can send notifications to Teams from your pipelines you must create a
# [Azure Data Factory](#tab/data-factory)
-1. Create a new **Pipeline from template**. The template gallery provides a pipeline template that makes it easy to get started with teams notifications.
-
- :::image type="content" source="media/how-to-send-notifications-to-teams/pipeline-from-template.png" alt-text="Shows the &quot;Pipeline from template&quot; menu in the Azure Data Factory Studio.":::
-1. Search for "teams", then select and use the **Send notification to a channel in Microsoft Teams** template.
-
- :::image type="content" source="media/how-to-send-notifications-to-teams/send-notification-dialog.png" alt-text="Shows the &quot;Send notification to a channel in Microsoft Teams&quot; template in the template gallery.":::
-
- :::image type="content" source="media/how-to-send-notifications-to-teams/send-notification-template.png" alt-text="Shows the &quot;Send notification to a channel in Microsoft Teams&quot; template details after it is selected in the template gallery.":::
-
- :::image type="content" source="media/how-to-send-notifications-to-teams/teams-webhook-properties.png" alt-text="Shows the properties of the pipeline created by the &quot;Send notification to a channel in Microsoft Teams&quot; template.":::
+1. Select **Author** tab from the left pane.
+
+1. Select the + (plus) button, and then select **New pipeline**.
+
+ :::image type="content" source="media/how-to-send-notifications-to-teams/new-pipeline.png" alt-text="Shows the &quot;New pipeline&quot; menu in the Azure Data Factory Studio.":::
+
+1. In the "Properties" pane under "General", specify **NotifiyTeamsChannelPipeline** for **Name**. Then collapse the panel by clicking the **Properties** icon in the top-right corner.
+
+ :::image type="content" source="media/how-to-send-notifications-to-teams/name-pipeline.png" alt-text="Shows the &quot;Properties&quot; panel.":::
+
+ :::image type="content" source="media/how-to-send-notifications-to-teams/hide-properties-panel.png" alt-text="Shows the &quot;Properties&quot; panel hidden.":::
+
+1. In the "Configurations" pane, select **Parameters**, and then select the **+ New** button to define the following parameters for your pipeline.
+
+ | Name | Type | Default Value |
+ | :-- | : |: |
+ | subscription | ``String`` | ``Specify subscription id for the pipeline`` |
+ | resourceGroup | ``String`` | ``Specify resource group name for the pipeline`` |
+ | runId | ``String`` | ``@activity('Specify name of the calling pipeline').output['pipelineRunId']`` |
+ | name | ``String`` | ``@activity('Specify name of the calling pipeline').output['pipelineName']`` |
+ | triggerTime | ``String`` | ``@activity('Specify name of the calling pipeline').ExecutionStartTime`` |
+ | status | ``String`` | ``@activity('Specify name of the calling pipeline').Status`` |
+ | message | ``String`` | ``@activity('Specify name of the calling pipeline').Error['message']`` |
+ | executionEndTime | ``String`` | ``@activity('Specify name of the calling pipeline').ExecutionEndTime`` |
+ | runDuration | ``String`` | ``@activity('Specify name of the calling pipeline').Duration`` |
+ | teamWebhookUrl | ``String`` | ``Specify Team Webhook URL`` |
+
+ :::image type="content" source="media/how-to-send-notifications-to-teams/pipeline-parameters.png" alt-text="Shows the &quot;Pipeline parameters&quot;.":::
+
+ > [!NOTE]
   > These parameters are used to construct the monitoring URL. If you do not provide a valid subscription and resource group (of the same data factory where the pipelines belong), the notification will not contain a valid pipeline monitoring URL, but the messages will still work. Additionally, adding these parameters helps prevent the need to always pass those values from another pipeline. If you intend to control those values through a metadata-driven approach, modify them accordingly.
+
+ > [!TIP]
+ > We recommend adding the current Data Factory **Subscription ID**, **Resource Group**, and the **Teams webhook URL** (refer to [prerequisites](#prerequisites)) for the default value of the relevant parameters.
+
+1. In the "Configurations" pane, select **Variables**, and then select the **+ New** button to define the following variables for your pipeline.
+
+ | Name | Type | Default Value |
+ | :-- | : |:- |
+ | messageCard | ``String`` | |
+
+ :::image type="content" source="media/how-to-send-notifications-to-teams/pipeline-variables.png" alt-text="Shows the &quot;Pipeline variables&quot;.":::
+
+1. Search for "Set variable" in the pipeline "Activities" pane, and drag a **Set Variable** activity to the pipeline canvas.
+
+1. Select the **Set Variable** activity on the canvas if it isn't already selected, and then select its "General" tab to edit its details.
+
+1. In the "General" tab, specify **Set JSON schema** for **Name** of the **Set Variable** activity.
+
+ :::image type="content" source="media/how-to-send-notifications-to-teams/set-variable-activity-name.png" alt-text="Shows the &quot;Set variable&quot; activity general tab.":::
+
+1. In the "Variables" tab, select **messageCard** variable for the **Name** property and enter the following **JSON** for its **Value** property:
+
+ ```json
+ {
+ "@type": "MessageCard",
+ "@context": "http://schema.org/extensions",
+ "themeColor": "0076D7",
+ "summary": "Pipeline status alert message",
+ "sections": [
+ {
+ "activityTitle": "Pipeline execution alert",
+ "facts": [
+ {
+ "name": "Subscription Id:",
+ "value": "@{pipeline().parameters.subscription}"
+ },
+ {
+ "name": "Resource Group:",
+ "value": "@{pipeline().parameters.resourceGroup}"
+ },
+ {
+ "name": "Data Factory Name:",
+ "value": "@{pipeline().DataFactory}"
+ },
+ {
+ "name": "Pipeline RunId:",
+ "value": "@{pipeline().parameters.runId}"
+ },
+ {
+ "name": "Pipeline Name:",
+ "value": "@{pipeline().parameters.name}"
+ },
+ {
+ "name": "Pipeline Status:",
+ "value": "@{pipeline().parameters.status}"
+ },
+ {
+ "name": "Execution Start Time (UTC):",
+ "value": "@{pipeline().parameters.triggerTime}"
+ },
+ {
+ "name": "Execution Finish Time (UTC):",
+ "value": "@{pipeline().parameters.executionEndTime}"
+ },
+ {
+ "name": "Execution Duration (s):",
+ "value": "@{pipeline().parameters.runDuration}"
+ },
+ {
+ "name": "Message:",
+ "value": "@{pipeline().parameters.message}"
+ },
+ {
+ "name": "Notification Time (UTC):",
+ "value": "@{utcnow()}"
+ }
+ ],
+ "markdown": true
+ }
+ ],
+ "potentialAction": [
+ {
+ "@type": "OpenUri",
+ "name": "View pipeline run",
+ "targets": [
+ {
+ "os": "default",
+ "uri": "@{concat('https://adf.azure.com/monitoring/pipelineruns/',pipeline().parameters.runId,'?factory=/subscriptions/',pipeline().parameters.subscription,'/resourceGroups/',pipeline().parameters.resourceGroup,'/providers/Microsoft.DataFactory/factories/',pipeline().DataFactory)}"
+ }
+ ]
+ }
+ ]
+ }
+ ```
+
+ :::image type="content" source="media/how-to-send-notifications-to-teams/set-variable-activity-variables-tab.png" alt-text="Shows the &quot;Set variable&quot; activity variables tab.":::
+
+1. Search for "Web" in the pipeline "Activities" pane, and drag a **Web** activity to the pipeline canvas.
+
+1. Create a dependency condition for the **Web** activity so that it only runs if the **Set Variable** activity succeeds. To create this dependency, select the green handle on the right side of the **Set Variable** activity, drag it, and connect it to the **Web** activity.
+
+1. Select the new **Web** activity on the canvas if it isn't already selected, and then select its "General" tab to edit its details.
+
+1. In the "General" pane, specify **Invoke Teams Webhook Url** for **Name** of the **Web** activity.
+
+ :::image type="content" source="media/how-to-send-notifications-to-teams/web-activity-name.png" alt-text="Shows the &quot;Web&quot; activity general pane.":::
+
+1. In the "Settings" pane, set the following properties:
+
+ | Property | value |
+ | :-- | : |
+ | URL | ``@pipeline().parameters.teamWebhookUrl`` |
+ | Method | ``POST`` |
+ | Body | ``@json(variables('messageCard'))`` |
+
+ :::image type="content" source="media/how-to-send-notifications-to-teams/web-activity-settings-pane.png" alt-text="Shows the &quot;Web&quot; activity settings pane.":::
+
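The **Web** activity is effectively issuing an HTTP `POST` of the message card JSON to the incoming webhook. As a minimal stand-alone sketch of that call, useful for testing a card outside the pipeline (the webhook URL here is a placeholder, and the card is trimmed to a couple of facts):

```python
import json
import urllib.request

# Placeholder webhook URL -- substitute your own incoming webhook (see prerequisites).
WEBHOOK_URL = "https://example.webhook.office.com/webhookb2/placeholder"

# Trimmed-down message card matching the schema used in the Set Variable step.
card = {
    "@type": "MessageCard",
    "@context": "http://schema.org/extensions",
    "themeColor": "0076D7",
    "summary": "Pipeline status alert message",
    "sections": [
        {
            "activityTitle": "Pipeline execution alert",
            "facts": [
                {"name": "Pipeline Status:", "value": "Succeeded"},
                {"name": "Pipeline Name:", "value": "LoadDataPipeline"},
            ],
            "markdown": True,
        }
    ],
}

def post_card(url: str, payload: dict) -> int:
    """POST the card as JSON, as the Web activity does; returns the HTTP status."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example call, commented out so the sketch has no network side effects:
# post_card(WEBHOOK_URL, card)
```

This mirrors the three **Web** activity settings above: the URL, the `POST` method, and the JSON body built from the `messageCard` variable.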
+1. You're now ready to validate, debug, and then publish your **NotifiyTeamsChannelPipeline** pipeline.
+ - To validate the pipeline, select **Validate** from the tool bar.
+ - To debug the pipeline, select **Debug** on the toolbar. You can see the status of the pipeline run in the "Output" tab at the bottom of the window.
+ - Once the pipeline runs successfully, in the top toolbar, select **Publish all**. This action publishes entities you created to Data Factory. Wait until you see the **Successfully published** message.
+
+ :::image type="content" source="media/how-to-send-notifications-to-teams/validate-debug-publish.png" alt-text="Shows the &quot;Validate, Debug, Publish&quot; buttons to validate, debug, and then publish your pipeline.":::
# [Synapse Analytics](#tab/synapse-analytics)
-1. Create a new **Pipeline from template**. The template gallery provides a pipeline template that makes it easy to get started with teams notifications.
-
- :::image type="content" source="media/how-to-send-notifications-to-teams/pipeline-from-template-synapse.png" alt-text="Shows the &quot;Pipeline from template&quot; menu in the Azure Data Factory Studio.":::
-1. Search for "teams", then select and use the **Send notification to a channel in Microsoft Teams** template.
-
- :::image type="content" source="media/how-to-send-notifications-to-teams/send-notification-dialog-synapse.png" alt-text="Shows the &quot;Send notification to a channel in Microsoft Teams&quot; template in the template gallery.":::
-
- :::image type="content" source="media/how-to-send-notifications-to-teams/send-notification-template-synapse.png" alt-text="Shows the &quot;Send notification to a channel in Microsoft Teams&quot; template details after it is selected in the template gallery.":::
-
- :::image type="content" source="media/how-to-send-notifications-to-teams/teams-webhook-properties.png" alt-text="Shows the properties of the pipeline created by the &quot;Send notification to a channel in Microsoft Teams&quot; template.":::
+1. Select the **Integrate** tab from the left pane.
+
+1. Select the + (plus) button, and then select **Pipeline**.
+
+ :::image type="content" source="media/how-to-send-notifications-to-teams/new-pipeline-synapse.png" alt-text="Shows the &quot;New pipeline&quot; menu in the Synapse Studio.":::
+
+1. In the "Properties" panel under "General", specify **NotifiyTeamsChannelPipeline** for **Name**. Then collapse the panel by clicking the **Properties** icon in the top-right corner.
+
+ :::image type="content" source="media/how-to-send-notifications-to-teams/name-pipeline-synapse.png" alt-text="Shows the &quot;Properties&quot; panel.":::
+
+ :::image type="content" source="media/how-to-send-notifications-to-teams/hide-properties-panel-synapse.png" alt-text="Shows the &quot;Properties&quot; panel hidden.":::
+
+1. In the "Configurations" pane, select **Parameters**, and then select the **+ New** button to define the following parameters for your pipeline.
+
+ | Name | Type | Default Value |
+ | :-- | : |:- |
+ | subscription | ``String`` | ``Specify subscription id for the pipeline`` |
+ | resourceGroup | ``String`` | ``Specify resource group name for the pipeline`` |
+ | runId | ``String`` | ``@activity('Specify name of the calling pipeline').output['pipelineRunId']`` |
+ | name | ``String`` | ``@activity('Specify name of the calling pipeline').output['pipelineName']`` |
+ | triggerTime | ``String`` | ``@activity('Specify name of the calling pipeline').ExecutionStartTime`` |
+ | status | ``String`` | ``@activity('Specify name of the calling pipeline').Status`` |
+ | message | ``String`` | ``@activity('Specify name of the calling pipeline').Error['message']`` |
+ | executionEndTime | ``String`` | ``@activity('Specify name of the calling pipeline').ExecutionEndTime`` |
+ | runDuration | ``String`` | ``@activity('Specify name of the calling pipeline').Duration`` |
+ | teamWebhookUrl | ``String`` | ``Specify Team Webhook URL`` |
+
+ :::image type="content" source="media/how-to-send-notifications-to-teams/pipeline-parameters-synapse.png" alt-text="Shows the &quot;Pipeline parameters&quot;.":::
+
+ > [!NOTE]
+ > These parameters are used to construct the monitoring URL. If you don't provide a valid subscription and resource group (of the same data factory where the pipelines belong), the notification won't contain a valid pipeline monitoring URL, but the messages will still work. Adding these parameters also avoids having to always pass those values from another pipeline. If you intend to control those values through a metadata-driven approach, modify them accordingly.
+
+ > [!TIP]
+ > We recommend adding the current Data Factory **Subscription ID**, **Resource Group**, and the **Teams webhook URL** (refer to [prerequisites](#prerequisites)) for the default value of the relevant parameters.
+
+1. In the "Configurations" pane, select **Variables**, and then select the **+ New** button to define the following variables for your pipeline.
+
+ | Name | Type | Default Value |
+ | :-- | : |:- |
+ | messageCard | ``String`` | |
+
+ :::image type="content" source="media/how-to-send-notifications-to-teams/pipeline-variables-synapse.png" alt-text="Shows the &quot;Pipeline variables&quot;.":::
+
+1. Search for "Set variable" in the pipeline "Activities" pane, and drag a **Set Variable** activity to the pipeline canvas.
+
+1. Select the "Set variable" activity on the canvas if it isn't already selected, and then select its **General** tab to edit its details.
+
+1. In the "General" tab, specify **Set JSON schema** for **Name** of the "Set variable" activity.
+
+ :::image type="content" source="media/how-to-send-notifications-to-teams/set-variable-activity-name-synapse.png" alt-text="Shows the &quot;Set variable&quot; activity general tab.":::
+
+1. In the "Variables" tab, select **messageCard** variable for the **Name** property and enter the following **JSON** for its **Value** property:
+
+ ```json
+ {
+ "@type": "MessageCard",
+ "@context": "http://schema.org/extensions",
+ "themeColor": "0076D7",
+ "summary": "Pipeline status alert message",
+ "sections": [
+ {
+ "activityTitle": "Pipeline execution alert",
+ "facts": [
+ {
+ "name": "Subscription Id:",
+ "value": "@{pipeline().parameters.subscription}"
+ },
+ {
+ "name": "Resource Group:",
+ "value": "@{pipeline().parameters.resourceGroup}"
+ },
+ {
+ "name": "Data Factory Name:",
+ "value": "@{pipeline().DataFactory}"
+ },
+ {
+ "name": "Pipeline RunId:",
+ "value": "@{pipeline().parameters.runId}"
+ },
+ {
+ "name": "Pipeline Name:",
+ "value": "@{pipeline().parameters.name}"
+ },
+ {
+ "name": "Pipeline Status:",
+ "value": "@{pipeline().parameters.status}"
+ },
+ {
+ "name": "Execution Start Time (UTC):",
+ "value": "@{pipeline().parameters.triggerTime}"
+ },
+ {
+ "name": "Execution Finish Time (UTC):",
+ "value": "@{pipeline().parameters.executionEndTime}"
+ },
+ {
+ "name": "Execution Duration (s):",
+ "value": "@{pipeline().parameters.runDuration}"
+ },
+ {
+ "name": "Message:",
+ "value": "@{pipeline().parameters.message}"
+ },
+ {
+ "name": "Notification Time (UTC):",
+ "value": "@{utcnow()}"
+ }
+ ],
+ "markdown": true
+ }
+ ],
+ "potentialAction": [
+ {
+ "@type": "OpenUri",
+ "name": "View pipeline run",
+ "targets": [
+ {
+ "os": "default",
+ "uri": "@{concat('https://adf.azure.com/monitoring/pipelineruns/',pipeline().parameters.runId,'?factory=/subscriptions/',pipeline().parameters.subscription,'/resourceGroups/',pipeline().parameters.resourceGroup,'/providers/Microsoft.DataFactory/factories/',pipeline().DataFactory)}"
+ }
+ ]
+ }
+ ]
+ }
+ ```
+
+ :::image type="content" source="media/how-to-send-notifications-to-teams/set-variable-activity-variables-tab-synapse.png" alt-text="Shows the &quot;Set variable&quot; activity variables tab.":::
+
+1. Search for "Web" in the pipeline "Activities" pane, and drag a **Web** activity to the pipeline canvas.
+
+1. Create a dependency condition for the **Web** activity so that it only runs if the **Set Variable** activity succeeds. To create this dependency, select the green handle on the right side of the **Set Variable** activity, drag it, and connect it to the **Web** activity.
+
+1. Select the new **Web** activity on the canvas if it isn't already selected, and then select its "General" tab to edit its details.
+
+1. In the "General" pane, specify **Invoke Teams Webhook Url** for **Name** of the **Web** activity.
+
+ :::image type="content" source="media/how-to-send-notifications-to-teams/web-activity-name-synapse.png" alt-text="Shows the &quot;Web&quot; activity general pane.":::
+
+1. In the "Settings" pane, set the following properties:
+
+ | Property | value |
+ | :-- | : |
+ | URL | ``@pipeline().parameters.teamWebhookUrl`` |
+ | Method | ``POST`` |
+ | Body | ``@json(variables('messageCard'))`` |
+
+ :::image type="content" source="media/how-to-send-notifications-to-teams/web-activity-settings-pane-synapse.png" alt-text="Shows the &quot;Web&quot; activity settings pane.":::
+
+1. You're now ready to validate, debug, and then publish your **NotifiyTeamsChannelPipeline** pipeline.
+ - To validate the pipeline, select **Validate** from the tool bar.
+ - To debug the pipeline, select **Debug** on the toolbar. You can see the status of the pipeline run in the "Output" tab at the bottom of the window.
+ - Once the pipeline runs successfully, in the top toolbar, select **Publish all**. This action publishes entities you created to the Synapse workspace. Wait until you see the **Successfully published** message.
+
+ :::image type="content" source="media/how-to-send-notifications-to-teams/validate-debug-publish-synapse.png" alt-text="Shows the &quot;Validate, Debug, Publish&quot; buttons to validate, debug, and then publish your pipeline.":::
+## Sample usage
+In this sample usage scenario, we'll create a master pipeline with three **Execute Pipeline** activities. The first **Execute Pipeline** activity will invoke our ETL pipeline and the remaining two **Execute Pipeline** activities will invoke the "NotifiyTeamsChannelPipeline" pipeline to send relevant success or failure notifications to the Teams channel depending on the execution state of our ETL pipeline.
-3. We recommend adding the current Data Factory **Subscription ID**, **Resource Group**, and the **Teams webhook URL** (refer to
- [prerequisites](#prerequisites)) for the default value of the relevant parameters.
-
- :::image type="content" source="media/how-to-send-notifications-to-teams/webhook-recommended-properties.png" alt-text="Shows the recommended properties of the pipeline created by the &quot;Send notification to a channel in Microsoft Teams&quot; template.":::
+1. Select the **Author** tab from the left pane in **Data Factory**, or the **Integrate** tab from the left pane in **Synapse Studio**. Next, select the + (plus) button, and then select **Pipeline** to create a new pipeline.
- These parameters are used to construct the monitoring URL. Suppose you do not provide a valid subscription and resource group (of the same data factory where the pipelines belong). In that case, the notification will not contain a valid pipeline monitoring URL, but the messages will still work. Additionally, adding these parameters helps prevent the need to always pass those values from another pipeline. If you intend to control those values through a metadata-driven approach, then you should modify them accordingly.
-
-1. Add an **Execute Pipeline** activity into the pipeline from which you would like to send notifications on the Teams channel. Select the pipeline generated from the **Send notification to a channel in Microsoft Teams** template as the **Invoked pipeline** in the **Execute Pipeline** activity.
+1. In the "Properties" panel under "General", specify **MasterPipeline** for **Name**. Then collapse the panel by clicking the **Properties** icon in the top-right corner.
+
+1. Search for "Execute Pipeline" in the pipeline "Activities" pane, and drag three **Execute Pipeline** activities to the pipeline canvas.
+
+1. Select the first **Execute Pipeline** activity on the canvas if it isn't already selected, and then select its "General" pane to edit its details.
+ - For **Name** property of the **Execute Pipeline** activity, we recommend using the name of your invoked ETL pipeline for which you want to send notifications. For example, we used **LoadDataPipeline** for the **Name** of our **Execute Pipeline** activity because it's the name of our invoked pipeline.
+ - In the "Settings" pane, select an existing pipeline or create a new one using the **+ New** button for the **Invoked pipeline** property. For example, in our case, we selected **LoadDataPipeline** pipeline for the "Invoked pipeline" property. Select other options and configure any parameters for the pipeline as required to complete your configuration.
+
+ :::image type="content" source="media/how-to-send-notifications-to-teams/execute-pipeline-activity-1-general.png" alt-text="Shows the &quot;Execute pipeline&quot; activity general pane for &quot;LoadDataPipeline&quot; pipeline.":::
+
+ :::image type="content" source="media/how-to-send-notifications-to-teams/execute-pipeline-activity-1-settings.png" alt-text="Shows the &quot;Execute pipeline&quot; activity setting pane for &quot;LoadDataPipeline&quot; pipeline.":::
+
+1. Select the second **Execute Pipeline** activity on the canvas, and then select its "General" pane to edit its details.
+ - Specify **OnSuccess Notification** for **Name** of the **Execute Pipeline** activity.
+ - In the "Settings" pane, select **NotifiyTeamsChannelPipeline** pipeline, which we created earlier, for the **Invoked pipeline** property. Customize the parameters as required based on activity type. For example, we've customized the parameters as follows:
+
+ | Name | Value |
+ | :- | :- |
+ | subscription | ``11111111-0000-aaaa-bbbb-0000000000`` |
+ | resourceGroup | ``contosorg`` |
+ | runId | ``@activity('LoadDataPipeline').output['pipelineRunId']`` |
+ | name | ``@activity('LoadDataPipeline').output['pipelineName']`` |
+ | triggerTime | ``@activity('LoadDataPipeline').ExecutionStartTime`` |
+ | status | ``@activity('LoadDataPipeline').Status`` |
+ | message | ``Pipeline - LoadDataPipeline ran with success.`` |
+ | executionEndTime | ``@activity('LoadDataPipeline').ExecutionEndTime`` |
+ | runDuration | ``@activity('LoadDataPipeline').Duration`` |
+ | teamWebhookUrl | ``https://microsoft.webhook.office.com/webhookb2/1234abcd-1x11-2ff1-ab2c-1234d0699a9e@72f988bf-32b1-41af-91ab-2d7cd011db47/IncomingWebhook/8212f66ad80040ab83cf68b554d9232a/17d524d0-ed5c-44ed-98a0-35c12dd89a6d`` |
+
+ - Create a dependency condition for the second **Execute Pipeline** activity so that it only runs if the first **Execute Pipeline** activity succeeds. To create this dependency, select the green handle on the right side of the first **Execute Pipeline** activity, drag it, and connect it to the second **Execute Pipeline** activity.
+
+ :::image type="content" source="media/how-to-send-notifications-to-teams/execute-pipeline-activity-2-general.png" alt-text="Shows the second &quot;Execute pipeline&quot; activity &quot;OnSuccess Notification&quot; general pane for &quot;NotifiyTeamsChannelPipeline&quot; pipeline.":::
+
+ :::image type="content" source="media/how-to-send-notifications-to-teams/execute-pipeline-activity-2-settings.png" alt-text="Shows the second &quot;Execute pipeline&quot; activity &quot;OnSuccess Notification&quot; setting pane for &quot;NotifiyTeamsChannelPipeline&quot; pipeline.":::
+
+1. Select the third **Execute Pipeline** activity on the canvas, and then select its "General" pane to edit its details.
+ - Specify **OnFailure Notification** for **Name** of the **Execute Pipeline** activity.
+ - In the "Settings" pane, select **NotifiyTeamsChannelPipeline** pipeline for the **Invoked pipeline** property. Customize the parameters as required based on activity type. For example, we've customized the parameters this time as follows:
+
+ | Name | Value |
+ | :- | :- |
+ | subscription | ``11111111-0000-aaaa-bbbb-0000000000`` |
+ | resourceGroup | ``contosorg`` |
+ | runId | ``@activity('LoadDataPipeline').output['pipelineRunId']`` |
+ | name | ``@activity('LoadDataPipeline').output['pipelineName']`` |
+ | triggerTime | ``@activity('LoadDataPipeline').ExecutionStartTime`` |
+ | status | ``@activity('LoadDataPipeline').Status`` |
+ | message | ``@activity('LoadDataPipeline').Error['message']`` |
+ | executionEndTime | ``@activity('LoadDataPipeline').ExecutionEndTime`` |
+ | runDuration | ``@activity('LoadDataPipeline').Duration`` |
+ | teamWebhookUrl | ``https://microsoft.webhook.office.com/webhookb2/1234abcd-1x11-2ff1-ab2c-1234d0699a9e@72f988bf-32b1-41af-91ab-2d7cd011db47/IncomingWebhook/8212f66ad80040ab83cf68b554d9232a/17d524d0-ed5c-44ed-98a0-35c12dd89a6d`` |
+
+ - Create a dependency condition for the third **Execute Pipeline** activity so that it only runs if the first **Execute Pipeline** activity fails. To create this dependency, select the red handle on the right side of the first **Execute Pipeline** activity, drag it, and connect it to the third **Execute Pipeline** activity.
+
+ - Validate, debug, and then publish your **MasterPipeline** pipeline.
+
+ :::image type="content" source="media/how-to-send-notifications-to-teams/execute-pipeline-activity-3-general.png" alt-text="Shows the third &quot;Execute pipeline&quot; activity &quot;OnFailure Notification&quot; general pane for &quot;NotifiyTeamsChannelPipeline&quot; pipeline.":::
+
+ :::image type="content" source="media/how-to-send-notifications-to-teams/execute-pipeline-activity-3-settings.png" alt-text="Shows the third &quot;Execute pipeline&quot; activity &quot;OnFailure Notification&quot; settings pane for &quot;NotifiyTeamsChannelPipeline&quot; pipeline.":::
+
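The green and red dependency conditions give **MasterPipeline** behavior analogous to a try/except block: invoke the notification pipeline with a success message when the ETL activity succeeds, and with the error message when it fails. A rough sketch of that control flow (the functions here are illustrative stand-ins, not Data Factory APIs):

```python
# Illustrative stand-ins only; these are not Data Factory APIs.
def notify_teams(status: str, message: str) -> dict:
    """Stand-in for invoking NotifiyTeamsChannelPipeline with parameters."""
    return {"status": status, "message": message}

def run_master_pipeline(run_etl) -> dict:
    try:
        run_etl()  # first Execute Pipeline activity (the ETL pipeline)
    except Exception as err:
        # red (failure) dependency path -> OnFailure Notification
        return notify_teams("Failed", str(err))
    # green (success) dependency path -> OnSuccess Notification
    return notify_teams("Succeeded", "Pipeline - LoadDataPipeline ran with success.")

def failing_etl():
    raise RuntimeError("load failed")

ok = run_master_pipeline(lambda: None)   # success path
bad = run_master_pipeline(failing_etl)   # failure path
```

Only one of the two notification branches runs per execution, which is exactly what the success and failure dependency conditions enforce on the canvas.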
+1. Run the pipeline to receive notifications in Teams. For example, the following are sample notifications for when the pipeline ran successfully and when it failed.
- :::image type="content" source="media/how-to-send-notifications-to-teams/execute-pipeline-activity.png" alt-text="Shows the &quot;Execute pipeline&quot; activity in the pipeline created by the &quot;Send notification to a channel in Microsoft Teams&quot; template.":::
+ :::image type="content" source="media/how-to-send-notifications-to-teams/teams-notifications-view-pipeline-run-onsuccess.png" alt-text="Shows on success pipeline notifications in a Teams channel.":::
-1. Customize the parameters as required based on activity type.
+ :::image type="content" source="media/how-to-send-notifications-to-teams/teams-notifications-view-pipeline-run-onfailure.png" alt-text="Shows on failure pipeline notifications in a Teams channel.":::
- :::image type="content" source="media/how-to-send-notifications-to-teams/customize-parameters-by-activity-type.png" alt-text="Shows customization of parameters in the pipeline created by the &quot;Send notification to a channel in Microsoft Teams&quot; template.":::
-  
-1. Receive notifications in Teams.
+1. Select the "View pipeline run" button to view the pipeline run.
- :::image type="content" source="media/how-to-send-notifications-to-teams/teams-notifications-view-pipeline-run.png" alt-text="Shows pipeline notifications in a Teams channel.":::
## Add dynamic messages with system variables and expressions

You can use [system variables](control-flow-system-variables.md) and [expressions](control-flow-expression-language-functions.md) to make your messages dynamic. For example:
- ``@activity('DataFlow').error.Message``
-The above expressions will return the relevant error messages from a failure, which can be sent out as notification on a Teams channel. Refer to the
-[Copy activity output properties](copy-activity-monitoring.md) article for more details.
+The above expressions will return the relevant error messages from a failure, which can be sent out as notification on a Teams channel. For more information about this topic, see the
+[Copy activity output properties](copy-activity-monitoring.md) article.
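For context, `@activity('DataFlow').error.Message` drills into the `error` object of the activity's run output. The following sketch mirrors that lookup on a hypothetical activity output (the field values are illustrative only):

```python
# Hypothetical failed-activity output; field values are illustrative only,
# and real activity outputs carry many more fields.
activity_output = {
    "pipelineName": "LoadDataPipeline",
    "status": "Failed",
    "error": {
        "errorCode": "2200",
        "message": "Operation on target DataFlow failed.",
    },
}

# Equivalent of the expression @activity('DataFlow').error.Message:
error_message = activity_output["error"]["message"]
print(error_message)
```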
We also encourage you to review the Microsoft Teams supported [notification payload schema](https://adaptivecards.io/explorer/AdaptiveCard.html) and further customize the above template to your needs.

## Next steps
-[How to send email from a pipeline](how-to-send-email.md)
+[How to send email from a pipeline](how-to-send-email.md)
devtest-labs Devtest Lab Add Devtest User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-add-devtest-user.md
To add a member:
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-You can add a DevTest Labs User to a lab by using the following Azure PowerShell script. The script requires the user to be in the Azure Active Directory (Azure AD). For information about adding an external user to Azure AD as a guest, see [Add a new guest user](/active-directory/fundamentals/add-users-azure-active-directory#add-a-new-guest-user). If the user isn't in Azure AD, use the portal procedure instead.
+You can add a DevTest Labs User to a lab by using the following Azure PowerShell script. The script requires the user to be in Azure Active Directory (Azure AD). For information about adding an external user to Azure AD as a guest, see [Add a new guest user](/azure/active-directory/fundamentals/add-users-azure-active-directory#add-a-new-guest-user). If the user isn't in Azure AD, use the portal procedure instead.
In the following script, update the parameter values under the `# Values to change` comment. You can get the `subscriptionId`, `labResourceGroup`, and `labName` values from the lab's main page in the Azure portal.
New-AzRoleAssignment -ObjectId $adObject.Id -RoleDefinitionName 'DevTest Labs Us
## Next steps

- [Customize permissions with custom roles](devtest-lab-grant-user-permissions-to-specific-lab-policies.md)
-- [Automate adding lab users](automate-add-lab-user.md)
+- [Automate adding lab users](automate-add-lab-user.md)
devtest-labs Devtest Lab Reference Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-reference-architecture.md
Title: Enterprise reference architecture
-description: This article provides reference architecture guidance for Azure DevTest Labs in an enterprise.
+description: See a reference architecture and considerations for Azure DevTest Labs in an enterprise.
Previously updated : 06/26/2020 Last updated : 03/14/2022
-# Azure DevTest Labs reference architecture for enterprises
-This article provides reference architecture to help you deploy a solution based on Azure DevTest Labs in an enterprise. It includes the following:
-- On-premises connectivity via Azure ExpressRoute
-- A remote desktop gateway to remotely sign in to virtual machines
-- Connectivity to an artifact repository for private artifacts
-- Other PaaS services that are used in labs
+# DevTest Labs enterprise reference architecture
+
+This article provides a reference architecture for deploying Azure DevTest Labs in an enterprise. The architecture includes the following key elements:
-![Reference architecture diagram](./media/devtest-lab-reference-architecture/reference-architecture.png)
+- On-premises connectivity via Azure ExpressRoute
+- A remote desktop gateway to remotely sign in to virtual machines (VMs)
+- Connectivity to a private artifact repository
+- Other platform-as-a-service (PaaS) components that labs use
## Architecture
-These are the key elements of the reference architecture:
--- **Azure Active Directory (Azure AD)**: DevTest Labs uses the [Azure AD service for identity management](../active-directory/fundamentals/active-directory-whatis.md). Consider these two key aspects when you give users access to an environment based on DevTest Labs:
- - **Resource management**: It provides access to the Azure portal to manage resources (create virtual machines; create environments; start, stop, restart, delete, and apply artifacts; and so on). Resource management is done by using Azure role-based access control (Azure RBAC). You assign roles to users and set resource and access-level permissions.
- - **Virtual machines (network-level)**: In the default configuration, virtual machines use a local admin account. If there's a domain available ([Azure AD Domain Services](../active-directory-domain-services/overview.md), an on-premises domain, or a cloud-based domain), machines can be joined to the domain. Users can then use their domain-based identities to connect to the VMs.
-- **On-premises connectivity**: In our architecture diagram, [ExpressRoute](../expressroute/expressroute-introduction.md) is used. But you can also use a [site-to-site VPN](../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md). Although ExpressRoute isn't required for DevTest Labs, itΓÇÖs commonly used in enterprises. ExpressRoute is required only if you need access to corporate resources. Common scenarios are:
- - You have on-premises data that can't be moved to the cloud.
- - You prefer to join the lab's virtual machines to the on-premises domain.
- - You want to force all network traffic in and out of the cloud environment through an on-premises firewall for security/compliance.
-- **Network security groups**: A common way to restrict traffic to the cloud environment (or within the cloud environment) based on source and destination IP addresses is to use a [network security group](../virtual-network/network-security-groups-overview.md). For example, you want to allow only traffic that originates from the corporate network into the lab's networks.
-- **Remote desktop gateway**: Enterprises typically block outgoing remote desktop connections at the corporate firewall. There are several options to enable connectivity to the cloud-based environment in DevTest Labs, including:
- - Use a [remote desktop gateway](/windows-server/remote/remote-desktop-services/desktop-hosting-logical-architecture), and allow the static IP address of the gateway load balancer.
- - [Direct all incoming RDP traffic](../vpn-gateway/vpn-gateway-forced-tunneling-rm.md) over the ExpressRoute/site-to-site VPN connection. This functionality is a common consideration when enterprises plan a DevTest Labs deployment.
-- **Network services (virtual networks, subnets)**: The [Azure networking](../networking/fundamentals/networking-overview.md) topology is another key element in the DevTest Labs architecture. It controls whether resources from the lab can communicate and have access to on-premises and the internet. Our architecture diagram includes the most common ways that customers use DevTest Labs: All labs connect via [virtual network peering](../virtual-network/virtual-network-peering-overview.md) by using a [hub-spoke model](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke) to the ExpressRoute/site-to-site VPN connection to on-premises. But DevTest Labs uses Azure Virtual Network directly, so there are no restrictions on how you set up the networking infrastructure.
-- **DevTest Labs**: DevTest Labs is a key part of the overall architecture. To learn more about the service, see [About DevTest Labs](devtest-lab-overview.md).
-- **Virtual machines and other resources (SaaS, PaaS, IaaS)**: Virtual machines are a key workload that DevTest Labs supports along with other Azure resources. DevTest Labs makes it easy and fast for an enterprise to provide access to Azure resources (including VMs and other Azure resources). Learn more about access to Azure for [developers](devtest-lab-developer-lab.md) and [testers](devtest-lab-test-env.md).
+
+The following diagram shows a typical DevTest Labs enterprise deployment. This architecture connects several labs in different Azure subscriptions to a company's on-premises network.
+
+![Diagram that shows a reference architecture for an enterprise DevTest Labs deployment.](./media/devtest-lab-reference-architecture/reference-architecture.png)
+
+### DevTest Labs components
+
+DevTest Labs makes it easy and fast for enterprises to provide access to Azure resources. Each lab contains software-as-a-service (SaaS), infrastructure-as-a-service (IaaS), and PaaS resources. Lab users can create and configure VMs, PaaS environments, and VM artifacts.
+
+In the preceding diagram, **Team Lab 1** in **Azure Subscription 1** shows an example of Azure components that labs can access and use. For more information, see [About DevTest Labs](devtest-lab-overview.md).
+
+### Connectivity components
+
+You need on-premises connectivity if your labs must access on-premises corporate resources. Common scenarios are:
+
+- Some on-premises data can't move to the cloud.
+- You want to join lab VMs to an on-premises domain.
+- You want to force all cloud network traffic through an on-premises firewall for security or compliance reasons.
+
+This architecture uses [ExpressRoute](../expressroute/expressroute-introduction.md) for connectivity to the on-premises network. You can also use a [site-to-site VPN](../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md).
+
+On-premises, a [remote desktop gateway](/windows-server/remote/remote-desktop-services/desktop-hosting-logical-architecture) enables outgoing remote desktop protocol (RDP) connections to DevTest Labs. Corporate firewalls usually block outgoing RDP connections. To enable connectivity, you can:
+
+- Use a remote desktop gateway, and allow the static IP address of the gateway load balancer.
+- Use [forced tunneling](../vpn-gateway/vpn-gateway-forced-tunneling-rm.md) to redirect all RDP traffic back over the ExpressRoute or site-to-site VPN connection. Forced tunneling is common functionality for enterprise-scale DevTest Labs deployments.
+
+### Networking components
+
+In this architecture, [Azure Active Directory (Azure AD)](/azure/active-directory/fundamentals/active-directory-whatis) provides identity and access management across all networks. Lab VMs usually have a local administrative account for access. If there's an Azure AD, on-premises, or [Azure AD Domain Services](../active-directory-domain-services/overview.md) domain available, you can join lab VMs to the domain. Users can then use their domain-based identities to connect to the VMs.
+
+[Azure networking topology](../networking/fundamentals/networking-overview.md) controls how lab resources access and communicate with on-premises networks and the internet. This architecture shows a common way that enterprises network DevTest Labs. The labs connect with [peered virtual networks](../virtual-network/virtual-network-peering-overview.md) in a [hub-spoke configuration](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke), through the ExpressRoute or site-to-site VPN connection, to the on-premises network.
+
+Because DevTest Labs uses Azure Virtual Network directly, there are no restrictions on how you set up the networking infrastructure. You can set up a [network security group](../virtual-network/network-security-groups-overview.md) to restrict cloud traffic based on source and destination IP addresses. For example, you can allow only traffic that originates from the corporate network into the lab's networks.
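For example, a network security group rule that admits RDP traffic only from a corporate address range might look like the following fragment. This is an illustrative sketch: the rule name and the `10.0.0.0/8` source range are placeholders, not values from this architecture.

```json
{
  "name": "AllowCorpNetRdpInbound",
  "properties": {
    "priority": 100,
    "direction": "Inbound",
    "access": "Allow",
    "protocol": "Tcp",
    "sourceAddressPrefix": "10.0.0.0/8",
    "sourcePortRange": "*",
    "destinationAddressPrefix": "*",
    "destinationPortRange": "3389"
  }
}
```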
## Scalability considerations
-Although DevTest Labs doesn't have built-in quotas or limits, other Azure resources that are used in the typical operation of a lab do have [subscription-level quotas](../azure-resource-manager/management/azure-subscription-service-limits.md). So, in a typical enterprise deployment, you need multiple Azure subscriptions to cover a large deployment of DevTest Labs. The quotas that enterprises most commonly reach are:
-- **Resource groups**: In the default configuration, DevTest Labs creates a resource group for every new virtual machine, or the user creates an environment by using the service. Subscriptions can contain [up to 980 resource groups](../azure-resource-manager/management/azure-subscription-service-limits.md#subscription-limits). So, that's the limit of virtual machines and environments in a subscription. There are two other configurations that you should consider:
- - **[All virtual machines go to the same resource group](resource-group-control.md)**: Although this setup helps you meet the resource group limit, it affects the resource-type-per-resource-group limit.
- - **Using Shared Public IPs**: All VMs of the same size and region go into the same resource group. This configuration is a "middle ground" between resource group quotas and resource-type-per-resource-group quotas, if the virtual machines are allowed to have public IP addresses.
-- **Resources per resource group per resource type**: The default limit for [resources per resource group per resource type is 800](../azure-resource-manager/management/azure-subscription-service-limits.md#resource-group-limits). When you use the *all VMs go to the same resource group* configuration, users hit this subscription limit much sooner, especially if the VMs have many extra disks.
-- **Storage accounts**: A lab in DevTest Labs comes with a storage account. The Azure quota for [number of storage accounts per region per subscription is 250](../azure-resource-manager/management/azure-subscription-service-limits.md#storage-limits). The maximum number of DevTest Labs in the same region is also 250.
-- **Role assignments**: A role assignment is how you give a user or principal access to a resource (owner, resource, permission level). In Azure, there's a [limit of 2,000 role assignments per subscription](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-rbac-limits). By default, the DevTest Labs service creates a resource group for each VM. The owner is granted *owner* permission for the DevTest Labs VM and *reader* permission to the resource group. In this way, each new VM that you create uses two role assignments in addition to the assignments that are used when you give users permission to the lab.
-- **API reads/writes**: There are various ways to automate Azure and DevTest Labs, including REST APIs, PowerShell, Azure CLI, and Azure SDK. Through automation, you might hit another limit on API requests: Each subscription allows up to [12,000 read requests and 1,200 write requests per hour](../azure-resource-manager/management/request-limits-and-throttling.md). Be aware of this limit when you automate DevTest Labs.
+DevTest Labs has no built-in quotas or limits, but other Azure resources that labs use have [subscription-level quotas](../azure-resource-manager/management/azure-subscription-service-limits.md). In a typical enterprise deployment, you need several Azure subscriptions to cover a large DevTest Labs deployment. Enterprises commonly reach the following quotas:
+
+- Resource groups. DevTest Labs creates a resource group for every new VM, and lab users create environments in resource groups. Subscriptions can contain [up to 980 resource groups](../azure-resource-manager/management/azure-subscription-service-limits.md#subscription-limits), so that's the limit of VMs and environments in a subscription.
+
+ Two strategies can help you stay under resource group limits:
+
+ - [All VMs go in the same resource group](resource-group-control.md). This strategy helps you meet the resource group limit, but it affects the resource-type-per-resource-group limit.
+ - [Use shared public IPs](devtest-lab-shared-ip.md). If VMs are allowed to have public IP addresses, put all VMs of the same size and region into the same resource group. This configuration helps meet both resource group quotas and resource-type-per-resource-group quotas.
+
+- Resources per resource group per resource type. The default limit for [resources per resource group per resource type is 800](../azure-resource-manager/management/azure-subscription-service-limits.md#resource-group-limits). Putting all VMs in the same resource group hits this limit much sooner, especially if the VMs have many extra disks.
+
+- Storage accounts. Every lab in DevTest Labs comes with a storage account. The Azure quota for [number of storage accounts per region per subscription is 250](../azure-resource-manager/management/azure-subscription-service-limits.md#storage-limits). So the maximum number of DevTest Labs in one region is also 250.
+
+- Role assignments. A role assignment gives a user or principal access to a resource. Azure has a limit of [2,000 role assignments per subscription](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-rbac-limits).
+
+ By default, DevTest Labs creates a resource group for each lab VM. The VM creator gets *owner* permission for the VM and *reader* permission to the resource group. So each lab VM uses two role assignments. Granting user permissions to the lab also uses role assignments.
+
+- API reads/writes. You can automate Azure and DevTest Labs by using REST APIs, PowerShell, Azure CLI, and Azure SDK. Each Azure subscription allows up to [12,000 read requests and 1,200 write requests per hour](../azure-resource-manager/management/request-limits-and-throttling.md). By automating DevTest Labs, you might hit the limit on API requests.
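As a rough illustration of how these quotas interact, the following sketch (not an official sizing tool) bounds the number of default-configuration lab VMs per subscription. The per-VM costs of one resource group and two role assignments follow the default behavior described above; the `reserved_role_assignments` parameter is a hypothetical allowance for lab-level user grants.

```python
# Illustrative capacity math for default-configuration DevTest Labs VMs.
# Quota values are the subscription limits cited in this article.
RESOURCE_GROUP_LIMIT = 980      # resource groups per subscription
ROLE_ASSIGNMENT_LIMIT = 2000    # role assignments per subscription

def max_default_lab_vms(reserved_role_assignments: int = 0) -> int:
    """Upper bound on default-configuration lab VMs in one subscription.

    Each VM consumes one resource group and two role assignments;
    reserved_role_assignments models role assignments used for
    lab-level user permissions.
    """
    by_resource_groups = RESOURCE_GROUP_LIMIT
    by_role_assignments = (ROLE_ASSIGNMENT_LIMIT - reserved_role_assignments) // 2
    return min(by_resource_groups, by_role_assignments)

print(max_default_lab_vms())     # resource groups bind first: 980
print(max_default_lab_vms(100))  # role assignments bind first: 950
```

With no reserved assignments, the resource group quota (980) is the binding limit; once lab-level grants consume enough role assignments, the role assignment quota binds instead.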
## Manageability considerations
-DevTest Labs has a great administrative user interface for working with a single lab. But in an enterprise, you likely have multiple Azure subscriptions and many labs. Making changes consistently to all your labs requires scripting/automation. Here are some examples and best management practices for a DevTest Labs deployment:
-- **Changes to lab settings**: A common scenario is to update a specific lab setting across all labs in the deployment. For example, a new VM instance size is available, and all labs must be updated to allow it. It's best to automate these changes by using PowerShell scripts, the CLI, or REST APIs.
-- **Artifact repository personal access token**: Typically, personal access tokens for a Git repository expire in 90 days, one year, or two years. To ensure continuity, it's important to extend the personal access token or create a new one. Then use automation to apply the new personal access token to all the labs.
-- **Restrict changes to a lab setting**: Often, a particular setting must be restricted (such as allowing use of marketplace images). You can use Azure Policy to prevent changes to a resource type. Or you can create a custom role, and grant users that role instead of the *owner* role for the lab. You can do this for most settings in the lab (internal support, lab announcement, allowed VM sizes, and so on).
-- **Require VMs to follow a naming convention**: Managers commonly want to easily identify VMs that are part of a cloud-based development and testing environment. You can do this by using [Azure Policy](https://github.com/Azure/azure-policy/tree/master/samples/TextPatterns/allow-multiple-name-patterns).
+You can use the Azure portal to manage a single DevTest Labs instance at a time, but enterprises might have multiple Azure subscriptions and many labs to administer. Making changes consistently to all labs requires scripting automation.
+
+Here are some examples of using scripting in DevTest Labs deployments:
+
+- Changing lab settings. Update a specific lab setting across all labs by using PowerShell scripts, Azure CLI, or REST APIs. For example, update all labs to allow a new VM instance size.
+
+- Updating artifact repository personal access tokens (PATs). PATs for Git repositories typically expire in 90 days, one year, or two years. To ensure continuity, it's important to extend the PAT. Or, create a new PAT and use automation to apply it to all labs.
-It's important to note that DevTest Labs uses underlying Azure resources that are managed the same way: networking, disks, compute, and so on. For example, Azure Policy applies to virtual machines that are created in a lab. Microsoft Defender for Cloud can report on VM compliance. And the Azure Backup service can provide regular backups for the VMs in the lab.
+- Restricting changes to lab settings. To restrict certain settings, such as allowing marketplace image use, you can use Azure Policy to prevent changes to a resource type. Or you can create a custom role, and grant users that role instead of a built-in lab role. You can restrict changes for most lab settings, such as internal support, lab announcements, and allowed VM sizes.
+
+- Applying a naming convention for VMs. You can use Azure Policy to [specify a naming pattern](https://github.com/Azure/azure-policy/tree/master/samples/TextPatterns/allow-multiple-name-patterns) that helps identify VMs in cloud-based environments.
+
+You manage Azure resources for DevTest Labs the same way as for other purposes. For example, Azure Policy applies to VMs you create in a lab. Microsoft Defender for Cloud can report on lab VM compliance. Azure Backup can provide regular backups for lab VMs.
## Security considerations
-Azure DevTest Labs uses existing resources in Azure (compute, networking, and so on). So it automatically benefits from the security features that are built into the platform. For example, to require incoming remote desktop connections to originate only from the corporate network, simply add a network security group to the virtual network on the remote desktop gateway. The only additional security consideration is the level of permissions that you grant to team members who use the labs on a day-to-day basis. The most common permissions are [*owner* and *user*](devtest-lab-add-devtest-user.md). For more information about these roles, see [Add owners and users in Azure DevTest Labs](devtest-lab-add-devtest-user.md).
+
+DevTest Labs automatically benefits from built-in Azure security features. To require incoming remote desktop connections to originate only from the corporate network, you can add a network security group to the virtual network on the remote desktop gateway.
+
+Another security consideration is the permission level you grant to lab users. Lab owners use Azure role-based access control (Azure RBAC) to assign roles to users and set resource and access-level permissions. The most common DevTest Labs permissions are Owner, Contributor, and User. You can also create and assign [custom roles](devtest-lab-grant-user-permissions-to-specific-lab-policies.md). For more information, see [Add owners and users in Azure DevTest Labs](devtest-lab-add-devtest-user.md).
## Next steps
-See the next article in this series: [Scale up your Azure DevTest Labs infrastructure](devtest-lab-guidance-scale.md).
+See the next article in this series: [Deliver a proof of concept](deliver-proof-concept.md).
devtest-labs Devtest Lab Vm Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-vm-powershell.md
Title: Create a virtual machine in Azure DevTest Labs with Azure PowerShell
-description: Learn how to use Azure DevTest Labs to create and manage virtual machines with Azure PowerShell.
+ Title: Create a lab virtual machine by using Azure PowerShell
+description: Learn how to use Azure PowerShell to create and manage virtual machines in Azure DevTest Labs.
Previously updated : 06/26/2020 Last updated : 03/17/2022
-# Create a virtual machine with DevTest Labs using Azure PowerShell
-This article shows you how to create a virtual machine in Azure DevTest Labs by using Azure PowerShell. You can use PowerShell scripts to automate creation of virtual machines in a lab in Azure DevTest Labs.
+# Create DevTest Labs VMs by using Azure PowerShell
+
+This article shows you how to create an Azure DevTest Labs virtual machine (VM) in a lab by using Azure PowerShell. You can use PowerShell scripts to automate lab VM creation.
## Prerequisites
-Before you begin:
-- [Create a lab](devtest-lab-create-lab.md) if you don't want to use an existing lab to test the script or commands in this article.
-- [Install Azure PowerShell](/powershell/azure/install-az-ps) or use Azure Cloud Shell that's integrated into the Azure portal.
+You need the following prerequisites to work through this article:
-## PowerShell script
-The sample script in this section uses the [Invoke-AzResourceAction](/powershell/module/az.resources/invoke-azresourceaction) cmdlet. This cmdlet takes the lab's resource ID, the name of the action to perform (`createEnvironment`), and the parameters necessary to perform that action. The parameters are in a hash table that contains all the virtual machine description properties.
+- Access to a lab in DevTest Labs. [Create a lab](devtest-lab-create-lab.md), or use an existing lab.
+- Azure PowerShell. [Install Azure PowerShell](/powershell/azure/install-az-ps), or [use Azure Cloud Shell](/azure/cloud-shell/quickstart-powershell) in the Azure portal.
-```powershell
+## PowerShell VM creation script
+
+The PowerShell [Invoke-AzResourceAction](/powershell/module/az.resources/invoke-azresourceaction) cmdlet invokes the `createEnvironment` action with the lab's resource ID and VM parameters. The parameters are in a hash table that contains all the VM properties. The properties are different for each type of VM. To get the properties for the VM type you want, see [Get VM properties](#get-vm-properties).
+
+This sample script creates a Windows Server 2019 Datacenter VM. The sample also includes properties to add a second data disk under `dataDiskParameters`.
+
+ ```powershell
[CmdletBinding()]
Param(
try {
throw "Unable to find lab $LabName resource group $LabResourceGroup in subscription $SubscriptionId." }
- #For this example, we are getting the first allowed subnet in the first virtual network
- # for the lab.
- #If a specific virtual network is needed use | to find it.
- #ie $virtualNetwork = @(Get-AzResource -ResourceType 'Microsoft.DevTestLab/labs/virtualnetworks' -ResourceName $LabName -ResourceGroupName $lab.ResourceGroupName -ApiVersion $API_VERSION) | Where-Object Name -EQ "SpecificVNetName"
- $virtualNetwork = @(Get-AzResource -ResourceType 'Microsoft.DevTestLab/labs/virtualnetworks' -ResourceName $LabName -ResourceGroupName $lab.ResourceGroupName -ApiVersion $API_VERSION)[0]
+ #The preceding command puts the VM in the first allowed subnet in the first virtual network for the lab.
+ #If you need to use a specific virtual network, use | to find the network. For example:
+ #$virtualNetwork = @(Get-AzResource -ResourceType 'Microsoft.DevTestLab/labs/virtualnetworks' -ResourceName $LabName -ResourceGroupName $lab.ResourceGroupName -ApiVersion $API_VERSION) | Where-Object Name -EQ "SpecificVNetName"
+ $labSubnetName = $virtualNetwork.properties.allowedSubnets[0].labSubnetName
- #Prepare all the properties needed for the createEnvironment
- # call used to create the new VM.
- # The properties will be slightly different depending on the base of the vm
- # (a marketplace image, custom image or formula).
- # The setup of the virtual network to be used may also affect the properties.
- # This sample includes the properties to add an additional disk under dataDiskParameters
+ #Prepare all the properties needed for the createEnvironment call.
+ # The properties are slightly different depending on the type of VM base.
+ # The virtual network setup might also affect the properties.
$parameters = @{ "name" = $NewVmName;
try {
"properties" = @{ "labVirtualNetworkId" = $virtualNetwork.ResourceId; "labSubnetName" = $labSubnetName;
- "notes" = "Windows Server 2016 Datacenter";
+ "notes" = "Windows Server 2019 Datacenter";
"osType" = "windows"
- "expirationDate" = "2019-12-01"
+ "expirationDate" = "2022-12-01"
"galleryImageReference" = @{ "offer" = "WindowsServer"; "publisher" = "MicrosoftWindowsServer";
- "sku" = "2016-Datacenter";
+ "sku" = "2019-Datacenter";
"osType" = "Windows"; "version" = "latest" };
try {
} }
- #The following line is the same as invoking
- # https://azure.github.io/projects/apis/#!/Labs/Labs_CreateEnvironment rest api
+ #The following line has the same effect as invoking the
+ # https://azure.github.io/projects/apis/#!/Labs/Labs_CreateEnvironment REST API
Invoke-AzResourceAction -ResourceId $lab.ResourceId -Action 'createEnvironment' -Parameters $parameters -ApiVersion $API_VERSION -Force -Verbose }
finally {
}
```
-The properties for the virtual machine in the above script allow us to create a virtual machine with Windows Server 2016 DataCenter as the OS. For each type of virtual machine, these properties will be slightly different. The [Define virtual machine](#define-virtual-machine) section shows you how to determine which properties to use in this script.
-
-The following command provides an example of running the script saved in a file name: Create-LabVirtualMachine.ps1.
+Save the preceding script in a file named *Create-LabVirtualMachine.ps1*. Run the script by using the following command. Enter your own values for the placeholders.
```powershell
- PS> .\Create-LabVirtualMachine.ps1 -ResourceGroupName 'MyLabResourceGroup' -LabName 'MyLab' -userName 'AdminUser' -password 'Password1!' -VMName 'MyLabVM'
+.\Create-LabVirtualMachine.ps1 -ResourceGroupName '<lab resource group name>' -LabName '<lab name>' -userName '<VM administrative username>' -password '<VM admin password>' -VMName '<VM name to create>'
```
-## Define virtual machine
-This section shows you how to get the properties that are specific to a type of virtual machine that you want to create.
-
-### Use Azure portal
-You can generate an Azure Resource Manager template when creating a VM in the Azure portal. You don't need to complete the process of creating the VM. You only follow the steps until you see the template. This is the best way to get the necessary JSON description if you do not already have a lab VM created.
-
-1. Navigate to the [Azure portal](https://portal.azure.com).
-2. Select **All Services** on the left navigational menu.
-3. Search for and select **DevTest Labs** from the list of services.
-4. On the **DevTest Labs** page, select your lab in the list of labs.
-5. On the home page for your lab, select **+ Add** on the toolbar.
-6. Select a **base image** for the VM.
-7. Select **automation options** at the bottom of the page above the **Submit** button.
-8. You see the **Azure Resource Manager template** for creating the virtual machine.
-9. The JSON segment in the **resources** section has the definition for the image type you selected earlier.
-
- ```json
- {
- "apiVersion": "2018-10-15-preview",
- "type": "Microsoft.DevTestLab/labs/virtualmachines",
- "name": "[variables('vmName')]",
- "location": "[resourceGroup().location]",
- "properties": {
- "labVirtualNetworkId": "[variables('labVirtualNetworkId')]",
- "notes": "Windows Server 2019 Datacenter",
- "galleryImageReference": {
- "offer": "WindowsServer",
- "publisher": "MicrosoftWindowsServer",
- "sku": "2019-Datacenter",
- "osType": "Windows",
- "version": "latest"
- },
- "size": "[parameters('size')]",
- "userName": "[parameters('userName')]",
- "password": "[parameters('password')]",
- "isAuthenticationWithSshKey": false,
- "labSubnetName": "[variables('labSubnetName')]",
- "disallowPublicIpAddress": true,
- "storageType": "Standard",
- "allowClaim": false,
- "networkInterface": {
- "sharedPublicIpAddressConfiguration": {
- "inboundNatRules": [
- {
- "transportProtocol": "tcp",
- "backendPort": 3389
- }
- ]
+## Get VM properties
+
+This section shows how to get the specific properties for the type of VM you want to create. You can get the properties from an Azure Resource Manager (ARM) template in the Azure portal, or by calling the DevTest Labs Azure REST API.
+
+### Use the Azure portal to get VM properties
+
+Creating a VM in the Azure portal generates an ARM template that shows the VM's properties. Once you choose a VM base, you can see the ARM template and get the properties without actually creating the VM. This method is the easiest way to get the JSON VM description if you don't already have a lab VM of that type.
+
+1. In the [Azure portal](https://portal.azure.com), on the **Overview** page for your lab, select **Add** on the top toolbar.
+1. On the **Choose a base** page, select the VM type you want. Depending on lab settings, the VM base can be an Azure Marketplace image, a custom image, a formula, or an environment.
+1. On the **Create lab resource** page, optionally [add artifacts](add-artifact-vm.md) and configure any other settings you want on the **Basic settings** and **Advanced settings** tabs.
+1. On the **Advanced settings** tab, select **View ARM template** at the bottom of the page.
+1. On the **View Azure Resource Manager template** page, review the JSON template for creating the VM. The **resources** section has the VM properties.
+
+ For example, the following `resources` section has the properties for a Windows Server 2022 Datacenter VM:
+ ```json
+ "resources": [
+ {
+ "apiVersion": "2018-10-15-preview",
+ "type": "Microsoft.DevTestLab/labs/virtualmachines",
+ "name": "[variables('vmName')]",
+ "location": "[resourceGroup().location]",
+ "properties": {
+ "labVirtualNetworkId": "[variables('labVirtualNetworkId')]",
+ "notes": "Windows Server 2022 Datacenter: Azure Edition Core",
+ "galleryImageReference": {
+ "offer": "WindowsServer",
+ "publisher": "MicrosoftWindowsServer",
+ "sku": "2022-datacenter-azure-edition-core",
+ "osType": "Windows",
+ "version": "latest"
+ },
+ "size": "[parameters('size')]",
+ "userName": "[parameters('userName')]",
+ "password": "[parameters('password')]",
+ "isAuthenticationWithSshKey": false,
+ "labSubnetName": "[variables('labSubnetName')]",
+ "disallowPublicIpAddress": true,
+ "storageType": "Standard",
+ "allowClaim": false,
+ "networkInterface": {
+ "sharedPublicIpAddressConfiguration": {
+ "inboundNatRules": [
+ {
+ "transportProtocol": "tcp",
+ "backendPort": 3389
+ }
+ ]
+ }
+ }
+ }
}
- }
- }
- }
- ```
+ ],
+ ```
+
+1. Copy and save the template to use in future PowerShell automation, and transfer the properties to the PowerShell VM creation script.
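As a sketch of that transfer step (Python used here purely for illustration; the embedded template is a trimmed placeholder matching the structure shown above), you can pull the VM `properties` block out of a saved template programmatically instead of copying it by hand:

```python
import json

# Sketch: extract the VM "properties" block from a saved ARM template
# (structure as generated by the portal) for reuse in creation scripts.
# The embedded template here is a trimmed placeholder for illustration.
template_text = """
{
  "resources": [
    {
      "type": "Microsoft.DevTestLab/labs/virtualmachines",
      "properties": {
        "notes": "Windows Server 2022 Datacenter: Azure Edition Core",
        "osType": "Windows",
        "storageType": "Standard"
      }
    }
  ]
}
"""
template = json.loads(template_text)

# Keep only DevTest Labs VM resources, then take their properties.
vm_resources = [r for r in template["resources"]
                if r["type"] == "Microsoft.DevTestLab/labs/virtualmachines"]
vm_properties = vm_resources[0]["properties"]
print(vm_properties["notes"])  # Windows Server 2022 Datacenter: Azure Edition Core
```

Each key in the extracted `properties` object maps to an entry in the `$parameters` hash table of the PowerShell creation script.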
+
+
+
+### Use the DevTest Labs Azure REST API to get VM properties
-In this example, you see how to get a definition of an Azure Marketplace image. You can get a definition of a custom image, a formula, or an environment in the same way. Add any artifacts needed for the virtual machine, and set any advanced settings required. Provide values for the required fields, and any optional fields, before selecting the **Automation options** button.
+You can also call the DevTest Labs REST API to get the properties of existing lab VMs. You can use those properties to create more lab VMs of the same types.
-### Use Azure REST API
-The following procedure gives you steps to get the properties of an image by using the REST API. These steps work only for an existing VM in a lab.
+1. On the [Virtual Machines - list](/rest/api/dtl/virtualmachines/list) page, select **Try it** above the first code block.
+1. On the **REST API Try It** page:
+ - Under **labName**, enter your lab name.
+ - Under **labResourceGroup**, enter the lab resource group name.
+ - Under **subscriptionId**, select the lab's Azure subscription.
+1. Select **Run**.
+1. In the **Response** section under **Body**, view the properties for all the existing VMs in the lab.
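Under the covers, the Try It page issues a GET request against the DevTest Labs `virtualmachines` list endpoint. The following sketch shows the shape of that request URL; the names are placeholders, and the `api-version` value is an assumption, so check the REST API reference for the current version.

```python
# Sketch of the list-VMs request URL; all values below are placeholders,
# and the api-version is an assumption -- verify it against the REST docs.
subscription_id = "00000000-0000-0000-0000-000000000000"
resource_group = "MyLabResourceGroup"
lab_name = "MyLab"

url = (
    "https://management.azure.com"
    f"/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}"
    "/providers/Microsoft.DevTestLab"
    f"/labs/{lab_name}/virtualmachines"
    "?api-version=2018-09-15"
)
print(url)
```

The response body of this GET is the same JSON list of VM properties that the Try It page displays.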
-1. Navigate to the [Virtual Machines - list](/rest/api/dtl/virtualmachines/list) page, select **Try it** button.
-2. Select your **Azure subscription**.
-3. Enter the **resource group for the lab**.
-4. Enter the **name of the lab**.
-5. Select **Run**.
-6. You see the **properties for the image** based on which the VM was created.
+## Set VM expiration date
-## Set expiration date
-In scenarios such as training, demos and trials, you may want to create virtual machines and delete them automatically after a fixed duration so that you don't incur unnecessary costs. You can set an expiration date for a VM while creating it using PowerShell as shown in the example [PowerShell script](#powershell-script) section.
+In training, demo, and trial scenarios, you can avoid unnecessary costs by deleting VMs automatically on a certain date. You can set the VM `expirationDate` property when you create a VM. The PowerShell VM creation script earlier in this article sets an expiration date under `properties`:
-Here is a sample PowerShell script that sets expiration date for all existing VMs in a lab:
+```json
+ "expirationDate" = "2022-12-01"
+```
+
+You can also set expiration dates on existing VMs by using PowerShell. The following PowerShell script sets an expiration date for an existing lab VM if it doesn't already have an expiration date:
```powershell
-# Values to change
-$subscriptionId = '<Enter the subscription Id that contains lab>'
-$labResourceGroup = '<Enter the lab resource group>'
-$labName = '<Enter the lab name>'
-$VmName = '<Enter the VmName>'
-$expirationDate = '<Enter the expiration date e.g. 2019-12-16>'
+# Enter your own values:
+$subscriptionId = '<Lab subscription Id>'
+$labResourceGroup = '<Lab resource group>'
+$labName = '<Lab name>'
+$VmName = '<VM name>'
+$expirationDate = '<Expiration date, such as 2022-12-16>'
-# Log into your Azure account
-Login-AzureRmAccount
+# Sign in to your Azure account
+Connect-AzAccount
-Select-AzureRmSubscription -SubscriptionId $subscriptionId
+Select-AzSubscription -SubscriptionId $subscriptionId
$VmResourceId = "subscriptions/$subscriptionId/resourcegroups/$labResourceGroup/providers/microsoft.devtestlab/labs/$labName/virtualmachines/$VmName"
-$vm = Get-AzureRmResource -ResourceId $VmResourceId -ExpandProperties
+$vm = Get-AzResource -ResourceId $VmResourceId -ExpandProperties
-# Get all the Vm properties
+# Get the Vm properties
$VmProperties = $vm.Properties

# Set the expirationDate property
If ($VmProperties.expirationDate -eq $null) {
- $VmProperties | Add-Member -MemberType NoteProperty -Name expirationDate -Value $expirationDate
+ $VmProperties | Add-Member -MemberType NoteProperty -Name expirationDate -Value $expirationDate -Force
}
Else {
    $VmProperties.expirationDate = $expirationDate
}
-Set-AzureRmResource -ResourceId $VmResourceId -Properties $VmProperties -Force
+Set-AzResource -ResourceId $VmResourceId -Properties $VmProperties -Force
```

## Next steps
-See the following content: [Azure PowerShell documentation for Azure DevTest Labs](/powershell/module/az.devtestlabs/)
+
+[Az.DevTestLabs PowerShell reference](/powershell/module/az.devtestlabs/)
devtest-labs Encrypt Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/encrypt-storage.md
Title: Encrypt an Azure storage account used by a lab
-description: Learn how to configure encryption of an Azure storage used by a lab in Azure DevTest Labs
+ Title: Manage storage accounts for labs
+description: Learn about DevTest Labs storage accounts, encryption, customer-managed keys, and setting expiration dates for artifact results storage.
Previously updated : 07/29/2020 Last updated : 03/15/2022
-# Encrypt Azure storage used by a lab in Azure DevTest Labs
-Every lab created in Azure DevTest Labs is created with an associated Azure storage account. The storage account is used for the following purposes:
+# Manage Azure DevTest Labs storage accounts
-- Storing [formula](devtest-lab-manage-formulas.md) documents that can be used to create virtual machines.
-- Storing artifact results that include deployment and extension logs generated from applying artifacts.
-- [Uploading virtual hard disks (VHDs) to create custom images in the lab.](devtest-lab-create-template.md)
-- Caching frequently used [artifacts](add-artifact-vm.md) and [Azure Resource Manager templates](devtest-lab-create-environment-from-arm.md) for faster retrieval during virtual machine/environment creation.
+This article explains how to view and manage the Azure Storage accounts associated with Azure DevTest Labs.
-> [!NOTE]
-> The information above is critical for the lab to operate. It's stored for the life of the lab (and lab resources) unless explicitly deleted. Manually deleting these resources can lead to errors in creating lab VMs and/or formulas becoming corrupt for future use.
+## View storage account contents
-## Locate the storage account and view its contents
+DevTest Labs automatically creates an Azure Storage account for every lab it creates. To see a lab's storage account and the information it holds:
-1. On the home page for the lab, select the **resource group** on the **Overview** page. You should see the **Resource group** page for the resource group that contains the lab.
+1. On the lab's **Overview** page, select the **Resource group**.
- :::image type="content" source="./media/encrypt-storage/overview-resource-group-link.png" alt-text="Select resource group on the Overview page":::
-1. Select the Azure storage account of the lab. The naming convention for the lab storage account is: `a<labNameWithoutInvalidCharacters><4-digit number>`. For example, if the lab name is `contosolab`, the storage account name could be `acontosolab7576`.
+ :::image type="content" source="./media/encrypt-storage/overview-resource-group-link.png" alt-text="Screenshot that shows selecting the resource group on the lab Overview page.":::
- :::image type="content" source="./media/encrypt-storage/select-storage-account.png" alt-text="Select storage account in the resource group of the lab":::
-3. On the **Storage account** page, select **Storage Explorer (preview)** on the left menu, and then select **BLOB CONTAINERS** to find relevant lab-related content.
+1. On the resource group's **Overview** page, select the lab's storage account. The naming convention for the lab storage account is: `a<labName><4-digit number>`. For example, if the lab name is `contosolab`, the storage account name could be `acontosolab5237`.
- :::image type="content" source="./media/encrypt-storage/storage-explorer.png" alt-text="Storage Explorer (Preview)" lightbox="./media/encrypt-storage/storage-explorer.png":::
+ :::image type="content" source="./media/encrypt-storage/select-storage-account.png" alt-text="Screenshot that shows selecting the storage account in the lab's resource group.":::
-## Encrypt the lab storage account
-Azure Storage automatically encrypts your data when it's persisted to the cloud. Azure Storage encryption protects your data and helps you to meet your organizational security and compliance commitments. For more information, see [Azure Storage encryption for data at rest](../storage/common/storage-service-encryption.md).
+3. On the **Storage account** page, select **Storage browser (preview)** on the left menu, and then select **Blob containers** to see relevant lab-related content.
-Data in the lab storage account is encrypted with a **Microsoft-managed key**. You can rely on Microsoft-managed keys for the encryption of your data, or you can manage encryption with your own keys. If you choose to manage encryption with your own keys for the lab's storage account, you can specify a **customer-managed key** with Azure Key Vault to use for encrypting/decrypting data in Blob storage and in Azure Files. For more information about customer-managed keys, see [Use customer-managed keys with Azure Key Vault to manage Azure Storage encryption](../storage/common/customer-managed-keys-overview.md).
+ :::image type="content" source="./media/encrypt-storage/storage-explorer.png" alt-text="Screenshot that shows the Storage browser (preview).":::
-To learn how to configure customer-managed keys for Azure Storage encryption, see the following articles:
+## Manage Azure Storage lifecycle
-- [Azure portal](../storage/common/customer-managed-keys-configure-key-vault.md)
-- [Azure PowerShell](../storage/common/customer-managed-keys-configure-key-vault.md)
-- [Azure CLI](../storage/common/customer-managed-keys-configure-key-vault.md)
+The lab storage account stores:
+- [Formula documents](devtest-lab-manage-formulas.md) to use for creating lab virtual machines (VMs).
+- [Uploaded virtual hard disks (VHDs)](devtest-lab-create-template.md) to use for creating custom VM images.
+- [Artifact](add-artifact-vm.md) and [Azure Resource Manager (ARM) template](devtest-lab-create-environment-from-arm.md) caches, for faster retrieval during VM and environment creation.
+- Artifact results, which are deployment and extension logs generated from applying artifacts.
-## Manage the Azure Blob storage life cycle
-As mentioned, the information stored in the Lab's storage account is critical for the lab to operate without any errors. Unless explicitly deleted, this data will continue to remain in the lab's storage account for the life of the lab or the life of specific lab virtual machines, depending on the type of data.
+The information in the lab storage account persists for the life of the lab and its resources, unless explicitly deleted. Most of this information is critical for the lab to operate. Manually deleting storage account information can cause data corruption or VM creation errors.
-### Uploaded VHDs
-These VHDs are used to create custom images. Removing them will make it no longer possible to create custom images from these VHDs.
+- Removing uploaded VHDs makes it no longer possible to create custom images from these VHDs.
+- Deleting formula documents can lead to errors when creating VMs from formulas, updating formulas, or creating new formulas.
+- DevTest Labs refreshes the artifact and ARM template caches whenever the lab connects to the artifact or template repositories. If you remove the caches manually, DevTest Labs recreates the caches the next time the lab connects to the repositories.
-### Artifacts Cache
-These caches will be re-created any time artifacts are applied. They'll be refreshed with the latest content from the respective referenced repositories. So, if you delete this information to save Storage-related expenses, the relief will be temporary.
+### Set expiration for artifact results
-### Azure Resource Manager template Cache
-These caches will be re-created any time Azure Resource Manager-based template repositories are connected and spun up in the lab. They'll be refreshed with the latest content from the respective referenced repositories. So, if you delete this information to save Storage-related expenses, the relief will be temporary.
+The artifact results size can increase over time as artifacts are applied. You can set an expiration rule for artifact results to regularly delete older results from the storage account. This practice reduces storage account size and helps control costs.
-### Formulas
-These documents are used to support the option to both create formulas from existing VMs, and creating VMs from formulas. Deleting these formula documents may lead to errors while doing the following operations:
-
-- Creating a formula from an existing lab VM
-- Creating or updating formulas
-- Creating a VM from a formula.
-
-### Artifact results
-As artifacts are applied, the size of the respective artifact results can increase over time depending on the number and type of artifacts being run on lab VMs. So, as a lab owner, you may want to control the lifecycle of such documents. For more information, see [Manage the Azure Blob storage lifecycle](../storage/blobs/lifecycle-management-overview.md).
-
-> [!IMPORTANT]
-> We recommend that you do this step to reduce expenses associated with the Azure Storage account.
-
-For example, the following rule is used to set a 90-day expiration rule specifically for artifact results. It ensures that older artifact results are recycled from the storage account on a regular cadence.
+The following rule sets a 90-day expiration specifically for artifact results:
```json {
For example, the following rule is used to set a 90-day expiration rule specific
} ```
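The JSON body of the rule isn't shown in this change record. As an illustrative sketch, an Azure Storage lifecycle management policy that deletes artifact-result blobs 90 days after their last modification could look like the following. The `prefixMatch` value here is a hypothetical container prefix; substitute the prefix under which your lab actually stores artifact results.

```json
{
  "rules": [
    {
      "enabled": true,
      "name": "expire-artifact-results",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": {
            "delete": { "daysAfterModificationGreaterThan": 90 }
          }
        },
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "artifacts" ]
        }
      }
    }
  ]
}
```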
+## Storage encryption and customer-managed keys
+
+Azure Storage automatically encrypts all data in the lab storage account. Azure Storage encryption protects your data and helps meet organizational security and compliance commitments. For more information, see [Azure Storage encryption for data at rest](../storage/common/storage-service-encryption.md).
+
+Azure Storage encrypts lab data with a Microsoft-managed key. Optionally, you can manage encryption with your own keys. If you choose to manage lab storage account encryption with your own key, you can use Azure Key Vault to specify a customer-managed key for encrypting and decrypting data.
+
+For more information and instructions on configuring customer-managed keys for Azure Storage encryption, see:
+
+- [Use customer-managed keys with Azure Key Vault to manage Azure Storage encryption](/azure/storage/common/customer-managed-keys-overview)
+- [Configure encryption with customer-managed keys stored in Azure Key Vault](/azure/storage/common/customer-managed-keys-configure-key-vault)
+ ## Next steps
-To learn how to configure customer-managed keys for Azure Storage encryption, see the following articles:
-- [Azure portal](../storage/common/customer-managed-keys-configure-key-vault.md)
-- [Azure PowerShell](../storage/common/customer-managed-keys-configure-key-vault.md)
-- [Azure CLI](../storage/common/customer-managed-keys-configure-key-vault.md)
+For more information about managing Azure Storage, see [Optimize costs by automatically managing the data lifecycle](../storage/blobs/lifecycle-management-overview.md).
+
devtest-labs Start Machines Use Automation Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/start-machines-use-automation-runbooks.md
Title: Start machines using Automation runbooks
-description: Learn how to start virtual machines in a lab in Azure DevTest Labs by using Azure Automation runbooks.
+ Title: Define VM start order with Azure Automation
+description: Learn how to start virtual machines in a specific order by using Azure Automation runbooks in Azure DevTest Labs.
Previously updated : 06/26/2020 Last updated : 03/17/2022
-# Start virtual machines in a lab in order by using Azure Automation runbooks
-The [autostart](devtest-lab-set-lab-policy.md#set-autostart) feature of DevTest Labs allows you to configure VMs to start automatically at a specified time. However, this feature doesn't support machines to start in a specific order. There are several scenarios where this type of automation would be useful. One scenario is where a Jumpbox VM in a lab is the access point to the other VMs. The Jumpbox VM must start before the other VMs. This article shows you how to set up an Azure Automation account with a PowerShell runbook that executes a script. The script uses tags on VMs in the lab to allow you to control the startup order without having to change the script.
+# Define the startup order for DevTest Lab VMs with Azure Automation
-## Setup
-In this example, VMs in the lab need to have the tag **StartupOrder** added with the appropriate value, such as 0, 1, 2. Designate any machine that doesn't need starting as -1.
+This article explains how to start up DevTest Labs virtual machines (VMs) in a specific order by using a PowerShell runbook in Azure Automation. The PowerShell script uses tags on lab VMs, so you can change the startup order without having to change the script.
-## Create an Azure Automation account
-Create an Azure Automation account by following instructions in [this article](../automation/automation-create-standalone-account.md). Choose the **Run As Accounts** option when creating the account. Once the automation account is created, open the **Modules** page, and select **Update Azure Modules** on the menu bar. The default modules are several versions old and without the update the script may not function.
+The DevTest Labs [autostart](devtest-lab-set-lab-policy.md#set-autostart) feature can configure lab VMs to start automatically at a specified time. However, sometimes you might want lab VMs to start in a specific sequence. For example, if a jumpbox VM in a lab is the access point to the other VMs, the jumpbox VM must start before the other VMs.
-## Add a runbook
-Now, to add a runbook to the automation account, select **Runbooks** on the left menu. Select **Add a runbook** on the menu, and follow instructions to [create a PowerShell runbook](../automation/learn/powershell-runbook-managed-identity.md).
+## Prerequisites
-## PowerShell script
-The following script takes the subscription name, the lab name as parameters. The flow of the script is to get all the VMs in the lab, and then parse out the tag information to create a list of the VM names and their startup order. The script walks through the VMs in order and starts the VMs. If there are multiple VMs in a specific order number, they start asynchronously using PowerShell jobs. For those VMs that don't have a tag, set startup value to be the last (10). Those machines start last, by default. If you don't want the VM to be auto started, set the tag value to 11, and the script will ignore the VM.
+- [Create and apply a tag](devtest-lab-add-tag.md) called **StartupOrder** to all lab VMs with an appropriate startup value, 0 through 10. Designate any machines that don't need starting as -1.
+
+- Create an Azure Automation account by following instructions in [Create a standalone Azure Automation account](/azure/automation/automation-create-standalone-account). Choose the **Run As Accounts** option when you create the account.
+
+## Create the PowerShell runbook
+
+1. On the **Overview** page for the Automation Account, select **Runbooks** from the left menu.
+1. On the **Runbooks** page, select **Create a runbook**.
+1. Follow the instructions in [Create an Automation PowerShell runbook using managed identity](../automation/learn/powershell-runbook-managed-identity.md) to create a PowerShell runbook. Populate the runbook with the following PowerShell script.
+
+## Prepare the PowerShell script
+
+The following script takes the subscription name and the lab name as parameters. The script gets all the VMs in the lab and parses their tag information to create a list of VM names and their startup order. The script walks through the list in order and starts the VMs.
+
+If there are multiple VMs in a specific order number, those VMs start asynchronously using PowerShell jobs. VMs that don't have a tag have their startup value set to 10 and start last by default. The script ignores any VMs that have tag values other than 0 through 10.
```powershell #Requires -Version 3.0
$dtLab = Find-AzResource -ResourceType 'Microsoft.DevTestLab/labs' -ResourceName
$dtlAllVms = New-Object System.Collections.ArrayList $AllVMs = Get-AzResource -ResourceId "$($dtLab.ResourceId)/virtualmachines" -ApiVersion 2016-05-15
-# Get the StartupOrder tag, if missing set to be run last (10)
+# Get the StartupOrder tag. If missing, set to start up last (10).
ForEach ($vm in $AllVMs) { if ($vm.Tags) { if ($vm.Tags['StartupOrder']) {
While ($current -le 10) {
} ```
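The ordering logic the script applies can be illustrated with a self-contained sketch that uses sample tag data instead of live Az cmdlet calls. The VM names and tag values below are hypothetical, for illustration only:

```powershell
# Hypothetical sample data standing in for the VMs returned by Get-AzResource.
$sampleVms = @(
    @{ Name = 'jumpbox'; Tags = @{ StartupOrder = '0' } },
    @{ Name = 'web1';    Tags = @{ StartupOrder = '1' } },
    @{ Name = 'web2';    Tags = @{ StartupOrder = '1' } },
    @{ Name = 'legacy';  Tags = @{} }   # untagged: defaults to order 10, starts last
)

# Group VM names by StartupOrder, ignoring values outside 0 through 10.
$byOrder = @{}
foreach ($vm in $sampleVms) {
    $order = 10
    if ($vm.Tags['StartupOrder']) { $order = [int]$vm.Tags['StartupOrder'] }
    if ($order -lt 0 -or $order -gt 10) { continue }
    if (-not $byOrder.ContainsKey($order)) { $byOrder[$order] = @() }
    $byOrder[$order] += $vm.Name
}

# Walk the groups in ascending order. In the real runbook, VMs within a
# group are started asynchronously (for example, with Start-Job) and the
# group completes before the next group begins.
foreach ($order in ($byOrder.Keys | Sort-Object)) {
    Write-Output "Order $order -> $($byOrder[$order] -join ', ')"
}
```

Because each group finishes before the next begins, a jumpbox tagged with order 0 is running before the VMs that depend on it start.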
-## Create a schedule
-To have this script execute daily, [create a schedule](../automation/shared-resources/schedules.md#create-a-schedule) in the automation account. Once the schedule is created, [link it to the runbook](../automation/shared-resources/schedules.md#link-a-schedule-to-a-runbook).
+## Run the script
+
+- To run this script daily, [create a schedule](../automation/shared-resources/schedules.md#create-a-schedule) in the Automation Account, and [link the schedule to the runbook](../automation/shared-resources/schedules.md#link-a-schedule-to-a-runbook).
-In a large-scale situation that has multiple subscriptions with multiple labs, store the parameter information in a file for different labs. Pass the file to the script instead of passing the individual parameters. The script must be modified, but the core execution is the same. While this sample uses the Azure Automation to execute the PowerShell script, there are other options like using a task in a Build/Release pipeline.
+- In an enterprise scenario that has several subscriptions with multiple labs, you can store the parameter information for different labs and subscriptions in a file. Pass the file to the script instead of passing the individual parameters.
+
+- This example uses Azure Automation to run the PowerShell script, but you can also use other options, like a [build/release pipeline](use-devtest-labs-build-release-pipelines.md).
## Next steps
-See the following article to learn more about Azure Automation: [An introduction to Azure Automation](../automation/automation-intro.md).
+
+- [What is Azure Automation?](/azure/automation/automation-intro)
+- [Start up lab virtual machines automatically](devtest-lab-auto-startup-vm.md)
+- [Use command-line tools to start and stop Azure DevTest Labs virtual machines](use-command-line-start-stop-virtual-machines.md)
event-grid Advanced Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/advanced-filtering.md
Event Grid allows specifying filters on any property in the json payload. These
* `Value` - The reference value against which the filter is run (or) `Values` - The set of reference values against which the filter is run. > [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge ](transition.md).
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
## JSON syntax
event-grid Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/api.md
This article describes the REST APIs of Azure Event Grid on IoT Edge > [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge ](transition.md).
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
## Common API behavior
event-grid Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/concepts.md
This article describes the main concepts in Azure Event Grid. > [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge ](transition.md).
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
## Events
event-grid Configure Api Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/configure-api-protocol.md
This guide gives examples of the possible protocol configurations of an Event Gr
See [Security and authentication](security-authentication.md) guide for all the possible configurations. > [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge ](transition.md).
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
## Expose HTTPS to IoT Modules on the same edge network
event-grid Configure Client Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/configure-client-auth.md
This guide gives examples of the possible client authentication configurations f
See [Security and authentication](security-authentication.md) guide for all the possible configurations. > [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge ](transition.md).
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
event-grid Configure Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/configure-event-grid.md
Event Grid provides many configurations that can be modified per environment. The following section is a reference to all the available options and their defaults. > [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge ](transition.md).
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
event-grid Configure Identity Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/configure-identity-auth.md
This article shows how to configure identity for Event Grid on Edge. By default,
See [Security and authentication](security-authentication.md) guide for all the possible configurations. > [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge ](transition.md).
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
event-grid Configure Webhook Subscriber Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/configure-webhook-subscriber-auth.md
This guide gives examples of the possible webhook subscriber configurations for an Event Grid module. By default, only HTTPS endpoints are accepted for webhook subscribers. The Event Grid module rejects the request if the subscriber presents a self-signed certificate.
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge ](transition.md).
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
event-grid Delivery Retry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/delivery-retry.md
Event Grid provides durable delivery. It tries to deliver each message at least once for each matching subscription immediately. If a subscriber's endpoint doesn't acknowledge receipt of an event or if there is a failure, Event Grid retries delivery based on a fixed **retry schedule** and **retry policy**. By default, the Event Grid module delivers one event at a time to the subscriber. The payload is however an array with a single event. You can have the module deliver more than one event at a time by enabling the output batching feature. For details about this feature, see [output batching](delivery-output-batching.md). > [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge ](transition.md).
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
> [!IMPORTANT]
event-grid Event Handlers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/event-handlers.md
An event handler is the place where the event is sent for further action or processing. With the Event Grid on Edge module, the event handler can be on the same edge device, another device, or in the cloud. You can use any WebHook to handle events, or send events to one of the native handlers like Azure Event Grid.
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge ](transition.md).
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
event-grid Event Schemas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/event-schemas.md
Subscribers can also configure the schema in which they want the events delivered.
Currently subscriber delivery schema has to match its topic's input schema. > [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge ](transition.md).
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
event-grid Forward Events Event Grid Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/forward-events-event-grid-cloud.md
This article walks through all the steps needed to forward edge events to Event Grid in the cloud.
To complete this tutorial, you need have an understanding of Event Grid concepts on [edge](concepts.md) and [Azure](../concepts.md). For additional destination types, see [event handlers](event-handlers.md). > [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge ](transition.md).
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
event-grid Forward Events Iothub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/forward-events-iothub.md
This article walks through all the steps needed to forward Event Grid events to IoT Hub.
> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge ](transition.md).
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
To complete this tutorial, you need to understand the following concepts:
event-grid Monitor Topics Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/monitor-topics-subscriptions.md
Event Grid on Edge exposes a number of metrics for topics and event subscriptions in the [Prometheus exposition format](https://prometheus.io/docs/instrumenting/exposition_formats/). This article describes the available metrics and how to enable them. > [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge ](transition.md).
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
event-grid Persist State Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/persist-state-windows.md
By default only metadata is persisted and events are still stored in-memory for
This article provides the steps needed to deploy Event Grid module with persistence in Windows deployments. > [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge ](transition.md).
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
event-grid Pub Sub Events Webhook Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/pub-sub-events-webhook-cloud.md
This article walks through all the steps needed to publish and subscribe to events.
See [Event Grid Concepts](concepts.md) to understand what an event grid topic and subscription are before proceeding. > [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge ](transition.md).
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
event-grid Pub Sub Events Webhook Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/pub-sub-events-webhook-local.md
This article walks you through all the steps needed to publish and subscribe to events using Event Grid on IoT Edge. > [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge ](transition.md).
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
event-grid React Blob Storage Events Locally https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/react-blob-storage-events-locally.md
This article shows you how to deploy the Azure Blob Storage on IoT module, which
For an overview of the Azure Blob Storage on IoT Edge, see [Azure Blob Storage on IoT Edge](../../iot-edge/how-to-store-data-blob.md) and its features. > [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge ](transition.md).
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
event-grid Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/release-notes.md
# Release Notes: Azure Event Grid on IoT Edge > [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge ](transition.md).
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
event-grid Security Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/security-authentication.md
Security and authentication are advanced concepts that require familiarity with Event Grid basics first. Start [here](concepts.md) if you're new to Event Grid on IoT Edge. The Event Grid module builds on the existing security infrastructure on IoT Edge. Refer to [this documentation](../../iot-edge/security.md) for details and setup. > [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge ](transition.md).
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
The following sections describe in detail how these settings are secured and authenticated:
event-grid Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/troubleshoot.md
If you experience issues using Azure Event Grid on IoT Edge in your environment, use this article as a guide for troubleshooting and resolution. > [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge ](transition.md).
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
event-grid Twin Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/twin-json.md
Event Grid on IoT Edge integrates with the IoT Edge ecosystem and supports creating topics and subscriptions via the Module Twin. It also reports the current state of all the topics and event subscriptions to the reported properties on the Module Twin. > [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge ](transition.md).
+> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
expressroute How To Configure Custom Bgp Communities Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/how-to-configure-custom-bgp-communities-portal.md
+
+ Title: 'Configure custom BGP communities for Azure ExpressRoute private peering using the Azure portal (Preview)'
+description: Learn how to apply or update BGP community value for a new or an existing virtual network using the Azure portal.
++++ Last updated : 1/25/2022+++
+# Configure custom BGP communities for Azure ExpressRoute private peering using the Azure portal (Preview)
+
+BGP communities are groupings of IP prefixes tagged with a community value. This value can be used to make routing decisions on the router's infrastructure. You can use BGP community tags to apply filters or specify routing preferences for traffic sent from Azure to your on-premises network. This article explains how to apply a custom BGP community value to your virtual networks using the Azure portal. Once configured, you can view the regional BGP community value and the custom community value of your virtual network. This value is used for outbound traffic sent over ExpressRoute when it originates from that virtual network.
+
+## Prerequisites
+
+* Review the [prerequisites](expressroute-prerequisites.md), [routing requirements](expressroute-routing.md), and [workflows](expressroute-workflows.md) before you begin configuration.
+
+* You must have an active ExpressRoute circuit.
+ * Follow the instructions to [create an ExpressRoute circuit](expressroute-howto-circuit-arm.md) and have the circuit enabled by your connectivity provider.
+ * Ensure that you have Azure private peering configured for your circuit. See the [configure routing](expressroute-howto-routing-arm.md) article for routing instructions.
+ * Ensure that Azure private peering is configured and establishes BGP peering between your network and Microsoft for end-to-end connectivity.
+
+## Applying or updating the custom BGP value for an existing virtual network
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Select the virtual network that you want to update the BGP community value for.
+
+ :::image type="content" source="./media/how-to-configure-custom-bgp-communities-portal/virtual-network-list.png" alt-text="Screenshot of the list of virtual networks.":::
+
+1. Select the **configure** link below the *BGP community string*.
+
+ :::image type="content" source="./media/how-to-configure-custom-bgp-communities-portal/virtual-network-overview.png" alt-text="Screenshot of the overview page of a virtual network.":::
+
+1. On the *BGP community string* page, enter the BGP community value you would like to configure for this virtual network, and then select **Save**.
+
+ :::image type="content" source="./media/how-to-configure-custom-bgp-communities-portal/bgp-community-value.png" alt-text="Screenshot of the BGP community string page.":::
+
+> [!IMPORTANT]
+> If your existing virtual network is already connected to an ExpressRoute circuit, you'll need to delete and recreate the ExpressRoute connection after applying the custom BGP community value. See [link a virtual network to an ExpressRoute circuit](expressroute-howto-linkvnet-arm.md), to learn how.
+>
+
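For readers who script their network changes, the portal steps above can be sketched with the Azure CLI. This is a sketch only, not part of the article: it assumes the `--bgp-community` parameter of `az network vnet update`, and the resource names and community value are placeholders, so verify the parameter and the allowed value range against current Azure CLI documentation before relying on it.

```azurecli
# Dry-run sketch (assumption: `--bgp-community` on `az network vnet update`).
# Resource names and the community value below are placeholders.
RG="MyResourceGroup"
VNET="MyVNet"
COMMUNITY="12076:20001"

# Printed instead of executed; drop the echo in a signed-in CLI session.
echo az network vnet update --resource-group "$RG" --name "$VNET" --bgp-community "$COMMUNITY"
```

As with the portal flow, if the virtual network is already connected to an ExpressRoute circuit, the connection must be deleted and recreated after the value changes.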
+## Next steps
+
+- [Verify ExpressRoute connectivity](expressroute-troubleshooting-expressroute-overview.md).
+- [Troubleshoot your network performance](expressroute-troubleshooting-network-performance.md)
governance Explore Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/concepts/explore-resources.md
Resources
| limit 1 ```
-```azurecli-interactive
+```azurecli
az graph query -q "Resources | where type =~ 'Microsoft.Compute/virtualMachines' | limit 1" ```
Resources
| summarize count() by location ```
-```azurecli-interactive
+```azurecli
az graph query -q "Resources | where type =~ 'Microsoft.Compute/virtualMachines' | summarize count() by location" ```
Resources
| project name, resourceGroup ```
-```azurecli-interactive
+```azurecli
az graph query -q "Resources | where type =~ 'Microsoft.Compute/virtualMachines' and properties.hardwareProfile.vmSize == 'Standard_B2s' | project name, resourceGroup" ```
Resources
| project disk.id ```
-```azurecli-interactive
+```azurecli
az graph query -q "Resources | where type =~ 'Microsoft.Compute/virtualmachines' and properties.hardwareProfile.vmSize == 'Standard_B2s' | extend disk = properties.storageProfile.osDisk.managedDisk | where disk.storageAccountType == 'Premium_LRS' | project disk.id" ```
record would be returned and the **type** property on it provides that detail.
> [!NOTE] > For this example to work, you must replace the ID field with a result from your own environment.
-```azurecli-interactive
+```azurecli
az graph query -q "Resources | where type =~ 'Microsoft.Compute/disks' and id == '/subscriptions/<subscriptionId>/resourceGroups/MyResourceGroup/providers/Microsoft.Compute/disks/ContosoVM1_OsDisk_1_9676b7e1b3c44e2cb672338ebe6f5166'" ```
virtual machines. Then the queries use the list of NICs to find each IP address
public IP address and store those values. Finally, the queries provide a list of the public IP addresses.
-```azurecli-interactive
+```azurecli
# Use Resource Graph to get all NICs and store in the 'nics.txt' file az graph query -q "Resources | where type =~ 'Microsoft.Compute/virtualMachines' | project nic = tostring(properties['networkProfile']['networkInterfaces'][0]['id']) | where isnotempty(nic) | distinct nic | limit 20" --output table | tail -n +3 > nics.txt
$nics.nic
Use the file (Azure CLI) or variable (Azure PowerShell) in the next query to get details for the related network interface resources where a public IP address is attached to the NIC.
-```azurecli-interactive
+```azurecli
# Use Resource Graph with the 'nics.txt' file to get all related public IP addresses and store in 'publicIp.txt' file az graph query -q="Resources | where type =~ 'Microsoft.Network/networkInterfaces' | where id in ('$(awk -vORS="','" '{print $0}' nics.txt | sed 's/,$//')') | project publicIp = tostring(properties['ipConfigurations'][0]['properties']['publicIPAddress']['id']) | where isnotempty(publicIp) | distinct publicIp" --output table | tail -n +3 > ips.txt
$ips.publicIp
Finally, use the list of public IP address resources stored in the file (Azure CLI) or variable (Azure PowerShell) to get the actual public IP address from the related object and display it.
-```azurecli-interactive
+```azurecli
# Use Resource Graph with the 'ips.txt' file to get the IP address of the public IP address resources az graph query -q="Resources | where type =~ 'Microsoft.Network/publicIPAddresses' | where id in ('$(awk -vORS="','" '{print $0}' ips.txt | sed 's/,$//')') | project ip = tostring(properties['ipAddress']) | where isnotempty(ip) | distinct ip" --output table ```
governance Work With Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/concepts/work-with-data.md
of the data set instead.
The following examples show how to skip the first _10_ records a query would otherwise return, starting the returned result set with the 11th record:
-```azurecli-interactive
+```azurecli
az graph query -q "Resources | project name | order by name asc" --skip 10 --output table ```
consumer if there are more records not returned in the response. This condition
is identified when the **count** property is less than the **totalRecords** property. **totalRecords** defines how many records match the query.
-**resultTruncated** is **true** when there are less resources available than a query is requesting or when paging is disabled or when paging is not possible because:
+**resultTruncated** is **true** when fewer resources are available than the query requests, when paging is disabled, or when paging isn't possible because:
- The query contains a `limit` or `sample`/`take` operator. - **All** output columns are either `dynamic` or `null` type.
When **resultTruncated** is **true**, the **$skipToken** property isn't set.
The following examples show how to **skip** the first 3,000 records and return the **first** 1,000 records after those that were skipped, with Azure CLI and Azure PowerShell:
-```azurecli-interactive
+```azurecli
az graph query -q "Resources | project id, name | order by id asc" --first 1000 --skip 3000 ```
Search-AzGraph -Query "Resources | project id, name | order by id asc" -First 10
> [!IMPORTANT] > The response won't include the **$skipToken** if: > - The query contains a `limit` or `sample`/`take` operator.
-> - **All** output columns are either `dynamic` or `null` type.
+> - **All** output columns are either `dynamic` or `null` type.
For an example, see [Next page query](/rest/api/azureresourcegraph/resourcegraph(2021-03-01)/resources/resources#next-page-query)
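The paging behavior described above can be emulated locally. In the loop below, `seq` stands in for `az graph query` so the `--first`/`--skip` logic is visible end to end; the total record count and page size are made-up values for illustration.

```shell
total=7    # pretend the query matches 7 records in all
first=3    # records per page (--first)
skip=0     # records to skip (--skip)
pages=""

while [ "$skip" -lt "$total" ]; do
  end=$(( skip + first ))
  [ "$end" -gt "$total" ] && end=$total
  # Stand-in for: az graph query -q "Resources | project id | order by id asc" --first $first --skip $skip
  page=$(seq $(( skip + 1 )) "$end")
  pages="$pages $page"
  skip=$(( skip + first ))
done

echo $pages
```

The three pages cover every record exactly once, which is the property a real skip-token or `--skip` loop should preserve.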
governance First Query Azurecli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/first-query-azurecli.md
and run your first Resource Graph query.
If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
+<!-- [!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)] -->
## Add the Resource Graph extension
Docker image](https://hub.docker.com/_/microsoft-azure-cli), or locally installe
1. In your Azure CLI environment of choice, import it with the following command:
- ```azurecli-interactive
+ ```azurecli
# Add the Resource Graph extension to the Azure CLI environment az extension add --name resource-graph ``` 1. Validate that the extension has been installed and is the expected version (at least **1.0.0**):
- ```azurecli-interactive
+ ```azurecli
# Check the extension list (note that you may have other extensions installed) az extension list
or `--subscriptions` arguments.
1. Run your first Azure Resource Graph query using the `graph` extension and `query` command:
- ```azurecli-interactive
+ ```azurecli
# Login first with az login if not using Cloud Shell # Run Azure Resource Graph query
or `--subscriptions` arguments.
1. Update the query to `order by` the **Name** property:
- ```azurecli-interactive
+ ```azurecli
# Run Azure Resource Graph query with 'order by' az graph query -q 'Resources | project name, type | limit 5 | order by name asc' ```
or `--subscriptions` arguments.
1. Update the query to first `order by` the **Name** property and then `limit` to the top five results:
- ```azurecli-interactive
+ ```azurecli
# Run Azure Resource Graph query with `order by` first, then with `limit` az graph query -q 'Resources | project name, type | order by name asc | limit 5' ```
top five results.
If you wish to remove the Resource Graph extension from your Azure CLI environment, you can do so by using the following command:
-```azurecli-interactive
+```azurecli
# Remove the Resource Graph extension from the Azure CLI environment az extension remove -n resource-graph ```
governance Get Resource Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/how-to/get-resource-changes.md
or `-Subscription` parameters.
1. Run your first Azure Resource Graph query: # [Azure CLI](#tab/azure-cli)
- ```azurecli-interactive
+ ```azurecli
# Login first with az login if not using Cloud Shell # Run Azure Resource Graph query
or `-Subscription` parameters.
2. Update the query to specify a more user-friendly column name for the **timestamp** property: # [Azure CLI](#tab/azure-cli)
- ```azurecli-interactive
+ ```azurecli
# Run Azure Resource Graph query with 'extend' to define a user-friendly name for properties.changeAttributes.timestamp az graph query -q 'resourcechanges | extend changeTime=todatetime(properties.changeAttributes.timestamp) | project changeTime, properties.changeType, properties.targetResourceId, properties.targetResourceType, properties.changes | limit 5' ```
or `-Subscription` parameters.
3. To get the most recent changes, update the query to `order by` the user-defined **changeTime** property: # [Azure CLI](#tab/azure-cli)
- ```azurecli-interactive
+ ```azurecli
# Run Azure Resource Graph query with 'order by' az graph query -q 'resourcechanges | extend changeTime=todatetime(properties.changeAttributes.timestamp) | project changeTime, properties.changeType, properties.targetResourceId, properties.targetResourceType, properties.changes | order by changeTime desc | limit 5' ```
governance Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/samples/advanced.md
Resources
# [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
+```azurecli
az graph query -q "Resources | distinct type, apiVersion | where isnotnull(apiVersion) | order by type asc" ```
Resources
# [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
+```azurecli
az graph query -q "Resources | where type=~ 'microsoft.compute/virtualmachinescalesets' | where name contains 'contoso' | project subscriptionId, name, location, resourceGroup, Capacity = toint(sku.capacity), Tier = sku.name | order by Capacity desc" ```
Resources
# [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
+```azurecli
az graph query -q "Resources | summarize resourceCount=count() by subscriptionId | join (ResourceContainers | where type=='microsoft.resources/subscriptions' | project SubName=name, subscriptionId) on subscriptionId| project-away subscriptionId, subscriptionId1" ```
Resources
# [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
+```azurecli
az graph query -q "Resources | project tags | summarize buildschema(tags)" ```
Resources
# [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
+```azurecli
az graph query -q "Resources | where type =~ 'microsoft.compute/virtualmachines' and name matches regex @'^Contoso(.*)[0-9]+\$' | project name | order by name asc" ```
Resources
# [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
+```azurecli
az graph query -q "Resources | where type =~ 'microsoft.documentdb/databaseaccounts' | project id, name, writeLocations = (properties.writeLocations) | mv-expand writeLocations | project id, name, writeLocation = tostring(writeLocations.locationName) | where writeLocation in ('East US', 'West US') | summarize by id, name" ```
Resources
# [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
+```azurecli
az graph query -q "Resources | join kind=leftouter (ResourceContainers | where type=='microsoft.resources/subscriptions' | project SubName=name, subscriptionId) on subscriptionId | where type == 'microsoft.keyvault/vaults' | project type, name, SubName" ```
on elasticPoolId
# [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
+```azurecli
az graph query -q "Resources | where type =~ 'microsoft.sql/servers/databases' | project databaseId = id, databaseName = name, elasticPoolId = tolower(tostring(properties.elasticPoolId)) | join kind=leftouter ( Resources | where type =~ 'microsoft.sql/servers/elasticpools' | project elasticPoolId = tolower(id), elasticPoolName = name, elasticPoolState = properties.state) on elasticPoolId | project-away elasticPoolId1" ```
on publicIpId
# [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
+```azurecli
az graph query -q "Resources | where type =~ 'microsoft.compute/virtualmachines' | extend nics=array_length(properties.networkProfile.networkInterfaces) | mv-expand nic=properties.networkProfile.networkInterfaces | where nics == 1 or nic.properties.primary =~ 'true' or isempty(nic) | project vmId = id, vmName = name, vmSize=tostring(properties.hardwareProfile.vmSize), nicId = tostring(nic.id) | join kind=leftouter ( Resources | where type =~ 'microsoft.network/networkinterfaces' | extend ipConfigsCount=array_length(properties.ipConfigurations) | mv-expand ipconfig=properties.ipConfigurations | where ipConfigsCount == 1 or ipconfig.properties.primary =~ 'true' | project nicId = id, publicIpId = tostring(ipconfig.properties.publicIPAddress.id)) on nicId | project-away nicId1 | summarize by vmId, vmName, vmSize, nicId, publicIpId | join kind=leftouter ( Resources | where type =~ 'microsoft.network/publicipaddresses' | project publicIpId = id, publicIpAddress = properties.ipAddress) on publicIpId | project-away publicIpId1" ```
Resources
# [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
+```azurecli
az graph query -q "Resources | where type == 'microsoft.compute/virtualmachines' | extend JoinID = toupper(id), OSName = tostring(properties.osProfile.computerName), OSType = tostring(properties.storageProfile.osDisk.osType), VMSize = tostring(properties.hardwareProfile.vmSize) | join kind=leftouter( Resources | where type == 'microsoft.compute/virtualmachines/extensions' | extend VMId = toupper(substring(id, 0, indexof(id, '/extensions'))), ExtensionName = name ) on \$left.JoinID == \$right.VMId | summarize Extensions = make_list(ExtensionName) by id, OSName, OSType, VMSize | order by tolower(OSName) asc" ```
on subscriptionId, resourceGroup
# [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
+```azurecli
az graph query -q "Resources | where type =~ 'microsoft.storage/storageaccounts' | join kind=inner ( ResourceContainers | where type =~ 'microsoft.resources/subscriptions/resourcegroups' | where tags['Key1'] =~ 'Value1' | project subscriptionId, resourceGroup) on subscriptionId, resourceGroup | project-away subscriptionId1, resourceGroup1" ```
on subscriptionId, resourceGroup
# [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
+```azurecli
az graph query -q "Resources | where type =~ 'microsoft.storage/storageaccounts' | join kind=inner ( ResourceContainers | where type =~ 'microsoft.resources/subscriptions/resourcegroups' | mv-expand bagexpansion=array tags | where isnotempty(tags) | where tags[0] =~ 'key1' and tags[1] =~ 'value1' | project subscriptionId, resourceGroup) on subscriptionId, resourceGroup | project-away subscriptionId1, resourceGroup1" ```
ResourceContainers
# [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
+```azurecli
az graph query -q "ResourceContainers | where type=='microsoft.resources/subscriptions/resourcegroups' | project name, type | limit 5 | union (Resources | project name, type | limit 5)" ```
Resources
| project id, ipConfigurations = properties.ipConfigurations | mvexpand ipConfigurations | project id, subnetId = tostring(ipConfigurations.properties.subnet.id)
-| parse kind=regex subnetId with '/virtualNetworks/' virtualNetwork '/subnets/' subnet
+| parse kind=regex subnetId with '/virtualNetworks/' virtualNetwork '/subnets/' subnet
| project id, virtualNetwork, subnet ``` # [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
+```azurecli
az graph query -q "Resources | where type =~ 'microsoft.network/networkinterfaces' | project id, ipConfigurations = properties.ipConfigurations | mvexpand ipConfigurations | project id, subnetId = tostring(ipConfigurations.properties.subnet.id) | parse kind=regex subnetId with '/virtualNetworks/' virtualNetwork '/subnets/' subnet | project id, virtualNetwork, subnet" ```
Resources
# [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
+```azurecli
az graph query -q "Resources | where type == 'microsoft.compute/virtualmachines' | summarize count() by tostring(properties.extended.instanceView.powerState.code)" ```
governance Starter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/samples/starter.md
Resources
# [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
+```azurecli
az graph query -q "Resources | summarize count()" ```
Resources
# [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
+```azurecli
az graph query -q "Resources | where type =~ 'microsoft.keyvault/vaults' | count" ```
Resources
# [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
+```azurecli
az graph query -q "Resources | project name, type, location | order by name asc" ```
Resources
# [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
+```azurecli
az graph query -q "Resources | project name, location, type| where type =~ 'Microsoft.Compute/virtualMachines' | order by name desc" ```
Resources
# [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
+```azurecli
az graph query -q "Resources | where type =~ 'Microsoft.Compute/virtualMachines' | project name, properties.storageProfile.osDisk.osType | top 5 by name desc" ```
Resources
# [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
+```azurecli
az graph query -q "Resources | where type =~ 'Microsoft.Compute/virtualMachines' | summarize count() by tostring(properties.storageProfile.osDisk.osType)" ```
Resources
# [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
+```azurecli
az graph query -q "Resources | where type =~ 'Microsoft.Compute/virtualMachines' | extend os = properties.storageProfile.osDisk.osType | summarize count() by tostring(os)" ```
Resources
# [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
+```azurecli
az graph query -q "Resources | where type contains 'storage' | distinct type" ```
Resources
# [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
+```azurecli
az graph query -q "Resources | where type contains 'publicIPAddresses' and isnotempty(properties.ipAddress) | project properties.ipAddress | limit 100" ```
Resources
# [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
+```azurecli
az graph query -q "Resources | where type contains 'publicIPAddresses' and isnotempty(properties.ipAddress) | summarize count () by subscriptionId" ```
Resources
# [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
+```azurecli
az graph query -q "Resources | where tags.environment=~'internal' | project name" ```
Resources
# [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
+```azurecli
az graph query -q "Resources | where tags.environment=~'internal' | project name, tags" ```
Resources
# [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
+```azurecli
az graph query -q "Resources | where type =~ 'Microsoft.Storage/storageAccounts' | where tags['tag with a space']=='Custom value'" ```
ResourceContainers
# [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
+```azurecli
az graph query -q "ResourceContainers | where isnotempty(tags) | project tags | mvexpand tags | extend tagKey = tostring(bag_keys(tags)[0]) | extend tagValue = tostring(tags[tagKey]) | union (resources | where notempty(tags) | project tags | mvexpand tags | extend tagKey = tostring(bag_keys(tags)[0]) | extend tagValue = tostring(tags[tagKey]) ) | distinct tagKey, tagValue | where tagKey !startswith 'hidden-'" ```
Resources
# [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
+```azurecli
az graph query -q "Resources | where type =~ 'microsoft.network/networksecuritygroups' and isnull(properties.networkInterfaces) and isnull(properties.subnets) | project name, resourceGroup | sort by name asc" ```
governance Shared Query Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/shared-query-azure-cli.md
and create a Resource Graph shared query.
If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
+<!-- [!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)] -->
## Add the Resource Graph extension
Docker image](https://hub.docker.com/_/microsoft-azure-cli), or locally installe
[az extension add](/cli/azure/extension#az_extension_add) to import the Resource Graph extension with the following command:
- ```azurecli-interactive
+ ```azurecli
# Add the Resource Graph extension to the Azure CLI environment az extension add --name resource-graph ```
Docker image](https://hub.docker.com/_/microsoft-azure-cli), or locally installe
1. Validate that the extension has been installed and is the expected version (at least **1.1.0**) with [az extension list](/cli/azure/extension#az_extension_list):
- ```azurecli-interactive
+ ```azurecli
# Check the extension list (note that you may have other extensions installed) az extension list
_location_.
Azure Resource Graph shared query. This resource group is named `resource-graph-queries` and the location is `westus2`.
- ```azurecli-interactive
+ ```azurecli
# Login first with az login if not using Cloud Shell # Create the resource group
_location_.
[az graph shared-query create](/cli/azure/graph/shared-query#az_graph_shared_query_create) command:
- ```azurecli-interactive
+ ```azurecli
# Create the Azure Resource Graph shared query az graph shared-query create --name 'Summarize resources by location' \ --description 'This shared query summarizes resources by location for a pinnable map graphic.' \
_location_.
[az graph shared-query list](/cli/azure/graph/shared-query#az_graph_shared_query_list) command returns an array of values.
- ```azurecli-interactive
+ ```azurecli
# List all the Azure Resource Graph shared queries in a resource group az graph shared-query list --resource-group 'resource-graph-queries' ```
_location_.
[az graph shared-query show](/cli/azure/graph/shared-query#az_graph_shared_query_show) command.
- ```azurecli-interactive
+ ```azurecli
# Show a specific Azure Resource Graph shared query az graph shared-query show --resource-group 'resource-graph-queries' \ --name 'Summarize resources by location'
_location_.
`shared-query-uri` text in the example with the value from the `id` field, but leave the surrounding `{{` and `}}` characters.
- ```azurecli-interactive
+ ```azurecli
# Run an Azure Resource Graph shared query az graph query --graph-query "{{shared-query-uri}}" ```
CLI environment, you can do so by using the following commands:
- [az group delete](/cli/azure/group#az_group_delete) - [az extension remove](/cli/azure/extension#az_extension_remove)
-```azurecli-interactive
+```azurecli
# Delete the Azure Resource Graph shared query az graph shared-query delete --resource-group 'resource-graph-queries' \ --name 'Summarize resources by location'
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/release-notes.md
Previously updated : 03/15/2022 Last updated : 03/17/2022
Azure API for FHIR provides a fully managed deployment of the Microsoft FHIR Ser
|`_sort` can cause `ChainedSearch` to return incorrect results |Previously, the sort options from the chained search's `SearchOption` object weren't cleared, causing the sorting options to be passed through to the chained sub-search, which aren't valid. This could result in no results when there should be results. This bug is now fixed [#2347](https://github.com/microsoft/fhir-server/pull/2347). It addressed GitHub bug [#2344](https://github.com/microsoft/fhir-server/issues/2344). | - ## November 2021 ### **Features and enhancements**
Azure API for FHIR provides a fully managed deployment of the Microsoft FHIR Ser
For information about the features and bug fixes in Azure Healthcare APIs (FHIR service, DICOM service, and IoT connector), see >[!div class="nextstepaction"]
->[Release notes: Azure Healthcare APIs](../release-notes.md)
+>[Release notes: Azure Health Data Services](../release-notes.md)
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes.md
Previously updated : 03/15/2022 Last updated : 03/17/2022
Azure Health Data Services is a set of managed API services based on open standa
## January 2022
-### Azure Health Data Services
- ### **Features and enhancements** |Enhancements &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |Related information |
iot-dps Concepts Device Reprovision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-device-reprovision.md
When designing your solution and defining a reprovisioning logic there are a few
* Retry capability implemented on your client code, as described on the [Retry general guidance](/architecture/best-practices/transient-faults) at the Azure Architecture Center >[!TIP]
-> We recommend not provisioning on every reboot of the device, as this could cause some issues when reprovisioning several thousands or millions of devices at once. Instead you should attempt to [get the device registration state](/rest/api/iot-dps/service/device-registration-state/get) and try to connect with that information to IoT Hub. If that fails, then try to reprovision as the IoT Hub information might have changed. Keep in mind that querying for the registration state will count as a new device registration, so you should consider the [Device registration limit]( about-iot-dps.md#quotas-and-limits). Also consider implementing an appropriate retry logic, such as exponential back-off with randomization, as described on the [Retry general guidance](/architecture/best-practices/transient-faults).
+> We recommend not provisioning on every reboot of the device, as this could cause issues when reprovisioning thousands or millions of devices at once. Instead, you should attempt to use the [Device Registration Status Lookup](/rest/api/iot-dps/service/runtime-registration/device-registration-status-lookup) API and try to connect with that information to IoT Hub. If that fails, try to reprovision, as the IoT Hub information might have changed. Keep in mind that querying for the registration state counts as a new device registration, so you should consider the [Device registration limit](about-iot-dps.md#quotas-and-limits). Also consider implementing an appropriate retry logic, such as exponential back-off with randomization, as described in the [Retry general guidance](/architecture/best-practices/transient-faults).
>In some cases, depending on the device capabilities, it's possible to save the IoT Hub information directly on the device so it can connect directly to IoT Hub after the first-time provisioning through DPS has occurred. If you choose to do this, make sure you implement a fallback mechanism in case specific [errors from IoT Hub](../iot-hub/troubleshoot-message-routing.md#common-error-codes) occur. For example, consider the following scenarios: > * Retry the Hub operation if the result code is 429 (Too Many Requests) or an error in the 5xx range. Do not retry for any other errors. > * For 429 errors, only retry after the time indicated in the Retry-After header.
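The retry policy in the scenarios above — retry only on 429 or 5xx, honor the Retry-After header, and otherwise use exponential back-off with randomization — can be sketched as follows. This is a minimal illustration in Python; the function names are hypothetical and not part of any Azure SDK.

```python
import random

def should_retry(status_code):
    # Per the guidance above: retry only on 429 (Too Many Requests)
    # or errors in the 5xx range; do not retry for any other errors.
    return status_code == 429 or 500 <= status_code < 600

def backoff_delay(attempt, retry_after=None, base=1.0, cap=60.0):
    # For 429 responses, honor the time indicated in the Retry-After header.
    if retry_after is not None:
        return float(retry_after)
    # Otherwise, exponential back-off with randomization (full jitter),
    # capped so delays never grow without bound.
    return random.uniform(0, min(cap, base * (2 ** attempt)))
```

A caller would check `should_retry` on each failed response and sleep for `backoff_delay(attempt, retry_after)` before the next try.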
When designing your solution and defining a reprovisioning logic there are a few
> > We also recommend taking into account the service limits when planning activities like pushing updates to your fleet. For example, updating the fleet all at once could cause all devices to re-register through DPS (which could easily be above the registration quota limit) - For such scenarios, consider planning for device updates in phases instead of updating your entire fleet at the same time.
->[!Note]
-> The [get device registration state API](/rest/api/iot-dps/service/device-registration-state/get) does not currently work for TPM devices (the API surface does not include enough information to authenticate the request).
- ### Managing backwards compatibility
iot-dps How To Reprovision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-reprovision.md
How often a device submits a provisioning request depends on the scenario. When
* Retry capability implemented on your client code, as described on the [Retry general guidance](/architecture/best-practices/transient-faults) at the Azure Architecture Center >[!TIP]
-> We recommend not provisioning on every reboot of the device, as this could cause some issues when reprovisioning several thousands or millions of devices at once. Instead you should attempt to [get the device registration state](/rest/api/iot-dps/service/device-registration-state/get) and try to connect with that information to IoT Hub. If that fails, then try to reprovision as the IoT Hub information might have changed. Keep in mind that querying for the registration state will count as a new device registration, so you should consider the [Device registration limit]( about-iot-dps.md#quotas-and-limits). Also consider implementing an appropriate retry logic, such as exponential back-off with randomization, as described on the [Retry general guidance](/architecture/best-practices/transient-faults).
+> We recommend not provisioning on every reboot of the device, as this could cause issues when reprovisioning thousands or millions of devices at once. Instead, you should attempt to use the [Device Registration Status Lookup](/rest/api/iot-dps/service/runtime-registration/device-registration-status-lookup) API and try to connect with that information to IoT Hub. If that fails, try to reprovision, as the IoT Hub information might have changed. Keep in mind that querying for the registration state counts as a new device registration, so you should consider the [Device registration limit](about-iot-dps.md#quotas-and-limits). Also consider implementing an appropriate retry logic, such as exponential back-off with randomization, as described in the [Retry general guidance](/architecture/best-practices/transient-faults).
>In some cases, depending on the device capabilities, it's possible to save the IoT Hub information directly on the device so it can connect directly to IoT Hub after the first-time provisioning through DPS has occurred. If you choose to do this, make sure you implement a fallback mechanism in case specific [errors from IoT Hub](../iot-hub/troubleshoot-message-routing.md#common-error-codes) occur. For example, consider the following scenarios: > * Retry the Hub operation if the result code is 429 (Too Many Requests) or an error in the 5xx range. Do not retry for any other errors. > * For 429 errors, only retry after the time indicated in the Retry-After header.
How often a device submits a provisioning request depends on the scenario. When
> > We also recommend taking into account the service limits when planning activities like pushing updates to your fleet. For example, updating the fleet all at once could cause all devices to re-register through DPS (which could easily be above the registration quota limit) - For such scenarios, consider planning for device updates in phases instead of updating your entire fleet at the same time.
->[!Note]
-> The [get device registration state API](/rest/api/iot-dps/service/device-registration-state/get) does not currently work for TPM devices (the API surface does not include enough information to authenticate the request).
## Next steps
key-vault Secrets Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/secrets-best-practices.md
For more information about best practices for Key Vault, see [Best practices to
## Configuration and storing
-Store the dynamic parts of credentials, which are generated during rotation, as values. Examples include client application secrets, passwords, and access keys. Any related static attributes and identifiers, like usernames, application IDs, and service URLs, should be stored as secret tags and copied to the new version of a secret during rotation.
+Store the credential information required to access a database or service in the secret value. Compound credentials, such as a username/password pair, can be stored as a connection string or a JSON object. Other information required for management, such as rotation configuration, should be stored in tags.
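For example, a compound credential could be shaped as follows before being written to Key Vault. This is a sketch in Python; the helper name is hypothetical, and the actual upload would be done with the Key Vault secrets SDK or the Azure CLI.

```python
import json

def build_secret_payload(username, password, host, rotation_days):
    # Compound credential stored as a JSON object in the secret value.
    value = json.dumps({"username": username, "password": password, "host": host})
    # Management metadata, such as rotation configuration, kept in tags.
    tags = {"rotation-days": str(rotation_days)}
    return value, tags
```

The JSON value and the tags dictionary map directly onto a secret's value and tags in Key Vault.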
For more information about secrets, see [About Azure Key Vault secrets](about-secrets.md).
load-balancer Quickstart Load Balancer Standard Public Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-cli.md
#Customer intent: I want to create a load balancer so that I can load balance internet traffic to VMs.
-# Quickstart: Create a public load balancer to load balance VMs using Azure CLI
+# Quickstart: Create a public load balancer to load balance VMs using the Azure CLI
-Get started with Azure Load Balancer by using Azure CLI to create a public load balancer and two virtual machines.
+Get started with Azure Load Balancer by using the Azure CLI to create a public load balancer and two virtual machines.
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
load-balancer Quickstart Load Balancer Standard Public Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-powershell.md
Title: 'Quickstart: Create a public load balancer - Azure PowerShell' description: This quickstart shows how to create a load balancer using Azure PowerShell- - Previously updated : 11/22/2020 Last updated : 03/17/2022 - #Customer intent: I want to create a load balancer so that I can load balance internet traffic to VMs. # Quickstart: Create a public load balancer to load balance VMs using Azure PowerShell
-Get started with Azure Load Balancer by using Azure PowerShell to create a public load balancer and three virtual machines.
+Get started with Azure Load Balancer by using Azure PowerShell to create a public load balancer and two virtual machines.
## Prerequisites -- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
+ - Azure PowerShell installed locally or Azure Cloud Shell If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
An Azure resource group is a logical container into which Azure resources are de
Create a resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup): ```azurepowershell-interactive
-New-AzResourceGroup -Name 'CreatePubLBQS-rg' -Location 'eastus'
-
+$rg = @{
+ Name = 'CreatePubLBQS-rg'
+ Location = 'eastus'
+}
+New-AzResourceGroup @rg
```-
-# [**Standard SKU**](#tab/option-1-create-load-balancer-standard)
-
->[!NOTE]
->Standard SKU load balancer is recommended for production workloads. For more information about skus, see **[Azure Load Balancer SKUs](skus.md)**.
--
-## Create a public IP address - Standard
+## Create a public IP address
Use [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) to create a public IP address.
$publicip = @{
Zone = 1,2,3 } New-AzPublicIpAddress @publicip- ``` To create a zonal public IP address in zone 1, use the following command:
$publicip = @{
Zone = 1 } New-AzPublicIpAddress @publicip- ```
-## Create standard load balancer
+## Create a load balancer
This section details how you can create and configure the following components of the load balancer: * Create a front-end IP with [New-AzLoadBalancerFrontendIpConfig](/powershell/module/az.network/new-azloadbalancerfrontendipconfig) for the frontend IP pool. This IP receives the incoming traffic on the load balancer
-* Create a back-end address pool with [New-AzLoadBalancerBackendAddressPoolConfig](/powershell/module/az.network/new-azloadbalancerbackendaddresspoolconfig) for traffic sent from the frontend of the load balancer. This pool is where your backend virtual machines are deployed.
+* Create a back-end address pool with [New-AzLoadBalancerBackendAddressPoolConfig](/powershell/module/az.network/new-azloadbalancerbackendaddresspoolconfig) for traffic sent from the frontend of the load balancer. This pool is where your backend virtual machines are deployed
-* Create a health probe with [Add-AzLoadBalancerProbeConfig](/powershell/module/az.network/add-azloadbalancerprobeconfig) that determines the health of the backend VM instances.
+* Create a health probe with [Add-AzLoadBalancerProbeConfig](/powershell/module/az.network/add-azloadbalancerprobeconfig) that determines the health of the backend VM instances
-* Create a load balancer rule with [Add-AzLoadBalancerRuleConfig](/powershell/module/az.network/add-azloadbalancerruleconfig) that defines how traffic is distributed to the VMs.
-
-* Create a public load balancer with [New-AzLoadBalancer](/powershell/module/az.network/new-azloadbalancer).
+* Create a load balancer rule with [Add-AzLoadBalancerRuleConfig](/powershell/module/az.network/add-azloadbalancerruleconfig) that defines how traffic is distributed to the VMs
+* Create a public load balancer with [New-AzLoadBalancer](/powershell/module/az.network/new-azloadbalancer)
```azurepowershell-interactive ## Place public IP created in previous steps into variable. ##
-$publicIp = Get-AzPublicIpAddress -Name 'myPublicIP' -ResourceGroupName 'CreatePubLBQS-rg'
+$pip = @{
+ Name = 'myPublicIP'
+ ResourceGroupName = 'CreatePubLBQS-rg'
+}
+$publicIp = Get-AzPublicIpAddress @pip
## Create load balancer frontend configuration and place in variable. ##
-$feip = New-AzLoadBalancerFrontendIpConfig -Name 'myFrontEnd' -PublicIpAddress $publicIp
+$fip = @{
+ Name = 'myFrontEnd'
+ PublicIpAddress = $publicIp
+}
+$feip = New-AzLoadBalancerFrontendIpConfig @fip
## Create backend address pool configuration and place in variable. ## $bepool = New-AzLoadBalancerBackendAddressPoolConfig -Name 'myBackEndPool'
$loadbalancer = @{
Probe = $healthprobe } New-AzLoadBalancer @loadbalancer- ```
-## Configure virtual network - Standard
+## Configure virtual network
Before you deploy VMs and test your load balancer, create the supporting virtual network resources.
Create a virtual network for the backend virtual machines.
Create a network security group to define inbound connections to your virtual network.
-### Create virtual network, network security group, and bastion host
+Create an Azure Bastion host to securely manage the virtual machines in the backend pool.
-* Create a virtual network with [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork).
+Use a NAT gateway to provide outbound internet access to resources in the backend pool of your load balancer.
-* Create a network security group rule with [New-AzNetworkSecurityRuleConfig](/powershell/module/az.network/new-aznetworksecurityruleconfig).
+### Create virtual network, network security group, bastion host, and NAT gateway
-* Create an Azure Bastion host with [New-AzBastion](/powershell/module/az.network/new-azbastion).
+* Create a virtual network with [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork)
-* Create a network security group with [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup).
+* Create a network security group rule with [New-AzNetworkSecurityRuleConfig](/powershell/module/az.network/new-aznetworksecurityruleconfig)
-```azurepowershell-interactive
-## Create backend subnet config ##
-$subnet = @{
- Name = 'myBackendSubnet'
- AddressPrefix = '10.1.0.0/24'
-}
-$subnetConfig = New-AzVirtualNetworkSubnetConfig @subnet
-
-## Create Azure Bastion subnet. ##
-$bastsubnet = @{
- Name = 'AzureBastionSubnet'
- AddressPrefix = '10.1.1.0/24'
-}
-$bastsubnetConfig = New-AzVirtualNetworkSubnetConfig @bastsubnet
-
-## Create the virtual network ##
-$net = @{
- Name = 'myVNet'
- ResourceGroupName = 'CreatePubLBQS-rg'
- Location = 'eastus'
- AddressPrefix = '10.1.0.0/16'
- Subnet = $subnetConfig,$bastsubnetConfig
-}
-$vnet = New-AzVirtualNetwork @net
+* Create an Azure Bastion host with [New-AzBastion](/powershell/module/az.network/new-azbastion)
-## Create public IP address for bastion host. ##
-$ip = @{
- Name = 'myBastionIP'
- ResourceGroupName = 'CreatePubLBQS-rg'
- Location = 'eastus'
- Sku = 'Standard'
- AllocationMethod = 'Static'
-}
-$publicip = New-AzPublicIpAddress @ip
+* Create a network security group with [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup)
-## Create bastion host ##
-$bastion = @{
- ResourceGroupName = 'CreatePubLBQS-rg'
- Name = 'myBastion'
- PublicIpAddress = $publicip
- VirtualNetwork = $vnet
-}
-New-AzBastion @bastion -AsJob
+* Create the NAT gateway resource with [New-AzNatGateway](/powershell/module/az.network/new-aznatgateway)
-## Create rule for network security group and place in variable. ##
-$nsgrule = @{
- Name = 'myNSGRuleHTTP'
- Description = 'Allow HTTP'
- Protocol = '*'
- SourcePortRange = '*'
- DestinationPortRange = '80'
- SourceAddressPrefix = 'Internet'
- DestinationAddressPrefix = '*'
- Access = 'Allow'
- Priority = '2000'
- Direction = 'Inbound'
-}
-$rule1 = New-AzNetworkSecurityRuleConfig @nsgrule
-
-## Create network security group ##
-$nsg = @{
- Name = 'myNSG'
- ResourceGroupName = 'CreatePubLBQS-rg'
- Location = 'eastus'
- SecurityRules = $rule1
-}
-New-AzNetworkSecurityGroup @nsg
-
-```
-
-## Create virtual machines - Standard
-
-In this section, you'll create the three virtual machines for the backend pool of the load balancer.
-
-* Create three network interfaces with [New-AzNetworkInterface](/powershell/module/az.network/new-aznetworkinterface).
-
-* Set an administrator username and password for the VMs with [Get-Credential](/powershell/module/microsoft.powershell.security/get-credential).
-
-* Create the virtual machines with:
- * [New-AzVM](/powershell/module/az.compute/new-azvm)
- * [New-AzVMConfig](/powershell/module/az.compute/new-azvmconfig)
- * [Set-AzVMOperatingSystem](/powershell/module/az.compute/set-azvmoperatingsystem)
- * [Set-AzVMSourceImage](/powershell/module/az.compute/set-azvmsourceimage)
- * [Add-AzVMNetworkInterface](/powershell/module/az.compute/add-azvmnetworkinterface)
+* Use [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig) to associate the NAT gateway to the subnet of the virtual network
```azurepowershell-interactive
-# Set the administrator and password for the VMs. ##
-$cred = Get-Credential
-
-## Place the virtual network into a variable. ##
-$vnet = Get-AzVirtualNetwork -Name 'myVNet' -ResourceGroupName 'CreatePubLBQS-rg'
-
-## Place the load balancer into a variable. ##
-$lb = @{
- Name = 'myLoadBalancer'
- ResourceGroupName = 'CreatePubLBQS-rg'
-}
-$bepool = Get-AzLoadBalancer @lb | Get-AzLoadBalancerBackendAddressPoolConfig
-
-## Place the network security group into a variable. ##
-$nsg = Get-AzNetworkSecurityGroup -Name 'myNSG' -ResourceGroupName 'CreatePubLBQS-rg'
-
-## For loop with variable to create virtual machines for load balancer backend pool. ##
-for ($i=1; $i -le 3; $i++)
-{
-## Command to create network interface for VMs ##
-$nic = @{
- Name = "myNicVM$i"
- ResourceGroupName = 'CreatePubLBQS-rg'
- Location = 'eastus'
- Subnet = $vnet.Subnets[0]
- NetworkSecurityGroup = $nsg
- LoadBalancerBackendAddressPool = $bepool
-}
-$nicVM = New-AzNetworkInterface @nic
-
-## Create a virtual machine configuration for VMs ##
-$vmsz = @{
- VMName = "myVM$i"
- VMSize = 'Standard_DS1_v2'
-}
-$vmos = @{
- ComputerName = "myVM$i"
- Credential = $cred
-}
-$vmimage = @{
- PublisherName = 'MicrosoftWindowsServer'
- Offer = 'WindowsServer'
- Skus = '2019-Datacenter'
- Version = 'latest'
-}
-$vmConfig = New-AzVMConfig @vmsz `
- | Set-AzVMOperatingSystem @vmos -Windows `
- | Set-AzVMSourceImage @vmimage `
- | Add-AzVMNetworkInterface -Id $nicVM.Id
-
-## Create the virtual machine for VMs ##
-$vm = @{
- ResourceGroupName = 'CreatePubLBQS-rg'
- Location = 'eastus'
- VM = $vmConfig
- Zone = "$i"
-}
-New-AzVM @vm -AsJob
-}
-
-```
-
-The deployments of the virtual machines and bastion host are submitted as PowerShell jobs. To view the status of the jobs, use [Get-Job](/powershell/module/microsoft.powershell.core/get-job):
-
-```azurepowershell-interactive
-Get-Job
-
-Id Name PSJobTypeName State HasMoreData Location Command
--- ----            -------------   -----         ----------- --------     -------
-1 Long Running O… AzureLongRunni… Completed True localhost New-AzBastion
-2 Long Running O… AzureLongRunni… Completed True localhost New-AzVM
-3 Long Running O… AzureLongRunni… Completed True localhost New-AzVM
-4 Long Running O… AzureLongRunni… Completed True localhost New-AzVM
-```
--
-## Create outbound rule configuration
-Load balancer outbound rules configure outbound source network address translation (SNAT) for VMs in the backend pool.
-
-For more information on outbound connections, see [Outbound connections in Azure](load-balancer-outbound-connections.md).
-
-### Create outbound public IP address
-
-Use [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) to create a standard zone redundant public IP address named **myPublicIPOutbound**.
-
-```azurepowershell-interactive
-$publicipout = @{
- Name = 'myPublicIPOutbound'
+## Create public IP address for NAT gateway ##
+$ip = @{
+ Name = 'myNATgatewayIP'
ResourceGroupName = 'CreatePubLBQS-rg' Location = 'eastus' Sku = 'Standard'
- AllocationMethod = 'static'
- Zone = 1,2,3
+ AllocationMethod = 'Static'
}
-New-AzPublicIpAddress @publicipout
+$publicIP = New-AzPublicIpAddress @ip
-```
-
-To create a zonal public IP address in zone 1, use the following command:
-
-```azurepowershell-interactive
-$publicipout = @{
- Name = 'myPublicIPOutbound'
+## Create NAT gateway resource ##
+$nat = @{
ResourceGroupName = 'CreatePubLBQS-rg'
- Location = 'eastus'
+ Name = 'myNATgateway'
+ IdleTimeoutInMinutes = '10'
Sku = 'Standard'
- AllocationMethod = 'static'
- Zone = 1
-}
-New-AzPublicIpAddress @publicipout
-
-```
-
-### Create outbound configuration
-
-* Create a new frontend IP configuration with [Add-AzLoadBalancerFrontendIpConfig](/powershell/module/az.network/add-azloadbalancerfrontendipconfig).
-
-* Create a new outbound backend address pool with [Add-AzLoadBalancerBackendAddressPoolConfig](/powershell/module/az.network/add-azloadbalancerbackendaddresspoolconfig).
-
-* Apply the pool and frontend IP address to the load balancer with [Set-AzLoadBalancer](/powershell/module/az.network/set-azloadbalancer).
-* Create a new outbound rule for the outbound backend pool with [Add-AzLoadBalancerOutboundRuleConfig](/powershell/module/az.network/new-azloadbalanceroutboundruleconfig).
-
-```azurepowershell-interactive
-## Place public IP created in previous steps into variable. ##
-$pubip = @{
- Name = 'myPublicIPOutbound'
- ResourceGroupName = 'CreatePubLBQS-rg'
-}
-$publicIp = Get-AzPublicIpAddress @pubip
-
-## Get the load balancer configuration ##
-$lbc = @{
- ResourceGroupName = 'CreatePubLBQS-rg'
- Name = 'myLoadBalancer'
-}
-$lb = Get-AzLoadBalancer @lbc
-
-## Create the frontend configuration ##
-$fe = @{
- Name = 'myFrontEndOutbound'
- PublicIPAddress = $publicIP
-}
-$lb | Add-AzLoadBalancerFrontendIPConfig @fe | Set-AzLoadBalancer
-
-## Create the outbound backend address pool ##
-$be = @{
- Name = 'myBackEndPoolOutbound'
-}
-$lb | Add-AzLoadBalancerBackendAddressPoolConfig @be | Set-AzLoadBalancer
-
-## Apply the outbound rule configuration to the load balancer. ##
-$rule = @{
- Name = 'myOutboundRule'
- AllocatedOutboundPort = '10000'
- Protocol = 'All'
- IdleTimeoutInMinutes = '15'
- FrontendIPConfiguration = $lb.FrontendIpConfigurations[1]
- BackendAddressPool = $lb.BackendAddressPools[1]
-}
-$lb | Add-AzLoadBalancerOutBoundRuleConfig @rule | Set-AzLoadBalancer
-
-```
-
-### Add virtual machines to outbound pool
-
-Add the virtual machine network interfaces to the outbound pool of the load balancer with [Add-AzNetworkInterfaceIpConfig](/powershell/module/az.network/add-aznetworkinterfaceipconfig):
-
-```azurepowershell-interactive
-## Get the load balancer configuration ##
-$lbc = @{
- ResourceGroupName = 'CreatePubLBQS-rg'
- Name = 'myLoadBalancer'
-}
-$lb = Get-AzLoadBalancer @lbc
-
-# For loop with variable to add virtual machines to backend outbound pool. ##
-for ($i=1; $i -le 3; $i++)
-{
-$nic = @{
- ResourceGroupName = 'CreatePubLBQS-rg'
- Name = "myNicVM$i"
-}
-$nicvm = Get-AzNetworkInterface @nic
-
-## Apply the backend to the network interface ##
-$be = @{
- Name = 'ipconfig1'
- LoadBalancerBackendAddressPoolId = $lb.BackendAddressPools[0].id,$lb.BackendAddressPools[1].id
-}
-$nicvm | Set-AzNetworkInterfaceIpConfig @be | Set-AzNetworkInterface
-}
-
-```
-
-# [**Basic SKU**](#tab/option-1-create-load-balancer-basic)
-
->[!NOTE]
->Standard SKU load balancer is recommended for production workloads. For more information about skus, see **[Azure Load Balancer SKUs](skus.md)**.
--
-## Create a public IP address - Basic
-
-Use [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) to create a public IP address.
-
-```azurepowershell-interactive
-$publicip = @{
- Name = 'myPublicIP'
- ResourceGroupName = 'CreatePubLBQS-rg'
Location = 'eastus'
- Sku = 'Basic'
- AllocationMethod = 'static'
+ PublicIpAddress = $publicIP
}
-New-AzPublicIpAddress @publicip
-
-```
+$natGateway = New-AzNatGateway @nat
-## Create basic load balancer
-
-This section details how you can create and configure the following components of the load balancer:
-
-* Create a front-end IP with [New-AzLoadBalancerFrontendIpConfig](/powershell/module/az.network/new-azloadbalancerfrontendipconfig) for the frontend IP pool. This IP receives the incoming traffic on the load balancer
-
-* Create a back-end address pool with [New-AzLoadBalancerBackendAddressPoolConfig](/powershell/module/az.network/new-azloadbalancerbackendaddresspoolconfig) for traffic sent from the frontend of the load balancer. This pool is where your backend virtual machines are deployed.
-
-* Create a health probe with [Add-AzLoadBalancerProbeConfig](/powershell/module/az.network/add-azloadbalancerprobeconfig) that determines the health of the backend VM instances.
-
-* Create a load balancer rule with [Add-AzLoadBalancerRuleConfig](/powershell/module/az.network/add-azloadbalancerruleconfig) that defines how traffic is distributed to the VMs.
-
-* Create a public load balancer with [New-AzLoadBalancer](/powershell/module/az.network/new-azloadbalancer).
-
-```azurepowershell-interactive
-## Place public IP created in previous steps into variable. ##
-$publicIp = Get-AzPublicIpAddress -Name 'myPublicIP' -ResourceGroupName 'CreatePubLBQS-rg'
-
-## Create load balancer frontend configuration and place in variable. ##
-$feip = New-AzLoadBalancerFrontendIpConfig -Name 'myFrontEnd' -PublicIpAddress $publicIp
-
-## Create backend address pool configuration and place in variable. ##
-$bepool = New-AzLoadBalancerBackendAddressPoolConfig -Name 'myBackEndPool'
-
-## Create the health probe and place in variable. ##
-$probe = @{
- Name = 'myHealthProbe'
- Protocol = 'tcp'
- Port = '80'
- IntervalInSeconds = '360'
- ProbeCount = '5'
-}
-$healthprobe = New-AzLoadBalancerProbeConfig @probe
-
-## Create the load balancer rule and place in variable. ##
-$lbrule = @{
- Name = 'myHTTPRule'
- Protocol = 'tcp'
- FrontendPort = '80'
- BackendPort = '80'
- IdleTimeoutInMinutes = '15'
- FrontendIpConfiguration = $feip
- BackendAddressPool = $bePool
-}
-$rule = New-AzLoadBalancerRuleConfig @lbrule
-
-## Create the load balancer resource. ##
-$loadbalancer = @{
- ResourceGroupName = 'CreatePubLBQS-rg'
- Name = 'myLoadBalancer'
- Location = 'eastus'
- Sku = 'Basic'
- FrontendIpConfiguration = $feip
- BackendAddressPool = $bePool
- LoadBalancingRule = $rule
- Probe = $healthprobe
-}
-New-AzLoadBalancer @loadbalancer
-
-```
-
-## Configure virtual network - Basic
-
-Before you deploy VMs and test your load balancer, create the supporting virtual network resources.
-
-Create a virtual network for the backend virtual machines.
-
-Create a network security group to define inbound connections to your virtual network.
-
-### Create virtual network, network security group, and bastion host
-
-* Create a virtual network with [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork).
-
-* Create a network security group rule with [New-AzNetworkSecurityRuleConfig](/powershell/module/az.network/new-aznetworksecurityruleconfig).
-
-* Create an Azure Bastion host with [New-AzBastion](/powershell/module/az.network/new-azbastion).
-
-* Create a network security group with [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup).
-
-```azurepowershell-interactive
## Create backend subnet config ## $subnet = @{ Name = 'myBackendSubnet' AddressPrefix = '10.1.0.0/24'
+ NatGateway = $natGateway
} $subnetConfig = New-AzVirtualNetworkSubnetConfig @subnet
$nsg = @{
SecurityRules = $rule1 } New-AzNetworkSecurityGroup @nsg- ```
-## Create virtual machines - Basic
-
-In this section, you'll create the virtual machines for the backend pool of the load balancer.
+## Create virtual machines
-* Create three network interfaces with [New-AzNetworkInterface](/powershell/module/az.network/new-aznetworkinterface).
+In this section, you'll create the two virtual machines for the backend pool of the load balancer.
-* Set an administrator username and password for the VMs with [Get-Credential](/powershell/module/microsoft.powershell.security/get-credential).
+* Create two network interfaces with [New-AzNetworkInterface](/powershell/module/az.network/new-aznetworkinterface)
-* Use [New-AzAvailabilitySet](/powershell/module/az.compute/new-azvm) to create an availability set for the virtual machines.
+* Set an administrator username and password for the VMs with [Get-Credential](/powershell/module/microsoft.powershell.security/get-credential)
* Create the virtual machines with:
+
* [New-AzVM](/powershell/module/az.compute/new-azvm)
+
* [New-AzVMConfig](/powershell/module/az.compute/new-azvmconfig)
+
* [Set-AzVMOperatingSystem](/powershell/module/az.compute/set-azvmoperatingsystem)
+
* [Set-AzVMSourceImage](/powershell/module/az.compute/set-azvmsourceimage)
+
* [Add-AzVMNetworkInterface](/powershell/module/az.compute/add-azvmnetworkinterface) ```azurepowershell-interactive
In this section, you'll create the virtual machines for the backend pool of the
$cred = Get-Credential ## Place the virtual network into a variable. ##
-$vnet = Get-AzVirtualNetwork -Name 'myVNet' -ResourceGroupName 'CreatePubLBQS-rg'
+$net = @{
+ Name = 'myVNet'
+ ResourceGroupName = 'CreatePubLBQS-rg'
+}
+$vnet = Get-AzVirtualNetwork @net
## Place the load balancer into a variable. ## $lb = @{
$lb = @{
$bepool = Get-AzLoadBalancer @lb | Get-AzLoadBalancerBackendAddressPoolConfig ## Place the network security group into a variable. ##
-$nsg = Get-AzNetworkSecurityGroup -Name 'myNSG' -ResourceGroupName 'CreatePubLBQS-rg'
-
-## Create availability set for the virtual machines. ##
-$set = @{
- Name = 'myAvSet'
+$ns = @{
+ Name = 'myNSG'
ResourceGroupName = 'CreatePubLBQS-rg'
- Location = 'eastus'
- Sku = 'Aligned'
- PlatformFaultDomainCount = '2'
- PlatformUpdateDomainCount = '2'
}
-$avs = New-AzAvailabilitySet @set
+$nsg = Get-AzNetworkSecurityGroup @ns
-## For loop with variable to create virtual machines. ##
-for ($i=1; $i -le 3; $i++)
+## For loop with variable to create virtual machines for load balancer backend pool. ##
+for ($i=1; $i -le 2; $i++)
{
-## Command to create network interface for VMs ##
-$nic = @{
- Name = "myNicVM$i"
- ResourceGroupName = 'CreatePubLBQS-rg'
- Location = 'eastus'
- Subnet = $vnet.Subnets[0]
- NetworkSecurityGroup = $nsg
- LoadBalancerBackendAddressPool = $bepool
-}
-$nicVM = New-AzNetworkInterface @nic
-
-## Create a virtual machine configuration for VMs ##
-$vmsz = @{
- VMName = "myVM$i"
- VMSize = 'Standard_DS1_v2'
- AvailabilitySetId = $avs.Id
-}
-$vmos = @{
- ComputerName = "myVM$i"
- Credential = $cred
-}
-$vmimage = @{
- PublisherName = 'MicrosoftWindowsServer'
- Offer = 'WindowsServer'
- Skus = '2019-Datacenter'
- Version = 'latest'
-}
-$vmConfig = New-AzVMConfig @vmsz `
- | Set-AzVMOperatingSystem @vmos -Windows `
- | Set-AzVMSourceImage @vmimage `
- | Add-AzVMNetworkInterface -Id $nicVM.Id
-
-## Create the virtual machine for VMs ##
-$vm = @{
- ResourceGroupName = 'CreatePubLBQS-rg'
- Location = 'eastus'
- VM = $vmConfig
-}
-New-AzVM @vm -AsJob
+ ## Command to create network interface for VMs ##
+ $nic = @{
+ Name = "myNicVM$i"
+ ResourceGroupName = 'CreatePubLBQS-rg'
+ Location = 'eastus'
+ Subnet = $vnet.Subnets[0]
+ NetworkSecurityGroup = $nsg
+ LoadBalancerBackendAddressPool = $bepool
+ }
+ $nicVM = New-AzNetworkInterface @nic
+
+ ## Create a virtual machine configuration for VMs ##
+ $vmsz = @{
+ VMName = "myVM$i"
+ VMSize = 'Standard_DS1_v2'
+ }
+ $vmos = @{
+ ComputerName = "myVM$i"
+ Credential = $cred
+ }
+ $vmimage = @{
+ PublisherName = 'MicrosoftWindowsServer'
+ Offer = 'WindowsServer'
+ Skus = '2019-Datacenter'
+ Version = 'latest'
+ }
+ $vmConfig = New-AzVMConfig @vmsz `
+ | Set-AzVMOperatingSystem @vmos -Windows `
+ | Set-AzVMSourceImage @vmimage `
+ | Add-AzVMNetworkInterface -Id $nicVM.Id
+
+ ## Create the virtual machine for VMs ##
+ $vm = @{
+ ResourceGroupName = 'CreatePubLBQS-rg'
+ Location = 'eastus'
+ VM = $vmConfig
+ Zone = "$i"
+ }
+ New-AzVM @vm -AsJob
}- ``` The deployments of the virtual machines and bastion host are submitted as PowerShell jobs. To view the status of the jobs, use [Get-Job](/powershell/module/microsoft.powershell.core/get-job):
Id Name PSJobTypeName State HasMoreData Location
1 Long Running O… AzureLongRunni… Completed True localhost New-AzBastion 2 Long Running O… AzureLongRunni… Completed True localhost New-AzVM 3 Long Running O… AzureLongRunni… Completed True localhost New-AzVM
-4 Long Running O… AzureLongRunni… Completed True localhost New-AzVM
```
+Ensure the **State** of the VM creation is **Completed** before moving on to the next steps.
- ## Install IIS
Use [Set-AzVMExtension](/powershell/module/az.compute/set-azvmextension) to inst
The extension runs `PowerShell Add-WindowsFeature Web-Server` to install the IIS webserver and then updates the Default.htm page to show the hostname of the VM: > [!IMPORTANT]
-> Ensure the virtual machine deployments have completed from the previous steps before proceeding. Use `Get-Job` to check the status of the virtual machine deployment jobs.
+> Ensure the virtual machine deployments have completed from the previous steps before proceeding. Use `Get-Job` to check the status of the virtual machine deployment jobs.
```azurepowershell-interactive ## For loop with variable to install custom script extension on virtual machines. ##
-for ($i=1; $i -le 3; $i++)
+for ($i=1; $i -le 2; $i++)
{ $ext = @{ Publisher = 'Microsoft.Compute'
Set-AzVMExtension @ext -AsJob
The extensions are deployed as PowerShell jobs. To view the status of the installation jobs, use [Get-Job](/powershell/module/microsoft.powershell.core/get-job): - ```azurepowershell-interactive Get-Job
Id Name PSJobTypeName State HasMoreData Location
-- - - -- -- -- - 8 Long Running O… AzureLongRunni… Running True localhost Set-AzVMExtension 9 Long Running O… AzureLongRunni… Running True localhost Set-AzVMExtension
-10 Long Running O… AzureLongRunni… Running True localhost Set-AzVMExtension
```
+Ensure the **State** of the jobs is **Completed** before moving on to the next steps.
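The completion check above can also be scripted rather than polled by hand. The following is a minimal sketch, assuming the deployment and extension jobs are the only background jobs in the current PowerShell session:

```azurepowershell
## Block until every background job in the session finishes. ##
Get-Job | Wait-Job

## List any jobs that did not complete successfully. ##
Get-Job | Where-Object { $_.State -ne 'Completed' }
```

If the second command returns any jobs, inspect them with `Receive-Job -Id <id>` to see the error before continuing.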
+ ## Test the load balancer Use [Get-AzPublicIpAddress](/powershell/module/az.network/get-azpublicipaddress) to get the public IP address of the load balancer:
$ip = @{
Name = 'myPublicIP' } Get-AzPublicIPAddress @ip | select IpAddress- ``` Copy the public IP address, and then paste it into the address bar of your browser. The default page of IIS Web server is displayed on the browser.
- ![IIS Web server](./media/tutorial-load-balancer-standard-zonal-portal/load-balancer-test.png)
-
-To see the load balancer distribute traffic across all three VMs, you can customize the default page of each VM's IIS Web server and then force-refresh your web browser from the client machine.
+ :::image type="content" source="./media/quickstart-load-balancer-standard-public-portal/load-balancer-test.png" alt-text="Screenshot of the load balancer test web page.":::
## Clean up resources
When no longer needed, you can use the [Remove-AzResourceGroup](/powershell/modu
```azurepowershell-interactive Remove-AzResourceGroup -Name 'CreatePubLBQS-rg'- ``` ## Next steps
-In this quickstart:
+In this quickstart, you:
+
+* Created an Azure Load Balancer
+
+* Attached 2 VMs to the load balancer
-* You created a standard or basic public load balancer
-* Attached virtual machines.
-* Configured the load balancer traffic rule and health probe.
-* Tested the load balancer.
+* Tested the load balancer
To learn more about Azure Load Balancer, continue to: > [!div class="nextstepaction"]
load-balancer Upgrade Internalbasic To Publicstandard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-InternalBasic-To-PublicStandard.md
- Title: Upgrade from Basic Internal to Standard Public - Azure Load Balancer
-description: This article shows you how to upgrade Azure Basic Internal Load Balancer to Standard Public Load Balancer
---- Previously updated : 01/23/2020---
-# Upgrade Azure Internal Load Balancer - Outbound Connection Required
-[Azure Standard Load Balancer](load-balancer-overview.md) offers a rich set of functionality and high availability through zone redundancy. To learn more about Load Balancer SKU, see [comparison table](./skus.md#skus). Since Standard Internal Load Balancer does not provide outbound connection, we provide a solution to create a Standard Public Load Balancer instead.
-
-There are three stages in an upgrade:
-
-1. Migrate the configuration to Standard Public Load Balancer
-2. Add VMs to backend pools of Standard Public Load Balancer
-3. Set up NSG rules for Subnet/VMs that should be refrained from/to the Internet
-
-This article covers configuration migration. Adding VMs to backend pools may vary depending on your specific environment. However, some high-level, general recommendations [are provided](#add-vms-to-backend-pools-of-standard-load-balancer).
-
-## Upgrade overview
-
-An Azure PowerShell script is available that does the following:
-
-* Creates a Standard SKU Public Load Balancer in the resource group and location that you specify.
-* Seamlessly copies the configurations of the Basic SKU Internal Load Balancer to the newly create Standard Public Load Balancer.
-* Creates an outbound rule which enables egress connectivity.
-
-### Caveats\Limitations
-
-* Script supports Internal Load Balancer upgrade where outbound connection is required. If outbound connection is not required for any of the VMs, refer to [this page](upgrade-basicInternal-standard.md) for best practice.
-* The Standard Load Balancer has a new public address. It's impossible to move the IP addresses associated with existing Basic Internal Load Balancer seamlessly to Standard Public Load Balancer since they have different SKUs.
-* If the Standard load balancer is created in a different region, you won't be able to associate the VMs existing in the old region to the newly created Standard Load Balancer. To work around this limitation, make sure to create a new VM in the new region.
-* If your Load Balancer does not have any frontend IP configuration or backend pool, you are likely to hit an error running the script. Make sure they are not empty.
-
-## Download the script
-
-Download the migration script from the [PowerShell Gallery](https://www.powershellgallery.com/packages/AzureLBUpgrade/2.0).
-## Use the script
-
-There are two options for you depending on your local PowerShell environment setup and preferences:
-
-* If you don't have the Azure Az modules installed, or don't mind uninstalling the Azure Az modules, the best option is to use the `Install-Script` option to run the script.
-* If you need to keep the Azure Az modules, your best bet is to download the script and run it directly.
-
-To determine if you have the Azure Az modules installed, run `Get-InstalledModule -Name az`. If you don't see any installed Az modules, then you can use the `Install-Script` method.
-
-### Install using the Install-Script method
-
-To use this option, you must not have the Azure Az modules installed on your computer. If they're installed, the following command displays an error. You can either uninstall the Azure Az modules, or use the other option to download the script manually and run it.
-
-Run the script with the following command:
-
-`Install-Script -Name AzureLBUpgrade`
-
-This command also installs the required Az modules.
-
-### Install using the script directly
-
-If you do have some Azure Az modules installed and can't uninstall them (or don't want to uninstall them), you can manually download the script using the **Manual Download** tab in the script download link. The script is downloaded as a raw nupkg file. To install the script from this nupkg file, see [Manual Package Download](/powershell/scripting/gallery/how-to/working-with-packages/manual-download).
-
-To run the script:
-
-1. Use `Connect-AzAccount` to connect to Azure.
-
-1. Use `Import-Module Az` to import the Az modules.
-
-1. Examine the required parameters:
-
- * **oldRgName: [String]: Required** – This is the resource group for your existing Basic Load Balancer you want to upgrade. To find this string value, navigate to Azure portal, select your Basic Load Balancer source, and click the **Overview** for the load balancer. The Resource Group is located on that page.
- * **oldLBName: [String]: Required** – This is the name of your existing Basic Load Balancer you want to upgrade.
- * **newrgName: [String]: Required** – This is the resource group in which the Standard Load Balancer will be created. It can be a new resource group or an existing one. If you pick an existing resource group, note that the name of the Load Balancer has to be unique within the resource group.
- * **newlocation: [String]: Required** – This is the location in which the Standard Load Balancer will be created. We recommend inheriting the same location of the chosen Basic Load Balancer to the Standard Load Balancer for better association with other existing resources.
- * **newLBName: [String]: Required** – This is the name for the Standard Load Balancer to be created.
-1. Run the script using the appropriate parameters. It may take five to seven minutes to finish.
-
- **Example**
-
- ```azurepowershell
- AzureLBUpgrade.ps1 -oldRgName "test_publicUpgrade_rg" -oldLBName "LBForPublic" -newrgName "test_userInput3_rg" -newlocation "centralus" -newLbName "LBForUpgrade"
- ```
-
-### Add VMs to backend pools of Standard Load Balancer
-
-First, double check that the script successfully created a new Standard Public Load Balancer with the exact configuration migrated over from your Basic Internal Load Balancer. You can verify this from the Azure portal.
-
-Be sure to send a small amount of traffic through the Standard Load Balancer as a manual test.
-
-Here are a few scenarios of how you add VMs to backend pools of the newly created Standard Public Load Balancer may be configured, and our recommendations for each one:
-
-* **Moving existing VMs from backend pools of old Basic Internal Load Balancer to backend pools of newly created Standard Public Load Balancer**.
- 1. To do the tasks in this quickstart, sign in to the [Azure portal](https://portal.azure.com).
-
- 1. Select **All resources** on the left menu, and then select the **newly created Standard Load Balancer** from the resource list.
-
- 1. Under **Settings**, select **Backend pools**.
-
- 1. Select the backend pool which matches the backend pool of the Basic Load Balancer, select the following value:
- - **Virtual Machine**: Drop down and select the VMs from the matching backend pool of the Basic Load Balancer.
- 1. Select **Save**.
- >[!NOTE]
- >For VMs which have Public IPs, you will need to create Standard IP addresses first where same IP address is not guaranteed. Disassociate VMs from Basic IPs and associate them with the newly created Standard IP addresses. Then, you will be able to follow instructions to add VMs into backend pool of Standard Load Balancer.
-
-* **Creating new VMs to add to the backend pools of the newly created Standard Public Load Balancer**.
- * More instructions on how to create VM and associate it with Standard Load Balancer can be found [here](./quickstart-load-balancer-standard-public-portal.md#create-virtual-machines).
-
-### Create an outbound rule for outbound connection
-
-Follow the [instructions](./quickstart-load-balancer-standard-public-powershell.md#create-outbound-rule-configuration) to create an outbound rule so you can
-* Define outbound NAT from scratch.
-* Scale and tune the behavior of existing outbound NAT.
-
-### Create NSG rules for VMs which to refrain communication from or to the Internet
-If you would like to refrain Internet traffic from reaching to your VMs, you can create an [NSG rule](../virtual-network/manage-network-security-group.md) on the Network Interface of the VMs.
-
-## Common questions
-
-### Are there any limitations with the Azure PowerShell script to migrate the configuration from v1 to v2?
-
-Yes. See [Caveats/Limitations](#caveatslimitations).
-
-### Does the Azure PowerShell script also switch over the traffic from my Basic Load Balancer to the newly created Standard Load Balancer?
-
-No. The Azure PowerShell script only migrates the configuration. Actual traffic migration is your responsibility and in your control.
-
-## Next steps
-
-[Learn about Standard Load Balancer](load-balancer-overview.md)
load-balancer Upgrade Basic Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-basic-standard.md
Title: Upgrade from Basic Public to Standard Public - Azure Load Balancer
-description: This article shows you how to upgrade Azure Public Load Balancer from Basic SKU to Standard SKU
+ Title: Upgrade a basic to standard public load balancer
+
+description: This article shows you how to upgrade a public load balancer from basic to standard SKU.
-+ Previously updated : 01/23/2020- Last updated : 03/17/2022+
+# Upgrade from a basic public to standard public load balancer
-# Upgrade Azure Public Load Balancer
-[Azure Standard Load Balancer](load-balancer-overview.md) offers a rich set of functionality and high availability through zone redundancy. To learn more about Load Balancer SKU, see [comparison table](./skus.md#skus).
+[Azure Standard Load Balancer](load-balancer-overview.md) offers a rich set of functionality and high availability through zone redundancy. To learn more about Azure Load Balancer SKUs, see [comparison table](./skus.md#skus).
There are two stages in an upgrade:
-1. Change IP allocation method from Dynamic to Static.
+1. Change IP allocation method from **Dynamic** to **Static**.
+ 2. Run the PowerShell script to complete the upgrade and traffic migration. ## Upgrade overview
-An Azure PowerShell script is available that does the following:
+An Azure PowerShell script is available that does the following procedures:
+
+* Creates a standard load balancer with a location you specify in the same resource group of the basic load balancer
+
+* Upgrades the public IP address from basic SKU to standard SKU in-place
+
+* Copies the configurations of the basic load balancer to the newly standard load balancer
-* Creates a Standard SKU Load Balancer with location you specify in the same resource group of the Basic Standard Load Balancer.
-* Upgrades Public IP address from Basic SKU to Standard SKU in-place.
-* Seamlessly copies the configurations of the Basic SKU Load Balancer to the newly create Standard Load Balancer.
-* Creates a default outbound rule which enables outbound connectivity.
+* Creates a default outbound rule that enables outbound connectivity
-### Caveats/Limitations
+### Constraints
-* Script only supports Public Load Balancer upgrade. For Internal Basic Load Balancer upgrade, refer to [this page](./upgrade-basicinternal-standard.md) for instructions.
-* The allocation method of the Public IP Address has to be changed to "static" before running the script.
-* If your Load Balancer does not have any frontend IP configuration or backend pool, you are likely to hit an error running the script. Please make sure they are not empty.
+* The script only supports a public load balancer upgrade. For an internal basic load balancer upgrade, see [Upgrade from basic internal to standard internal - Azure Load Balancer](./upgrade-basicinternal-standard.md) for instructions and more information
-### Change Allocation method of the Public IP Address to Static
+* The allocation method of the public IP Address must be changed to **static** before running the script
-* **Here are our recommended steps:**
+* If the load balancer doesn't have a frontend IP configuration or backend pool, you'll encounter an error running the script. Ensure the load balancer has a frontend IP and backend pool
- 1. To do the tasks in this quickstart, sign in to the [Azure portal](https://portal.azure.com).
+### Change allocation method of the public IP address to static
+
+The following are the recommended steps to change the allocation method.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
- 1. Select **All resources** on the left menu, and then select the **Basic Public IP Address associated with Basic Load Balancer** from the resource list.
+2. Select **All resources** in the left menu. Select the **basic public IP address associated with the basic load balancer** from the resource list.
- 1. Under **Settings**, select **Configurations**.
+3. In the **Settings** of the basic public IP address, select **Configurations**.
- 1. Under **Assignment**, select **Static**.
- 1. Select **Save**.
- >[!NOTE]
- >For VMs which have Public IPs, you will need to create Standard IP addresses first where same IP address is not guaranteed. Disassociate VMs from Basic IPs and associate them with the newly created Standard IP addresses. Then, you will be able to follow instructions to add VMs into backend pool of Standard Load Balancer.
+4. In **Assignment**, select **Static**.
+
+5. Select **Save**.
+
+>[!NOTE]
+>For virtual machines which have public IPs, you must create standard IP addresses first. The same IP address is not guaranteed. Disassociate the VMs from the basic IPs and associate them with the newly created standard IP addresses. You'll then be able to follow the instructions to add VMs into the backend pool of the Standard Azure Load Balancer.
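The portal steps above have an Azure PowerShell equivalent. This is a sketch only; `myPublicIP` and `myResourceGroup` are hypothetical names for your basic public IP address and its resource group:

```azurepowershell
## Retrieve the basic public IP address. ##
$publicIp = Get-AzPublicIpAddress -Name 'myPublicIP' -ResourceGroupName 'myResourceGroup'

## Change the allocation method to static and apply the change. ##
$publicIp.PublicIpAllocationMethod = 'Static'
Set-AzPublicIpAddress -PublicIpAddress $publicIp
```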
-* **Creating new VMs to add to the backend pools of the newly created Standard Public Load Balancer**.
- * More instructions on how to create VM and associate it with Standard Load Balancer can be found [here](./quickstart-load-balancer-standard-public-portal.md#create-virtual-machines).
+### Create new VMs to add to the backend pool of the new standard load balancer
+* To create a virtual machine and associate it with the load balancer, see [Create virtual machines](./quickstart-load-balancer-standard-public-portal.md#create-virtual-machines).
## Download the script
-Download the migration script from the [PowerShell Gallery](https://www.powershellgallery.com/packages/AzurePublicLBUpgrade/6.0).
+Download the migration script from the [PowerShell Gallery](https://www.powershellgallery.com/packages/AzurePublicLBUpgrade/6.0).
+ ## Use the script
-There are two options for you depending on your local PowerShell environment setup and preferences:
+There are two options depending on your local PowerShell environment setup and preferences:
-* If you donΓÇÖt have the Azure Az modules installed, or donΓÇÖt mind uninstalling the Azure Az modules, the best option is to use the `Install-Script` option to run the script.
-* If you need to keep the Azure Az modules, your best bet is to download the script and run it directly.
+* If you don't have the Azure Az modules installed, or don't mind uninstalling the Azure Az modules, use the `Install-Script` option to run the script.
+
+* If you need to keep the Azure Az modules, download the script and run it directly.
To determine if you have the Azure Az modules installed, run `Get-InstalledModule -Name az`. If you don't see any installed Az modules, then you can use the `Install-Script` method.
-### Install using the Install-Script method
+### Install with Install-Script
-To use this option, you must not have the Azure Az modules installed on your computer. If they're installed, the following command displays an error. You can either uninstall the Azure Az modules, or use the other option to download the script manually and run it.
+To use this option, you must not have the Azure Az modules installed on your computer. If they're installed, the following command displays an error. Either uninstall the Azure Az modules, or use the other option to download the script manually and run it.
Run the script with the following command:
-`Install-Script -Name AzurePublicLBUpgrade`
-
+```azurepowershell
+Install-Script -Name AzurePublicLBUpgrade
+```
This command also installs the required Az modules.
-### Install using the script directly
+### Install with the script directly
-If you do have some Azure Az modules installed and can't uninstall them (or don't want to uninstall them), you can manually download the script using the **Manual Download** tab in the script download link. The script is downloaded as a raw nupkg file. To install the script from this nupkg file, see [Manual Package Download](/powershell/scripting/gallery/how-to/working-with-packages/manual-download).
+If you have the Azure Az modules installed and can't uninstall them, or don't want to uninstall them, you can manually download the script using the **Manual Download** tab in the script download link. The script is downloaded as a raw **nupkg** file. To install the script from this **nupkg** file, see [Manual Package Download](/powershell/scripting/gallery/how-to/working-with-packages/manual-download).
To run the script: 1. Use `Connect-AzAccount` to connect to Azure.
-1. Use `Import-Module Az` to import the Az modules.
+2. Use `Import-Module Az` to import the Az modules.
+
+3. Examine the required parameters:
-1. Examine the required parameters:
 * **oldRgName: [String]: Required** – This parameter is the resource group of the existing basic load balancer you want to upgrade. To find this string value, navigate to the Azure portal, select your basic load balancer source, and select the **Overview** for the load balancer. The resource group is located on that page.
+
+ * **oldLBName: [String]: Required** – This parameter is the name of your existing basic load balancer you want to upgrade.
+
+ * **newLBName: [String]: Required** – This parameter is the name for the standard load balancer to be created.
- * **oldRgName: [String]: Required** ΓÇô This is the resource group for your existing Basic Load Balancer you want to upgrade. To find this string value, navigate to Azure portal, select your Basic Load Balancer source, and click the **Overview** for the load balancer. The Resource Group is located on that page.
- * **oldLBName: [String]: Required** ΓÇô This is the name of your existing Basic Balancer you want to upgrade.
- * **newLBName: [String]: Required** ΓÇô This is the name for the Standard Load Balancer to be created.
-1. Run the script using the appropriate parameters. It may take five to seven minutes to finish.
+4. Run the script using the appropriate parameters. It may take five to seven minutes to finish.
**Example**
To run the script:
AzurePublicLBUpgrade.ps1 -oldRgName "test_publicUpgrade_rg" -oldLBName "LBForPublic" -newLbName "LBForUpgrade" ```
-### Create an outbound rule for outbound connection
+### Create a NAT gateway for outbound access
+
+The script creates an outbound rule that enables outbound connectivity. Azure Virtual Network NAT is the recommended service for outbound connectivity. For more information about Azure Virtual Network NAT, see [What is Azure Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md).
-Follow the [instructions](./quickstart-load-balancer-standard-public-powershell.md#create-outbound-rule-configuration) to create an outbound rule so you can
-* Define outbound NAT from scratch.
-* Scale and tune the behavior of existing outbound NAT.
+To create a NAT gateway resource and associate it with a subnet of your virtual network see, [Create NAT gateway](quickstart-load-balancer-standard-public-portal.md#create-nat-gateway).
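As a sketch of the NAT gateway approach, the following assumes hypothetical resource names and an `eastus` location; see the linked quickstart for the authoritative steps, including subnet association:

```azurepowershell
## Create a standard static public IP address for the NAT gateway. ##
$natIp = New-AzPublicIpAddress -Name 'myNATgatewayIP' -ResourceGroupName 'myResourceGroup' -Location 'eastus' -Sku 'Standard' -AllocationMethod 'Static'

## Create the NAT gateway resource with the public IP address. ##
New-AzNatGateway -Name 'myNATgateway' -ResourceGroupName 'myResourceGroup' -Location 'eastus' -Sku 'Standard' -PublicIpAddress $natIp -IdleTimeoutInMinutes 10
```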
## Common questions ### Are there any limitations with the Azure PowerShell script to migrate the configuration from v1 to v2?
-Yes. See [Caveats/Limitations](#caveatslimitations).
+Yes. See [Constraints](#constraints).
### How long does the upgrade take?
-It usually take about a few minutes for the script to finish and it could take longer depending on the complexity of your Load Balancer configuration. Therefore, keep the downtime in mind and plan for failover if necessary.
+It usually takes a few minutes for the script to finish and it could take longer depending on the complexity of your load balancer configuration. Keep the downtime in mind and plan for failover if necessary.
-### Does the Azure PowerShell script also switch over the traffic from my Basic Load Balancer to the newly created Standard Load Balancer?
+### Does the script switch over the traffic from my basic load balancer to the newly created standard load balancer?
-Yes. The Azure PowerShell script not only upgrades the Public IP address, copies the configuration from Basic to Standard Load Balancer, but also migrates VM to behind the newly created Standard Public Load Balancer as well.
+Yes. The Azure PowerShell script upgrades the public IP address, copies the configuration from the basic to standard load balancer, and migrates the virtual machine to the newly created public standard load balancer.
## Next steps
-[Learn about Standard Load Balancer](load-balancer-overview.md)
+[Learn about Azure Load Balancer](load-balancer-overview.md)
load-balancer Upgrade Internalbasic To Publicstandard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-internalbasic-to-publicstandard.md
+
+ Title: Upgrade an internal basic load balancer - Outbound connections required
+
+description: Learn how to upgrade a basic internal load balancer to a standard public load balancer.
+++ Last updated : 03/17/2022+++
+# Upgrade an internal basic load balancer - Outbound connections required
+
+A standard [Azure Load Balancer](load-balancer-overview.md) offers increased functionality and high availability through zone redundancy. For more information about Azure Load Balancer SKUs, see [Azure Load Balancer SKUs](./skus.md#skus). A standard internal Azure Load Balancer doesn't provide outbound connectivity. The PowerShell script in this article migrates the basic load balancer configuration to a standard public load balancer.
+
+There are three stages in the upgrade:
+
+1. Migrate the configuration to a standard public load balancer
+
+2. Add virtual machines to the backend pools of the standard public load balancer
+
+3. Create Network Security Group (NSG) rules for subnets and virtual machines that require internet connection restrictions
+
+This article covers a configuration migration. Adding the VMs to the backend pool may vary depending on your specific environment. See [Add VMs to the backend pool](#add-vms-to-the-backend-pool-of-the-standard-load-balancer) later in this article for recommendations.
+
+## Upgrade overview
+
+An Azure PowerShell script is available that does the following procedures:
+
+* Creates a standard public load balancer in the resource group and location that you specify
+
+* Copies the configurations of the basic internal load balancer to the newly created standard public load balancer.
+
+* Creates an outbound rule that enables outbound connectivity
+
+### Constraints
+
+* The script supports an internal load balancer upgrade where outbound connectivity is required. If outbound connectivity isn't required, see [Upgrade an internal basic load balancer - Outbound connections not required](upgrade-basicinternal-standard.md).
+
+* The standard load balancer has a new public address. It's impossible to move the IP addresses associated with the existing basic internal load balancer to a standard public load balancer because they have different SKUs.
+
+* If the standard load balancer is created in a different region, you won't be able to associate the VMs in the old region. To avoid this constraint, ensure you create new VMs in the new region.
+
+* If the load balancer doesn't have a frontend IP configuration or backend pool, you'll encounter an error running the script. Ensure the load balancer has a frontend IP and backend pool
+
+## Download the script
+
+Download the migration script from the [PowerShell Gallery](https://www.powershellgallery.com/packages/AzureLBUpgrade/2.0).
+
+## Use the script
+
+There are two options depending on your local PowerShell environment setup and preferences:
+
+* If you don't have the Azure Az modules installed, or don't mind uninstalling the Azure Az modules, use the `Install-Script` option to run the script.
+
+* If you need to keep the Azure Az modules, download the script and run it directly.
+
+To determine if you have the Azure Az modules installed, run `Get-InstalledModule -Name az`. If you don't see any installed Az modules, then you can use the `Install-Script` method.
+
+### Install with Install-Script
+
+To use this option, you must not have the Azure Az modules installed on your computer. If they're installed, the following command displays an error. Either uninstall the Azure Az modules, or use the other option to download the script manually and run it.
+
+Run the script with the following command:
+
+```azurepowershell
+Install-Script -Name AzureLBUpgrade
+```
+This command also installs the required Az modules.
+
+### Install with the script directly
+
+If you have the Azure Az modules installed and can't uninstall them, or don't want to uninstall them, you can manually download the script using the **Manual Download** tab in the script download link. The script is downloaded as a raw **nupkg** file. To install the script from this **nupkg** file, see [Manual Package Download](/powershell/scripting/gallery/how-to/working-with-packages/manual-download).
+
+To run the script:
+
+1. Use `Connect-AzAccount` to connect to Azure.
+
+2. Use `Import-Module Az` to import the Az modules.
+
+3. Examine the required parameters:
+
 * **oldRgName: [String]: Required** – This parameter is the resource group of the existing basic load balancer you want to upgrade. To find this string value, navigate to the Azure portal, select your basic load balancer source, and select the **Overview** for the load balancer. The resource group is located on that page.
+
+ * **oldLBName: [String]: Required** – This parameter is the name of your existing basic load balancer you want to upgrade.
+
+ * **newRgName: [String]: Required** – This parameter is the resource group where the standard load balancer is created. The resource group can be new or existing. If you choose an existing resource group, the name of the load balancer must be unique within the resource group.
+
+ * **newLocation: [String]: Required** – This parameter is the location where the standard load balancer is created. We recommend you choose the same location as the basic load balancer to ensure association of existing resources.
+
+ * **newLBName: [String]: Required** – This parameter is the name for the standard load balancer to be created.
+
+4. Run the script using the appropriate parameters. It may take five to seven minutes to finish.
+
+ **Example**
+
+ ```azurepowershell
+ AzureLBUpgrade.ps1 -oldRgName "test_publicUpgrade_rg" -oldLBName "LBForPublic" -newRgName "test_userInput3_rg" -newLocation "centralus" -newLbName "LBForUpgrade"
+ ```
+
+### Add VMs to the backend pool of the standard load balancer
+
+Ensure that the script successfully created a new standard public load balancer with the exact configuration from your basic internal load balancer. You can verify the configuration from the Azure portal.
+
+Send a small amount of traffic through the standard load balancer as a manual test.
+
+The following scenarios explain how you add VMs to the backend pools of the newly created standard public load balancer, and our recommendations for each scenario:
+
+* **Move existing VMs from the backend pools of the old basic internal load balancer to the backend pools of the new standard public load balancer**
+
+ 1. Sign in to the [Azure portal](https://portal.azure.com).
+
+ 2. Select **All resources** in the left menu. Select the **new standard load balancer** from the resource list.
+
+    3. Under **Settings** on the load balancer page, select **Backend pools**.
+
+ 4. Select the backend pool that matches the backend pool of the basic load balancer.
+
+    5. Select **Virtual Machine**.
+
+ 6. Select the VMs from the matching backend pool of the basic load balancer.
+
+ 7. Select **Save**.
+
+ >[!NOTE]
+    >For virtual machines that have public IPs, you must create standard IP addresses first. The same IP address isn't guaranteed. Disassociate the VMs from the basic IPs and associate them with the newly created standard IP addresses. You'll then be able to follow the instructions to add VMs into the backend pool of the Standard Azure Load Balancer.
+
+* **Create new VMs to add to the backend pools of the new standard public load balancer**.
+
+ * To create a virtual machine and associate it with the load balancer, see [Create virtual machines](./quickstart-load-balancer-standard-public-portal.md#create-virtual-machines).
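The portal steps above can also be scripted. The following Az PowerShell sketch adds an existing VM's NIC to a backend pool of the new standard load balancer. It's illustrative only; the load balancer, backend pool, NIC, and resource group names are placeholders for your own resources:

```azurepowershell
# Illustrative sketch only - replace all names with your own resources.
$lb   = Get-AzLoadBalancer -Name "LBForUpgrade" -ResourceGroupName "test_userInput3_rg"
$pool = Get-AzLoadBalancerBackendAddressPoolConfig -LoadBalancer $lb -Name "myBackendPool"

# Attach the VM's NIC IP configuration to the backend pool.
$nic = Get-AzNetworkInterface -Name "myVmNic" -ResourceGroupName "myVmResourceGroup"
$nic.IpConfigurations[0].LoadBalancerBackendAddressPools.Add($pool)
Set-AzNetworkInterface -NetworkInterface $nic
```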
+
+### Create a NAT gateway for outbound access
+
+The script creates an outbound rule that enables outbound connectivity. Azure Virtual Network NAT is the recommended service for outbound connectivity. For more information about Azure Virtual Network NAT, see [What is Azure Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md).
+
+To create a NAT gateway resource and associate it with a subnet of your virtual network see, [Create NAT gateway](quickstart-load-balancer-standard-public-portal.md#create-nat-gateway).
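As a sketch of that flow with Az PowerShell (all names, the region, and the subnet address prefix below are placeholders; this is illustrative, not a definitive implementation):

```azurepowershell
# Illustrative sketch only - replace names, region, and address prefix with your own.
$pip = New-AzPublicIpAddress -ResourceGroupName "test_userInput3_rg" -Name "myNatIP" `
    -Location "centralus" -Sku "Standard" -AllocationMethod "Static"
$nat = New-AzNatGateway -ResourceGroupName "test_userInput3_rg" -Name "myNatGateway" `
    -Location "centralus" -Sku "Standard" -IdleTimeoutInMinutes 10 -PublicIpAddress $pip

# Associate the NAT gateway with the subnet that hosts the backend VMs.
$vnet = Get-AzVirtualNetwork -Name "myVNet" -ResourceGroupName "test_userInput3_rg"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "myBackendSubnet" `
    -AddressPrefix "10.0.0.0/24" -NatGateway $nat | Set-AzVirtualNetwork
```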
+
+### Create NSG rules for subnets and virtual machines that require internet connection restrictions
+
+For more information about creating Network Security Groups and restricting internet traffic, see [Create, change, or delete an Azure network security group](../virtual-network/manage-network-security-group.md).
+
+## Common questions
+
+### Are there any limitations with the Azure PowerShell script to migrate the configuration from v1 to v2?
+
+Yes. See [Constraints](#constraints).
+
+### Does the Azure PowerShell script switch over the traffic from my basic load Balancer to the new standard load balancer?
+
+No. The Azure PowerShell script only migrates the configuration. Actual traffic migration is your responsibility and in your control.
+
+## Next steps
+
+[Learn about Azure Load Balancer](load-balancer-overview.md)
logic-apps Create Serverless Apps Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-serverless-apps-visual-studio.md
For more information, review the following articles:
* Download and install the following tools, if you don't already have them:
- * [Visual Studio 2019, 2017, or 2015 (Community or other edition)](https://aka.ms/download-visual-studio). This quickstart uses Visual Studio Community 2019, which is free.
+ * [Visual Studio 2019, 2017, or 2015 (Community or other edition)](https://aka.ms/download-visual-studio). The Azure Logic Apps extension is currently unavailable for Visual Studio 2022. This quickstart uses Visual Studio Community 2019, which is free.
> [!IMPORTANT]
> When you install Visual Studio 2019 or 2017, make sure to select the **Azure development** workload.
For more information, review the following articles:
* [Azure PowerShell](https://github.com/Azure/azure-powershell#installation).
- * Azure Logic Apps Tools for the Visual Studio version that you're using. You can either [learn how install this extension from inside Visual Studio](/visualstudio/ide/finding-and-using-visual-studio-extensions), or you can download the respective versions of the Azure Logic Apps Tools from the Visual Studio Marketplace:
+ * The latest Azure Logic Apps Tools extension for the Visual Studio version that you want. You can either [learn how to install this extension from inside Visual Studio](/visualstudio/ide/finding-and-using-visual-studio-extensions), or you can download the respective versions of the Azure Logic Apps Tools from the Visual Studio Marketplace:
* [Visual Studio 2019](https://aka.ms/download-azure-logic-apps-tools-visual-studio-2019)
+ * [Visual Studio 2017](https://aka.ms/download-azure-logic-apps-tools-visual-studio-2017)
+ * [Visual Studio 2015](https://aka.ms/download-azure-logic-apps-tools-visual-studio-2015)

> [!IMPORTANT]
If you have logic app resources already deployed in Azure, you can edit, manage,
## Next steps
-* For another example using Azure Logic Apps and Azure Functions, try [Tutorial: Automate tasks to process emails by using Azure Logic Apps, Azure Functions, and Azure Storage](tutorial-process-email-attachments-workflow.md)
+* For another example using Azure Logic Apps and Azure Functions, try [Tutorial: Automate tasks to process emails by using Azure Logic Apps, Azure Functions, and Azure Storage](tutorial-process-email-attachments-workflow.md)
machine-learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/release-notes.md
In this article, learn about Azure Data Science Virtual Machine releases. For a
Due to the rapidly evolving needs and package updates, we aim to release new Azure Data Science Virtual Machine images for Windows and Ubuntu every month.
-Azure portal users will always find the latest image available for provisioning the Data Science Virtual Machine. For CLI or ARM users, we keep images of individual versions available for twelve months. After that period, particular version of image is no longer available for provisioning.
+Azure portal users will always find the latest image available for provisioning the Data Science Virtual Machine. For CLI or Azure Resource Manager (ARM) users, we keep images of individual versions available for 12 months. After that period, that particular version of the image is no longer available for provisioning.
See the [list of known issues](reference-known-issues.md) to learn about known bugs and workarounds.
+## March 18, 2022
+[Data Science Virtual Machine - Windows 2019](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.dsvm-win-2019?tab=Overview)
+
+Version: 22.03.09
+
+Main changes:
+
+- Updated R environment - added libraries: Cluster, Devtools, Factoextra, Glue, Here, Ottr, Paletteer, Patchwork, Plotly, Rmd2jupyter, Scales, Statip, Summarytools, Tidyverse, Tidymodels, and Testthat
+- Further `Log4j` vulnerability mitigation - although `log4j` isn't used, we moved all `log4j` components to version 2: the old `log4j` 1.0 jars were removed and replaced with `log4j` 2.0 jars
+- Updated Azure CLI to version 2.33.1
+- Redesign of Conda environments - we're continuing with alignment and refining the Conda environments so we created:
+ - `azureml_py38`: environment based on Python 3.8 with preinstalled [AzureML SDK](/python/api/overview/azure/ml/?view=azure-ml-py) containing also [AutoML](/azure/machine-learning/concept-automated-ml) environment
+  - `azureml_py38_PT_TF`: an environment complementary to `azureml_py38`, preinstalled with the latest TensorFlow and PyTorch
+ - `py38_default`: default system environment based on Python 3.8
+ - we removed `azureml_py36_tensorflow`, `azureml_py36_pytorch`, `py38_tensorflow` and `py38_pytorch` environments.
+
+
+
+
## March 9, 2022
[Data Science Virtual Machine - Windows 2019](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.dsvm-win-2019?tab=Overview)
Version: 21.12.03
Windows 2019 DSVM will now be supported under publisher: microsoft-dsvm, offer ID: dsvm-win-2019, plan ID/SKU ID: winserver-2019
-Users using ARM template / VMSS to deploy the Windows DSVM machines, should configure the SKU with winserver-2019 instead of server-2019, since we will continue to ship updates to Windows DSVM images on the new SKU from March, 2022.
+Users using Azure Resource Manager (ARM) template / virtual machine scale set (VMSS) to deploy the Windows DSVM machines, should configure the SKU with `winserver-2019` instead of `server-2019`, since we'll continue to ship updates to Windows DSVM images on the new SKU from March, 2022.
## December 3, 2021
Version: 21.06.22
Main changes:
- Updated to PyTorch 1.9.0
-- Fixed a bug where git was not available
+- Fixed a bug where git wasn't available
## June 1, 2021
Selected version updates are:
- CUDA 11.3, cuDNN 8, NCCL2
- Python 3.8
- R 4.0.5
-- Spark 3.1 incl. mmlspark, connectors to Blob Storage, Data Lake, CosmosDB
+- Spark 3.1 incl. mmlspark, connectors to Blob Storage, Data Lake, Cosmos DB
- Java 11 (OpenJDK)
- Jupyter Lab 3.0.14
- PyTorch 1.8.1 incl. torchaudio, torchtext, torchvision, torch-tb-profiler
Selected version updates are:
- Microsoft Edge browser (beta) <br/>
-Added docker. To save resources, the docker service is not started by default. To start the docker service, run the
+Added docker. To save resources, the docker service isn't started by default. To start the docker service, run the
following command-line commands: ```
machine-learning How To Create Text Labeling Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-text-labeling-projects.md
Previously updated : 10/21/2021 Last updated : 03/18/2022
-# Create a text labeling project and export labels (preview)
+# Create a text labeling project and export labels
Learn how to create and run data labeling projects to label text data in Azure Machine Learning. Specify either a single label or multiple labels to be applied to each text item. You can also use the data labeling tool to [create an image labeling project](how-to-create-image-labeling-projects.md).
-> [!IMPORTANT]
-> Text labeling is currently in public preview.
-> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- ## Text labeling capabilities

Azure Machine Learning data labeling is a central place to create, manage, and monitor data labeling projects:

-- Coordinate data, labels, and team members to efficiently manage labeling tasks.
+- Coordinate data, labels, and team members to efficiently manage labeling tasks.
- Tracks progress and maintains the queue of incomplete labeling tasks.
- Start and stop the project and control the labeling progress.
- Review the labeled data and export it as an Azure Machine Learning dataset.
Data formats available for text data:
[!INCLUDE [start](../../includes/machine-learning-data-labeling-start.md)]
-1. To create a project, select **Add project**. Give the project an appropriate name. The project name cannot be reused, even if the project is deleted in future.
-1. To create a project, select **Add project**. Give the project an appropriate name. The project name cannot be reused, even if the project is deleted in future.
+1. To create a project, select **Add project**. Give the project an appropriate name. The project name can't be reused, even if the project is deleted in the future.
1. Select **Text** to create a text labeling project.
- :::image type="content" source="media/how-to-create-labeling-projects/text-labeling-creation-wizard.png" alt-text="Labeling project creation for text labeling":::
+ :::image type="content" source="media/how-to-create-text-labeling-projects/text-labeling-creation-wizard.png" alt-text="Labeling project creation for text labeling":::
+
+ * Choose **Text Classification Multi-class** for projects when you want to apply only a *single label* from a set of labels to each piece of text.
+ * Choose **Text Classification Multi-label** for projects when you want to apply *one or more* labels from a set of labels to each piece of text.
+ * Choose **Text Named Entity Recognition (Preview)** for projects when you want to apply labels to individual or multiple words of text in each entry.
- * Choose **Text Classification Multi-class (Preview)** for projects when you want to apply only a *single label* from a set of labels to each piece of text.
- * Choose **Text Classification Multi-label (Preview)** for projects when you want to apply *one or more* labels from a set of labels to each piece of text.
+ > [!IMPORTANT]
+ > Text Named Entity Recognition is currently in public preview.
+ > The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+ > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
1. Select **Next** when you're ready to continue.
Data formats available for text data:
[!INCLUDE [outsource](../../includes/machine-learning-data-labeling-outsource.md)]
-## Specify the data to label
+## Select or create a dataset
If you already created a dataset that contains your data, select it from the **Select an existing dataset** drop-down list. Or, select **Create a dataset** to use an existing Azure datastore or to upload local files.
To create a dataset from data that you've already stored in Azure Blob storage:
1. Select **Create a dataset** > **From datastore**. 1. Assign a **Name** to your dataset. 1. Choose the **Dataset type**:
- * Select **Tabular** if you're using a .csv or .tsv file, where each row contains a response.
+ * Select **Tabular** if you're using a .csv or .tsv file, where each row contains a response. Tabular isn't available for Text Named Entity Recognition projects.
* Select **File** if you're using separate .txt files for each response. 1. (Optional) Provide a description for your dataset. 1. Select **Next**.
To directly upload your data:
1. Select **Create a dataset** > **From local files**. 1. Assign a **Name** to your dataset. 1. Choose the **Dataset type**.
- * Select **Tabular** if you're using a .csv or .tsv file, where each row is a response.
+ * Select **Tabular** if you're using a .csv or .tsv file, where each row is a response. Tabular isn't available for Text Named Entity Recognition projects.
* Select **File** if you're using separate .txt files for each response. 1. (Optional) Provide a description of your dataset. 1. Select **Next**
To directly upload your data:
## Use ML-assisted data labeling

The **ML-assisted labeling** page lets you trigger automatic machine learning models to accelerate labeling tasks. ML-assisted labeling is available for both file (.txt) and tabular (.csv) text data inputs.
-
To use **ML-assisted labeling**:

* Select **Enable ML assisted labeling**.
At the beginning of your labeling project, the items are shuffled into a random
For training the text DNN model used by ML-assist, the input text per training example will be limited to approximately the first 128 words in the document. For tabular input, all text columns are first concatenated before applying this limit. This is a practical limit imposed to allow for the model training to complete in a timely manner. The actual text in a document (for file input) or set of text columns (for tabular input) can exceed 128 words. The limit only pertains to what is internally leveraged by the model during the training process.
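As an illustration of that limit (a simplified sketch only, not the service's actual preprocessing; the 128-word cutoff is approximate per the text above):

```python
# Simplified sketch: approximate ML-assist's per-example input limit.
# For tabular input, all text columns are concatenated before the limit applies.

WORD_LIMIT = 128  # approximate first-128-words cap described above

def training_text(columns, limit=WORD_LIMIT):
    """Concatenate text columns, then keep roughly the first `limit` words."""
    combined = " ".join(columns)
    return " ".join(combined.split()[:limit])

row = ["short title", "a very long body " * 100]  # 2 + 400 words
print(len(training_text(row).split()))  # 128
```

The document (or concatenated columns) can exceed 128 words; only the truncated text is seen by the model during training.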
-The exact number of labeled items necessary to start assisted labeling is not a fixed number. This can vary significantly from one labeling project to another, depending on many factors, including the number of labels classes and label distribution.
+The exact number of labeled items necessary to start assisted labeling isn't a fixed number. This can vary significantly from one labeling project to another, depending on many factors, including the number of labels classes and label distribution.
Since the final labels still rely on input from the labeler, this technology is sometimes called *human in the loop* labeling.
Once a machine learning model has been trained on your manually labeled data, th
### Dashboard
-
The **Dashboard** tab shows the progress of the labeling task.
-The progress chart shows how many items have been labeled, skipped, in need of review, or not yet done. Hover over the chart to see the number of item in each section.
+The progress chart shows how many items have been labeled, skipped, in need of review, or not yet done. Hover over the chart to see the number of items in each section.
The middle section shows the queue of tasks yet to be assigned. If ML-assisted labeling is on, you'll also see the number of pre-labeled items.
On the **Data** tab, you can see your dataset and review labeled data. Scroll th
View and change details of your project. In this tab you can: * View project details and input datasets
-* Enable or disable **incremental refresh at regular intervals** or request an immediate refresh
+* Enable or disable **incremental refresh at regular intervals**, or request an immediate refresh.
* View details of the storage container used to store labeled outputs in your project * Add labels to your project * Edit instructions you give to your labels
View and change details of your project. In this tab you can:
Use the **Export** button on the **Project details** page of your labeling project. You can export the label data for Machine Learning experimentation at any time.
-You can export:
-
+For all project types other than **Text Named Entity Recognition**, you can export:
* A CSV file. The CSV file is created in the default blob store of the Azure Machine Learning workspace in a folder within *Labeling/export/csv*. * An [Azure Machine Learning dataset with labels](how-to-use-labeled-dataset.md). +
+For **Text Named Entity Recognition** projects, you can export:
+* An [Azure Machine Learning dataset with labels](how-to-use-labeled-dataset.md).
+* A CoNLL file. For this export, you'll also have to assign a compute resource. The export process runs offline and generates the file as part of an experiment run. When the file is ready to download, you'll see a notification on the top right. Select this to open the notification, which includes the link to the file.
+
+ :::image type="content" source="media/how-to-create-text-labeling-projects/notification-bar.png" alt-text="Notification for file download.":::
+ Access exported Azure Machine Learning datasets in the **Datasets** section of Machine Learning. The dataset details page also provides sample code to access your labels from Python. ![Exported dataset](./media/how-to-create-labeling-projects/exported-dataset.png)
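Once downloaded, a CoNLL-style file can be read with a few lines of standard Python. This sketch assumes one `token<TAB>label` pair per line with blank lines separating entries; the exact column layout of your export may differ, so adjust the split accordingly:

```python
# Hypothetical reader for a CoNLL-style NER export: "token<TAB>label" per line,
# blank lines between entries. Adjust to your file's actual column layout.

def parse_conll(text):
    entries, current = [], []
    for line in text.splitlines():
        line = line.strip()
        if not line:                 # blank line ends the current entry
            if current:
                entries.append(current)
                current = []
            continue
        token, label = line.split("\t")
        current.append((token, label))
    if current:
        entries.append(current)
    return entries

sample = "Contoso\tB-ORG\nships\tO\n\nSeattle\tB-LOC\n"
print(parse_conll(sample))
# [[('Contoso', 'B-ORG'), ('ships', 'O')], [('Seattle', 'B-LOC')]]
```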
Access exported Azure Machine Learning datasets in the **Datasets** section of M
[!INCLUDE [troubleshooting](../../includes/machine-learning-data-labeling-troubleshooting.md)]
-
## Next steps

* [How to tag text](how-to-label-data.md#label-text)
machine-learning How To Label Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-label-data.md
Previously updated : 10/21/2021 Last updated : 03/18/2022
Machine learning algorithms may be triggered during your labeling. If these algo
* For object identification models, you may see bounding boxes and labels already present. Correct any that are incorrect before submitting the page.
- * For segmentation models, you may see polygons and labels already present. Correct any that are incorrect before submitting the page.
+ * For segmentation models, you may see polygons and labels already present. Correct any that are incorrect before submitting the page.
* Text
Here we've chosen a two-by-two layout and are about to apply the tag "Mammal" to
![Multiple image layouts and selection](./media/how-to-label-data/layouts.png)
-> [!Important]
+> [!Important]
> Only switch layouts when you have a fresh page of unlabeled data. Switching layouts clears the page's in-progress tagging work. Azure enables the **Submit** button when you've tagged all the images on the page. Select **Submit** to save your work.
Select the image that you want to label and then select the tag. The tag is appl
![Animation shows multilabel flow](./media/how-to-label-data/multilabel.gif)
-To correct a mistake, click the "**X**" to clear an individual tag or select the images and then select the tag, which clears the tag from all the selected images. This scenario is shown here. Clicking on "Land" will clear that tag from the two selected images.
+To correct a mistake, select the "**X**" to clear an individual tag or select the images and then select the tag, which clears the tag from all the selected images. This scenario is shown here. Selecting "Land" will clear that tag from the two selected images.
![A screenshot shows multiple deselections](./media/how-to-label-data/multiple-deselection.png)
If your project is of type "Object Identification (Bounding Boxes)," you'll spec
1. Select a tag for the bounding box that you plan to create. 1. Select the **Rectangular box** tool ![Rectangular box tool](./media/how-to-label-data/rectangular-box-tool.png) or select "R."
-3. Click and drag diagonally across your target to create a rough bounding box. To adjust the bounding box, drag the edges or corners.
+3. Select and drag diagonally across your target to create a rough bounding box. To adjust the bounding box, drag the edges or corners.
![Bounding box creation](./media/how-to-label-data/bounding-box-sequence.png)
-To delete a bounding box, click the X-shaped target that appears next to the bounding box after creation.
+To delete a bounding box, select the X-shaped target that appears next to the bounding box after creation.
You can't change the tag of an existing bounding box. If you make a tag-assignment mistake, you have to delete the bounding box and create a new one with the correct tag. By default, you can edit existing bounding boxes. The **Lock/unlock regions** tool ![Lock/unlock regions tool](./media/how-to-label-data/lock-bounding-boxes-tool.png) or "L" toggles that behavior. If regions are locked, you can only change the shape or location of a new bounding box.
-Use the **Regions manipulation** tool ![This is the regions manipulation tool icon - four arrows pointing outward from the center, up, right, down, and left.](./media/how-to-label-data/regions-tool.png) or "M" to adjust an existing bounding box. Drag the edges or corners to adjust the shape. Click in the interior to be able to drag the whole bounding box. If you can't edit a region, you've probably toggled the **Lock/unlock regions** tool.
+Use the **Regions manipulation** tool ![This is the regions manipulation tool icon - four arrows pointing outward from the center, up, right, down, and left.](./media/how-to-label-data/regions-tool.png) or "M" to adjust an existing bounding box. Drag the edges or corners to adjust the shape. Select in the interior to be able to drag the whole bounding box. If you can't edit a region, you've probably toggled the **Lock/unlock regions** tool.
Use the **Template-based box** tool ![Template-box tool](./media/how-to-label-data/template-box-tool.png) or "T" to create multiple bounding boxes of the same size. If the image has no bounding boxes and you activate template-based boxes, the tool will produce 50-by-50-pixel boxes. If you create a bounding box and then activate template-based boxes, any new bounding boxes will be the size of the last box that you created. Template-based boxes can be resized after placement. Resizing a template-based box only resizes that particular box.
If your project is of type "Instance Segmentation (Polygon)," you'll specify one
1. Select a tag for the polygon that you plan to create. 1. Select the **Draw polygon region** tool ![Draw polygon region tool](./media/how-to-label-data/polygon-tool.png) or select "P."
-1. Click for each point in the polygon. When you have completed the shape, double click to finish.
+1. Select for each point in the polygon. When you've completed the shape, double-click to finish.
:::image type="content" source="media/how-to-label-data/polygon.gif" alt-text="Create polygons for Cat and Dog":::
-To delete a polygon, click the X-shaped target that appears next to the polygon after creation.
+To delete a polygon, select the X-shaped target that appears next to the polygon after creation.
-If you want to change the tag for a polygon, select the **Move region** tool, click on the polygon, and select the correct tag.
+If you want to change the tag for a polygon, select the **Move region** tool, select the polygon, and select the correct tag.
You can edit existing polygons. The **Lock/unlock regions** tool ![Edit polygons with the lock/unlock regions tool](./media/how-to-label-data/lock-bounding-boxes-tool.png) or "L" toggles that behavior. If regions are locked, you can only change the shape or location of a new polygon.
-Use the **Add or remove polygon points** tool ![This is the add or remove polygon points tool icon.](./media/how-to-label-data/add-remove-points-tool.png) or "U" to adjust an existing polygon. Click on the polygon to add or remove a point. If you can't edit a region, you've probably toggled the **Lock/unlock regions** tool.
+Use the **Add or remove polygon points** tool ![This is the add or remove polygon points tool icon.](./media/how-to-label-data/add-remove-points-tool.png) or "U" to adjust an existing polygon. Select the polygon to add or remove a point. If you can't edit a region, you've probably toggled the **Lock/unlock regions** tool.
To delete *all* polygons in the current image, select the **Delete all regions** tool ![Delete all regions tool](./media/how-to-label-data/delete-regions-tool.png). After you create the polygons for an image, select **Submit** to save your work, or your work in progress won't be saved.
-## <a name="label-text"></a>Label text (preview)
-
-> [!IMPORTANT]
-> Text labeling is in public preview.
-> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+## <a name="label-text"></a>Label text
When tagging text, use the toolbar to:
When tagging text, use the toolbar to:
If you realize that you made a mistake after you assign a tag, you can fix it. Select the "**X**" on the label that's displayed below the text to clear the tag.
-There are two text project types:
+There are three text project types:
-
-|Project type |Tagging |
+|Project type | Description |
|---|---|
-| Classification Multi-Class | Assign a single tag to the entire text item. You can only select one tag for each text item. |
-| Classification Multi-Label | Assign one *or more* tags to each text item. You can select multiple tags for each text item. |
+| Classification Multi-Class | Assign a single tag to the entire text entry. You can only select one tag for each text item. Select a tag and then select **Submit** to move to the next entry. |
+| Classification Multi-Label | Assign one *or more* tags to each text entry. You can select multiple tags for each text item. Select all the tags that apply and then select **Submit** to move to the next entry. |
+| Named entity recognition (preview) | Tag different words or phrases in each text entry. See directions in the section below. |
+
+> [!IMPORTANT]
+> Named entity recognition is in public preview.
+> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
To see the project-specific directions, select **Instructions** and go to **View detailed instructions**.
+### Tag words and phrases (preview)
+
+If your project is set up for named entity recognition, you tag different words or phrases in each text item. To label text:
+
+1. Select the label, or type the number corresponding to the appropriate label.
+1. Double-click on a word, or use your mouse to select multiple words.
++
+To change a label, you can:
+
+* Delete the label and start again.
+* Change the value for all or some of a specific label in your current item:
+ * Select the label itself, which will select all instances of that label.
+ * Select again on the instances of this label to unselect any instances you don't want to change.
+ * Finally, select a new label to change all the labels that are still selected.
+
+When you've tagged all the items in an entry, select **Submit** to move to the next entry.
+ ## Finish up When you submit a page of tagged data, Azure assigns new unlabeled data to you from a work queue. If there's no more unlabeled data available, you'll get a message noting this along with a link to the portal home page.
machine-learning How To Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace.md
The Workspace.list(..) method does not return the full workspace object. It incl
+## Search for assets across a workspace (preview)
+
+With the public preview search capability, you can search for machine learning assets such as jobs, models, components, environments, and datasets across all workspaces, resource groups, and subscriptions in your organization through a unified global view.
+
+### Free text search
+
+Type search text into the global search bar at the top of the portal and press Enter to trigger a 'contains' search.
+A contains search scans across all metadata fields for the given asset and sorts results by relevance.
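To illustrate the 'contains' semantics (a simplified sketch, not the service implementation; relevance ranking is omitted here):

```python
# Simplified sketch of a "contains" search: match the query against every
# metadata field of each asset, case-insensitively. Relevance ranking omitted.

def contains_search(assets, query):
    q = query.lower()
    return [
        asset for asset in assets
        if any(q in str(value).lower() for value in asset.values())
    ]

assets = [
    {"name": "cifar-job", "tags": "vision"},
    {"name": "nlp-run", "tags": "text"},
]
print(contains_search(assets, "VIS"))  # [{'name': 'cifar-job', 'tags': 'vision'}]
```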
+
+You can use the asset quick links to navigate to search results for jobs, models, and components that you created.
+
+Also, you can change the scope of applicable subscriptions and workspaces via the 'Change' link in the search bar drop down.
++
+### Structured search
+
+Select any number of filters to create more specific search queries. The following filters are supported:
+
+* Job:
+* Model:
+* Component:
+* Tags:
+* SubmittedBy:
+* Environment:
+* Dataset:
+
+If an asset filter (job, model, component) is present, results are scoped to those tabs. Other filters apply to all assets unless an asset filter is also present in the query. Similarly, free text search can be provided alongside filters, but it's scoped to the tabs chosen by asset filters, if present.
+
+> [!TIP]
+> * Filters search for exact matches of text. Use free text queries for a contains search.
+> * Quotations are required around values that include spaces or other special characters.
+> * If duplicate filters are provided, only the first will be recognized in search results.
+> * Input text of any language is supported but filter strings must match the provided options (ex. submittedBy:).
+> * The tags filter can accept multiple key:value pairs separated by a comma (ex. tags:"key1:value1, key2:value2").
+
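As an illustration of the tags filter format above, this hypothetical helper (not part of the product) splits a `tags:` value such as `"key1:value1, key2:value2"` into key/value pairs:

```python
# Hypothetical helper: split a tags filter value like "key1:value1, key2:value2"
# into key/value pairs, mirroring the format the search bar accepts.

def parse_tags_filter(value):
    pairs = {}
    for part in value.split(","):
        part = part.strip()
        if not part:
            continue
        key, _, val = part.partition(":")
        pairs[key.strip()] = val.strip()
    return pairs

print(parse_tags_filter("key1:value1, key2:value2"))
# {'key1': 'value1', 'key2': 'value2'}
```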
+### View search results
+
+You can view your search results in the individual **Jobs**, **Models**, and **Components** tabs. Select an asset to open its **Details** page in the context of the relevant workspace. Results from workspaces you don't have permissions to view aren't displayed.
++
+If you've used this feature in a previous update, a search result error may occur. Reselect your preferred workspaces in the Directory + Subscription + Workspace tab.
+ ## Delete a workspace
machine-learning Tutorial 1St Experiment Bring Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-1st-experiment-bring-data.md
To run this script in Azure Machine Learning, you need to make your training dat
```python
# upload-data.py
from azureml.core import Workspace
+ from azureml.core import Dataset
+ from azureml.data.datapath import DataPath
+
ws = Workspace.from_config()
datastore = ws.get_default_datastore()
- datastore.upload(src_dir='./data',
- target_path='datasets/cifar10',
- overwrite=True)
-
+ Dataset.File.upload_directory(src_dir='data',
+ target=DataPath(datastore, "datasets/cifar10")
+ )
```

The `DataPath` value passed as `target` specifies the path on the datastore where the CIFAR10 data will be uploaded.
machine-learning Tutorial First Experiment Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-first-experiment-automated-ml.md
We deploy this model, but be advised, deployment takes about 20 minutes to compl
1. Select **VotingEnsemble** to open the model-specific page.
-1. Select the **Deploy** button in the top-left.
+1. Select the **Deploy** menu in the top-left and select **Deploy to web service**.
1. Populate the **Deploy a model** pane as follows:
We deploy this model, but be advised, deployment takes about 20 minutes to compl
-|-
Deployment name| my-automl-deploy
Deployment description| My first automated machine learning experiment deployment
- Compute type | Select Azure Compute Instance (ACI)
+ Compute type | Select Azure Container Instance (ACI)
Enable authentication| Disable.
Use custom deployments| Disable. Allows for the default driver file (scoring script) and environment file to be auto-generated.
marketplace Insights Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/insights-dashboard.md
Previously updated : 09/27/2021 Last updated : 03/18/2022 # Marketplace Insights dashboard in commercial marketplace analytics
The Marketplace Insights **Visitors** chart displays a count of _Page visits_ an
### Call to actions trend
-This number represents the count of **Call to Action** button clicks completed on the offer listing page (product detail page). _Calls to action_ are counted when users select the **Get It Now**, **Free Trial**, **Contact Me**, or **Test Drive** buttons. *Consent given* represents the total count of clicks for customer-provided consent to Microsoft or the partner, and equals the number of customers acquired for your offers. The following two examples show where *Consent given* clicks appear:
+This number represents the count of **Call to Action** button clicks completed on the offer listing page (product detail page). _Calls to action_ are counted when users select the **Get It Now**, **Free Trial**, **Contact Me**, or **Test Drive** buttons. *Consent given* represents the total count of clicks for customer-provided consent to Microsoft or the partner. The following two examples show where *Consent given* clicks appear:
:::image type="content" source="./media/insights-dashboard/consent-screen.png" alt-text="Illustrates a location where a consent button is selected.":::
marketplace Saas Metered Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/partner-center-portal/saas-metered-billing.md
As an example, Contoso is a publisher with a SaaS service called Contoso Notific
- Beyond the 10000 emails, pay $1 for every 100 emails
- Beyond the 1000 texts, pay $0.02 for every text
- [![Basic plan pricing](./media/saas-basic-pricing.png "Click for enlarged view")](./media/saas-basic-pricing.png)
+ [ ![Screenshot of basic plan pricing.](./media/saas-basic-pricing.png) ](./media/saas-basic-pricing.png#lightbox)
- Premium plan
  - Send 50000 emails and 10000 texts for $350/month or 5M emails and 1M texts for $3500 per year
  - Beyond the 50000 emails, pay $0.5 for every 100 emails
  - Beyond the 10000 texts, pay $0.01 for every text
- [![Premium plan pricing](./media/saas-premium-pricing.png "Click for enlarged view")](./media/saas-premium-pricing.png)
+ [ ![Screenshot of premium plan pricing.](./media/saas-premium-pricing.png) ](./media/saas-premium-pricing.png#lightbox)
- Enterprise plan
  - Send an unlimited number of emails and 50000 texts for $400/month
  - Beyond the 50000 texts, pay $0.005 for every text
- [![Enterprise plan pricing](./media/saas-enterprise-pricing.png "Click for enlarged view")](./media/saas-enterprise-pricing.png)
+ [ ![Screenshot of enterprise plan pricing.](./media/saas-enterprise-pricing.png) ](./media/saas-enterprise-pricing.png#lightbox)
Based on the plan selected, an Azure customer purchasing a subscription to the CNS SaaS offer will be able to send the included quantity of texts and emails per subscription term (month or year, as appears in the subscription details: startDate and endDate). Contoso counts the usage up to the included quantity in base without sending any usage events to Microsoft. When customers consume more than the included quantity, they do not have to change plans or do anything different. Contoso will measure the overage beyond the included quantity and start emitting usage events to Microsoft for charging the overage usage using the [commercial marketplace metering service API](../marketplace-metering-service-apis.md). Microsoft in turn will charge the customer for the overage usage as specified by the publisher in the custom dimensions. The overage billing is done on the next billing cycle (monthly, but can be quarterly or yearly for some customers). For a monthly flat rate plan, the overage billing will be made for every month where overage has occurred. For a yearly flat rate plan, once the quantity included in base per year is consumed, all additional usage emitted by the custom meter will be billed as overage during each billing cycle (monthly) until the end of the subscription's year term.
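The overage arithmetic above can be made concrete with a small worked example. This is a hedged sketch of how a publisher like Contoso might compute the quantities to emit per dimension for the Basic plan (10000 emails and 1000 texts included; $1 per 100 emails and $0.02 per text beyond that); the dimension names and function names are illustrative, not part of the metering API.

```python
# Basic plan quantities included in the flat monthly base fee.
EMAILS_INCLUDED = 10_000
TEXTS_INCLUDED = 1_000

def overage_events(emails_sent, texts_sent):
    """Return metering quantities beyond the included base, keyed by dimension."""
    events = {}
    extra_emails = max(0, emails_sent - EMAILS_INCLUDED)
    if extra_emails:
        # Priced at $1 per 100 emails, so the dimension unit is "100 emails".
        events["email_100"] = extra_emails / 100
    extra_texts = max(0, texts_sent - TEXTS_INCLUDED)
    if extra_texts:
        events["text"] = extra_texts  # priced at $0.02 per text
    return events

def overage_charge(events):
    """Dollar amount Microsoft would bill for the emitted overage quantities."""
    prices = {"email_100": 1.00, "text": 0.02}
    return sum(qty * prices[dim] for dim, qty in events.items())

events = overage_events(12_345, 1_200)
# 2345 extra emails -> 23.45 units of "100 emails"; 200 extra texts
# overage_charge(events) -> 23.45 + 4.00 = 27.45
```

Usage below the included quantities produces no events at all, matching the text above: Contoso only starts emitting once the base is consumed.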
Like flat rate pricing, billing dimension prices can be set per supported countr
The user interface of the meter will change to reflect that the prices of the dimension can only be seen in the file.
-[![commercial marketplace metering service dimensions](media/metering-service-dimensions.png "Click for enlarged view")](media/metering-service-dimensions.png)
+[ ![Screenshot of commercial marketplace metering service dimensions.](media/metering-service-dimensions.png) ](media/metering-service-dimensions.png#lightbox)
### Private plan
migrate Tutorial Discover Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-aws.md
ms. Last updated 03/11/2021-+ #Customer intent: As a server admin I want to discover my AWS instances.
If you just created a free Azure account, you're the owner of your subscription.
![Image of Search box to search for the Azure subscription.](./media/tutorial-discover-aws/search-subscription.png)
-2. In the **Subscriptions** page, select the subscription in which you want to create a project.
-3. In the subscription, select **Access control (IAM)** > **Check access**.
-4. In **Check access**, search for the relevant user account.
-5. In **Add a role assignment**, click **Add**.
+1. In the **Subscriptions** page, select the subscription in which you want to create a project.
- ![Screenshot of process to search for a user account to check access and assign a role.](./media/tutorial-discover-aws/azure-account-access.png)
+1. Select **Access control (IAM)**.
-6. In **Add role assignment**, select the Contributor or Owner role, and select the account (azmigrateuser in our example). Then click **Save**.
+1. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
- ![Screenshot of the Add Role assignment page to assign a role to the account.](./media/tutorial-discover-aws/assign-role.png)
+1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+
+ | Setting | Value |
+ | | |
+ | Role | Contributor or Owner |
+ | Assign access to | User |
+ | Members | azmigrateuser |
+
+ ![Add role assignment page in Azure portal.](../../includes/role-based-access-control/media/add-role-assignment-page.png)
1. To register the appliance, your Azure account needs **permissions to register Azure Active Directory apps.**
1. In Azure portal, navigate to **Azure Active Directory** > **Users** > **User Settings**.
1. In **User settings**, verify that Azure AD users can register applications (set to **Yes** by default).

    ![Image to Verify in User Settings that users can register Active Directory apps](./media/tutorial-discover-aws/register-apps.png)
Now, connect from the appliance to the physical servers to be discovered, and st
- If validation fails for a server, review the error by clicking on **Validation failed** in the Status column of the table. Fix the issue, and validate again.
- To remove a server, click on **Delete**.
1. You can **revalidate** the connectivity to servers anytime before starting the discovery.
-1. Before initiating discovery, you can choose to disable the slider to not perform software inventory and agentless dependency analysis on the added servers.You can change this option at any time.
+1. Before initiating discovery, you can choose to disable the slider to not perform software inventory and agentless dependency analysis on the added servers. You can change this option at any time.
:::image type="content" source="./media/tutorial-discover-physical/disable-slider.png" alt-text="Screenshot that shows where to disable the slider.":::
migrate Tutorial Discover Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-gcp.md
ms. Last updated 03/13/2021-+ #Customer intent: As a server admin I want to discover my GCP instances.
If you just created a free Azure account, you're the owner of your subscription.
![Screenshot of Search box to search for the Azure subscription.](./media/tutorial-discover-gcp/search-subscription.png)
-2. In the **Subscriptions** page, select the subscription in which you want to create a project.
-3. In the subscription, select **Access control (IAM)** > **Check access**.
-4. In **Check access**, search for the relevant user account.
-5. In **Add a role assignment**, click **Add**.
+1. In the **Subscriptions** page, select the subscription in which you want to create a project.
- ![Image to Search for a user account to check access and assign a role.](./media/tutorial-discover-gcp/azure-account-access.png)
+1. Select **Access control (IAM)**.
-6. In **Add role assignment**, select the Contributor or Owner role, and select the account (azmigrateuser in our example). Then click **Save**.
+1. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
- ![Screenshot of Add Role assignment page to assign a role to the account.](./media/tutorial-discover-gcp/assign-role.png)
+1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+
+ | Setting | Value |
+ | | |
+ | Role | Contributor or Owner |
+ | Assign access to | User |
+ | Members | azmigrateuser |
+
+ ![Add role assignment page in Azure portal.](../../includes/role-based-access-control/media/add-role-assignment-page.png)
1. To register the appliance, your Azure account needs **permissions to register Azure Active Directory apps.**
1. In Azure portal, navigate to **Azure Active Directory** > **Users** > **User Settings**.
1. In **User settings**, verify that Azure AD users can register applications (set to **Yes** by default).

    ![Verify in User Settings that users can register Active Directory apps.](./media/tutorial-discover-gcp/register-apps.png)
Now, connect from the appliance to the GCP servers to be discovered, and start t
- If validation fails for a server, review the error by clicking on **Validation failed** in the Status column of the table. Fix the issue, and validate again.
- To remove a server, click on **Delete**.
6. You can **revalidate** the connectivity to servers anytime before starting the discovery.
-1. Before initiating discovery, you can choose to disable the slider to not perform software inventory and agentless dependency analysis on the added servers.You can change this option at any time.
+1. Before initiating discovery, you can choose to disable the slider to not perform software inventory and agentless dependency analysis on the added servers. You can change this option at any time.
:::image type="content" source="./media/tutorial-discover-physical/disable-slider.png" alt-text="Screenshot that shows where to disable the slider.":::
migrate Tutorial Discover Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-hyper-v.md
ms. Last updated 11/12/2021-+ #Customer intent: As a Hyper-V admin, I want to discover my on-premises servers on Hyper-V.
If you just created a free Azure account, you're the owner of your subscription.
![Screenshot of Search box to search for the Azure subscription.](./media/tutorial-discover-hyper-v/search-subscription.png)
-2. In the **Subscriptions** page, select the subscription in which you want to create a project.
-3. In the subscription, select **Access control (IAM)** > **Check access**.
-4. In **Check access**, search for the relevant user account.
-5. In **Add a role assignment**, click **Add**.
+1. In the **Subscriptions** page, select the subscription in which you want to create a project.
- ![Screenshot of Search for a user account to check access and assign a role.](./media/tutorial-discover-hyper-v/azure-account-access.png)
+1. Select **Access control (IAM)**.
-6. In **Add role assignment**, select the Contributor or Owner role, and select the account (azmigrateuser in our example). Then click **Save**.
+1. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
- ![Screenshot of the Add Role assignment page to assign a role to the account.](./media/tutorial-discover-hyper-v/assign-role.png)
+1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+
+ | Setting | Value |
+ | | |
+ | Role | Contributor or Owner |
+ | Assign access to | User |
+ | Members | azmigrateuser |
+
+ ![Add role assignment page in Azure portal.](../../includes/role-based-access-control/media/add-role-assignment-page.png)
1. To register the appliance, your Azure account needs **permissions to register Azure Active Directory apps.**
1. In the Azure portal, navigate to **Azure Active Directory** > **Users** > **User Settings**.
1. In **User settings**, verify that Azure AD users can register applications (set to **Yes** by default).

    ![Verify in User Settings that users can register Active Directory apps.](./media/tutorial-discover-hyper-v/register-apps.png)
-9. In case the 'App registrations' settings is set to 'No', request the tenant/global admin to assign the required permission. Alternately, the tenant/global admin can assign the **Application Developer** role to an account to allow the registration of Azure Active Directory App. [Learn more](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
+1. If the 'App registrations' setting is set to 'No', ask the tenant/global admin to assign the required permission. Alternatively, the tenant/global admin can assign the **Application Developer** role to an account to allow the registration of an Azure Active Directory app. [Learn more](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
## Prepare Hyper-V hosts
migrate Tutorial Discover Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-import.md
ms. Last updated 09/14/2020+ #Customer intent: As a server admin, I want to discover servers using an imported CSV file.
If you just created a free Azure account, you're the owner of your subscription.
![Search box to search for the Azure subscription](./media/tutorial-discover-import/search-subscription.png)
-2. In the **Subscriptions** page, select the subscription in which you want to create an Azure Migrate project.
-3. In the subscription, select **Access control (IAM)** > **Check access**.
-4. In **Check access**, search for the relevant user account.
-5. In **Add a role assignment**, select **Add**.
+1. In the **Subscriptions** page, select the subscription in which you want to create an Azure Migrate project.
- ![Search for a user account to check access and assign a role](./media/tutorial-discover-import/azure-account-access.png)
+1. Select **Access control (IAM)**.
-6. In **Add role assignment**, select the Contributor or Owner role, and select the account (azmigrateuser in our example). Then select **Save**.
+1. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
- ![Opens the Add Role assignment page to assign a role to the account](./media/tutorial-discover-import/assign-role.png)
+1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
-7. In the portal, search for users, and under **Services**, select **Users**.
-8. In **User settings**, verify that Azure AD users can register applications (set to **Yes** by default).
+ | Setting | Value |
+ | | |
+ | Role | Contributor or Owner |
+ | Assign access to | User |
+ | Members | azmigrateuser |
+
+ ![Add role assignment page in Azure portal.](../../includes/role-based-access-control/media/add-role-assignment-page.png)
+
+1. In the portal, search for users, and under **Services**, select **Users**.
+
+1. In **User settings**, verify that Azure AD users can register applications (set to **Yes** by default).
![Verify in User Settings that users can register Active Directory apps](./media/tutorial-discover-import/register-apps.png)
migrate Tutorial Discover Physical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-physical.md
ms. Last updated 11/12/2021-+ #Customer intent: As a server admin I want to discover my on-premises server inventory.
If you just created a free Azure account, you're the owner of your subscription.
![Screenshot of search box to search for the Azure subscription.](./media/tutorial-discover-physical/search-subscription.png)
-2. In the **Subscriptions** page, select the subscription in which you want to create the project.
-3. In the subscription, select **Access control (IAM)** > **Check access**.
-4. In **Check access**, search for the relevant user account.
-5. In **Add a role assignment**, click **Add**.
+1. Select **Access control (IAM)**.
- ![Screenshot of searching for a user account to check access and assign a role.](./media/tutorial-discover-physical/azure-account-access.png)
+1. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
-6. In **Add role assignment**, select the Contributor or Owner role, and select the account (azmigrateuser in our example). Then click **Save**.
+1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
- ![Screenshot of the Add Role assignment page to assign a role to the account.](./media/tutorial-discover-physical/assign-role.png)
+ | Setting | Value |
+ | | |
+ | Role | Contributor or Owner |
+ | Assign access to | User |
+ | Members | azmigrateuser |
+
+ ![Add role assignment page in Azure portal.](../../includes/role-based-access-control/media/add-role-assignment-page.png)
1. To register the appliance, your Azure account needs **permissions to register Azure Active Directory apps.**
1. In Azure portal, navigate to **Azure Active Directory** > **Users** > **User Settings**.
1. In **User settings**, verify that Azure AD users can register applications (set to **Yes** by default).

    ![Verify in User Settings that users can register Active Directory apps.](./media/tutorial-discover-physical/register-apps.png)
-9. In case the 'App registrations' settings is set to 'No', request the tenant/global admin to assign the required permission. Alternately, the tenant/global admin can assign the **Application Developer** role to an account to allow the registration of Azure Active Directory App. [Learn more](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
+1. If the 'App registrations' setting is set to 'No', ask the tenant/global admin to assign the required permission. Alternatively, the tenant/global admin can assign the **Application Developer** role to an account to allow the registration of an Azure Active Directory app. [Learn more](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
## Prepare physical servers
migrate Tutorial Discover Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-vmware.md
ms. Last updated 11/12/2021-+ #Customer intent: As an VMware admin, I want to discover my on-premises servers running in a VMware environment.
To set Contributor or Owner permissions in the Azure subscription:
:::image type="content" source="./media/tutorial-discover-vmware/search-subscription.png" alt-text="Screenshot that shows how to search for an Azure subscription in the search box."::: 1. In **Subscriptions**, select the subscription in which you want to create a project.
-1. In the left menu, select **Access control (IAM)**.
-1. On the **Check access** tab, under **Check access**, search for the user account you want to use.
-1. In the **Add a role assignment** pane, select **Add**.
- :::image type="content" source="./media/tutorial-discover-vmware/azure-account-access.png" alt-text="Screenshot that shows how to search for a user account to check access and add a role assignment.":::
-
-1. In **Add role assignment**, select the Contributor or Owner role, and then select the account. Select **Save**.
+1. Select **Access control (IAM)**.
+
+1. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
+
+1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
- :::image type="content" source="./media/tutorial-discover-vmware/assign-role.png" alt-text="Screenshot that shows the Add role assignment page to assign a role to the account.":::
+ | Setting | Value |
+ | | |
+ | Role | Contributor or Owner |
+ | Assign access to | User |
+ | Members | azmigrateuser |
+
+ :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-page.png" alt-text="Add role assignment page in Azure portal.":::
To give the account the required permissions to register Azure AD apps:

1. In the portal, go to **Azure Active Directory** > **Users** > **User Settings**.
1. In **User settings**, verify that Azure AD users can register applications (set to **Yes** by default).

    :::image type="content" source="./media/tutorial-discover-vmware/register-apps.png" alt-text="Screenshot that shows verifying user setting to register apps.":::
-9. If **App registrations** is set to **No**, request the tenant or global admin to assign the required permissions. Alternately, the tenant or global admin can assign the Application Developer role to an account to allow Azure AD app registration by users. [Learn more](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
+1. If **App registrations** is set to **No**, request the tenant or global admin to assign the required permissions. Alternately, the tenant or global admin can assign the Application Developer role to an account to allow Azure AD app registration by users. [Learn more](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
## Prepare VMware
openshift Quickstart Openshift Arm Bicep Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/quickstart-openshift-arm-bicep-template.md
+
+ Title: 'Quickstart: Deploy an Azure Red Hat OpenShift cluster with an ARM template or Bicep'
+description: In this Quickstart, learn how to create an Azure Red Hat OpenShift cluster using an Azure Resource Manager template or a Bicep file.
+++++ Last updated : 03/17/2022
+keywords: azure, openshift, aro, red hat, arm, bicep
+#Customer intent: I need to use ARM templates or Bicep files to deploy my Azure Red Hat OpenShift cluster.
+zone_pivot_groups: azure-red-hat-openshift
++
+# Quickstart: Deploy an Azure Red Hat OpenShift cluster with an Azure Resource Manager template or Bicep file
+
+This quickstart describes how to use either an Azure Resource Manager template (ARM template) or a Bicep file to create an Azure Red Hat OpenShift cluster. You can deploy the Azure Red Hat OpenShift cluster with either PowerShell or the Azure command-line interface (Azure CLI).
++
+Bicep is a domain-specific language (DSL) that uses declarative syntax to deploy Azure resources. In a Bicep file, you define the infrastructure you want to deploy to Azure, and then use that file throughout the development lifecycle to repeatedly deploy your infrastructure. Your resources are deployed in a consistent manner.
+
+## Prerequisites
+
+* Install [Azure CLI](/cli/azure/install-azure-cli)
++
+* [Bicep](../azure-resource-manager/bicep/install.md)
++
+* An Azure account with an active subscription is required. If you don't already have one, you can [create an account for free](https://azure.microsoft.com/free/).
+
+* Ability to assign User Access Administrator and Contributor roles. If you lack this ability, contact your Azure Active Directory admin to manage roles.
+
+* A Red Hat account. If you don't have one, you'll have to [register for an account](https://www.redhat.com/wapps/ugc/register.html).
+
+* A pull secret for your Azure Red Hat OpenShift cluster. [Download the pull secret file from the Red Hat OpenShift Cluster Manager web site](https://cloud.redhat.com/openshift/install/azure/aro-provisioned).
+
+* If you want to run the Azure PowerShell code locally, [Azure PowerShell](/powershell/azure/install-az-ps).
+
+* If you want to run the Azure CLI code locally:
+ * A Bash shell (such as Git Bash, which is included in [Git for Windows](https://gitforwindows.org)).
+ * [Azure CLI](/cli/azure/install-azure-cli).
++
+## Create an ARM template or Bicep file
++
+Choose either an Azure Resource Manager template (ARM template) or an Azure Bicep file. Then, you can deploy the template using either the Azure command line (azure-cli) or PowerShell.
++
+### Create an ARM template
+
+The following example shows how your ARM template should look when configured for your Azure Red Hat OpenShift cluster.
+
+The template defines three Azure resources:
+
+* [**Microsoft.Network/virtualNetworks**](/azure/templates/microsoft.network/virtualnetworks)
+* [**Microsoft.Network/virtualNetworks/providers/roleAssignments**](/azure/templates/microsoft.network/virtualnetworks/providers/roleassignments)
+* [**Microsoft.RedHatOpenShift/OpenShiftClusters**](/azure/templates/microsoft.redhatopenshift/openshiftclusters)
+
+More Azure Red Hat OpenShift template samples can be found on the [Red Hat OpenShift web site](https://docs.openshift.com/container-platform/4.9/installing/installing_azure/installing-azure-user-infra.html).
+
+Save the following example as *azuredeploy.json*:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "location" : {
+ "type": "string",
+ "defaultValue": "eastus",
+ "metadata": {
+ "description": "Location"
+ }
+ },
+ "domain": {
+ "type": "string",
+ "defaultValue": "",
+ "metadata": {
+ "description": "Domain Prefix"
+ }
+ },
+ "pullSecret": {
+ "type": "string",
+ "metadata": {
+ "description": "Pull secret from cloud.redhat.com. The json should be input as a string"
+ }
+ },
+ "clusterVnetName": {
+ "type": "string",
+ "defaultValue": "aro-vnet",
+ "metadata": {
+ "description": "Name of ARO vNet"
+ }
+ },
+ "clusterVnetCidr": {
+ "type": "string",
+ "defaultValue": "10.100.0.0/15",
+ "metadata": {
+ "description": "ARO vNet Address Space"
+ }
+ },
+ "workerSubnetCidr": {
+ "type": "string",
+ "defaultValue": "10.100.70.0/23",
+ "metadata": {
+ "description": "Worker node subnet address space"
+ }
+ },
+ "masterSubnetCidr": {
+ "type": "string",
+ "defaultValue": "10.100.76.0/24",
+ "metadata": {
+ "description": "Master node subnet address space"
+ }
+ },
+ "masterVmSize" : {
+ "type": "string",
+ "defaultValue": "Standard_D8s_v3",
+ "metadata": {
+ "description": "Master Node VM Type"
+ }
+ },
+ "workerVmSize": {
+ "type": "string",
+ "defaultValue": "Standard_D4s_v3",
+ "metadata": {
+ "description": "Worker Node VM Type"
+ }
+ },
+ "workerVmDiskSize": {
+ "type" : "int",
+ "defaultValue": 128,
+ "minValue": 128,
+ "metadata": {
+ "description": "Worker Node Disk Size in GB"
+ }
+ },
+ "workerCount": {
+ "type": "int",
+ "defaultValue": 3,
+ "minValue": 3,
+ "metadata": {
+ "description": "Number of Worker Nodes"
+ }
+ },
+ "podCidr": {
+ "type": "string",
+ "defaultValue": "10.128.0.0/14",
+ "metadata": {
+ "description": "Cidr for Pods"
+ }
+ },
+ "serviceCidr": {
+ "type": "string",
+ "defaultValue": "172.30.0.0/16",
+ "metadata": {
+ "description": "Cidr of service"
+ }
+ },
+ "clusterName" : {
+ "type": "string",
+ "metadata": {
+ "description": "Unique name for the cluster"
+ }
+ },
+ "tags": {
+ "type": "object",
+ "defaultValue" : {
+ "env": "Dev",
+ "dept": "Ops"
+ },
+ "metadata": {
+ "description": "Tags for resources"
+ }
+ },
+ "apiServerVisibility": {
+ "type": "string",
+ "allowedValues": [
+ "Private",
+ "Public"
+ ],
+ "defaultValue": "Public",
+ "metadata": {
+ "description": "Api Server Visibility"
+ }
+ },
+ "ingressVisibility": {
+ "type": "string",
+ "allowedValues": [
+ "Private",
+ "Public"
+ ],
+ "defaultValue": "Public",
+ "metadata": {
+ "description": "Ingress Visibility"
+ }
+ },
+ "aadClientId" : {
+ "type": "string",
+ "metadata": {
+ "description": "The Application ID of an Azure Active Directory client application"
+ }
+ },
+ "aadObjectId": {
+ "type": "string",
+ "metadata": {
+ "description": "The Object ID of an Azure Active Directory client application"
+ }
+ },
+ "aadClientSecret" : {
+ "type":"securestring",
+ "metadata": {
+ "description": "The secret of an Azure Active Directory client application"
+ }
+ },
+ "rpObjectId": {
+ "type": "String",
+ "metadata": {
+ "description": "The ObjectID of the Resource Provider Service Principal"
+ }
+ }
+ },
+ "variables": {
+ "contribRole": "[concat('/subscriptions/', subscription().subscriptionId, '/providers/Microsoft.Authorization/roleDefinitions/', 'b24988ac-6180-42a0-ab88-20f7382dd24c')]"
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Network/virtualNetworks",
+ "apiVersion": "2020-05-01",
+ "name": "[parameters('clusterVnetName')]",
+ "location": "[parameters('location')]",
+ "tags": "[parameters('tags')]",
+ "properties": {
+ "addressSpace": {
+ "addressPrefixes": [
+ "[parameters('clusterVnetCidr')]"
+ ]
+ },
+ "subnets": [
+ {
+ "name": "master",
+ "properties": {
+ "addressPrefix": "[parameters('masterSubnetCidr')]",
+ "serviceEndpoints": [
+ {
+ "service": "Microsoft.ContainerRegistry"
+ }
+ ],
+ "privateLinkServiceNetworkPolicies": "Disabled"
+ }
+ },
+ {
+ "name": "worker",
+ "properties": {
+ "addressPrefix": "[parameters('workerSubnetCidr')]",
+ "serviceEndpoints": [
+ {
+ "service": "Microsoft.ContainerRegistry"
+ }
+ ]
+ }
+ }]
+ }
+ },
+ {
+ "type": "Microsoft.Network/virtualNetworks/providers/roleAssignments",
+ "apiVersion": "2018-09-01-preview",
+ "name": "[concat(parameters('clusterVnetName'), '/Microsoft.Authorization/', guid(resourceGroup().id, deployment().name, parameters('aadObjectId')))]",
+ "dependsOn": [
+ "[resourceId('Microsoft.Network/virtualNetworks', parameters('clusterVnetName'))]"
+ ],
+ "properties": {
+ "roleDefinitionId": "[variables('contribRole')]",
+ "principalId":"[parameters('aadObjectId')]"
+ }
+ },
+ {
+ "type": "Microsoft.Network/virtualNetworks/providers/roleAssignments",
+ "apiVersion": "2018-09-01-preview",
+ "name": "[concat(parameters('clusterVnetName'), '/Microsoft.Authorization/', guid(resourceGroup().id, deployment().name, parameters('rpObjectId')))]",
+ "dependsOn": [
+ "[resourceId('Microsoft.Network/virtualNetworks', parameters('clusterVnetName'))]"
+ ],
+ "properties": {
+ "roleDefinitionId": "[variables('contribRole')]",
+ "principalId":"[parameters('rpObjectId')]"
+ }
+ },
+ {
+ "type": "Microsoft.RedHatOpenShift/OpenShiftClusters",
+ "apiVersion": "2020-04-30",
+ "name": "[parameters('clusterName')]",
+ "location": "[parameters('location')]",
+ "tags": "[parameters('tags')]",
+ "dependsOn": [
+ "[resourceId('Microsoft.Network/virtualNetworks', parameters('clusterVnetName'))]"
+ ],
+ "properties": {
+ "clusterProfile": {
+ "domain": "[parameters('domain')]",
+ "resourceGroupId": "[concat('/subscriptions/', subscription().subscriptionId,'/resourceGroups/aro-', parameters('domain'))]",
+ "pullSecret": "[parameters('pullSecret')]"
+ },
+ "networkProfile": {
+ "podCidr": "[parameters('podCidr')]",
+ "serviceCidr": "[parameters('serviceCidr')]"
+ },
+ "servicePrincipalProfile": {
+ "clientId": "[parameters('aadClientId')]",
+ "clientSecret": "[parameters('aadClientSecret')]"
+ },
+ "masterProfile": {
+ "vmSize": "[parameters('masterVmSize')]",
+ "subnetId": "[resourceId('Microsoft.Network/virtualNetworks/subnets', parameters('clusterVnetName'), 'master')]"
+ },
+ "workerProfiles": [
+ {
+ "name": "worker",
+ "vmSize": "[parameters('workerVmSize')]",
+ "diskSizeGB": "[parameters('workerVmDiskSize')]",
+ "subnetId": "[resourceId('Microsoft.Network/virtualNetworks/subnets', parameters('clusterVnetName'), 'worker')]",
+ "count": "[parameters('workerCount')]"
+ }
+ ],
+ "apiserverProfile": {
+ "visibility": "[parameters('apiServerVisibility')]"
+ },
+ "ingressProfiles": [
+ {
+ "name": "default",
+ "visibility": "[parameters('ingressVisibility')]"
+ }
+ ]
+ }
+ }
+ ],
+ "outputs": {
+ "clusterCredentials": {
+ "type": "object",
+ "value": "[listCredentials(resourceId('Microsoft.RedHatOpenShift/OpenShiftClusters', parameters('clusterName')), '2020-04-30')]"
+ },
+ "oauthCallbackURL": {
+ "type": "string",
+ "value": "[concat('https://oauth-openshift.apps.', parameters('domain'), '.', parameters('location'), '.aroapp.io/oauth2callback/AAD')]"
+ }
+ }
+}
+```
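The `oauthCallbackURL` output above simply concatenates the `domain` and `location` parameters into the standard ARO OAuth callback address. As a quick local sanity check (a sketch only, using placeholder values), the same string can be assembled in shell:

```shell
# Reproduce the template's oauthCallbackURL output locally (placeholder values)
DOMAIN=mydomain
LOCATION=eastus
OAUTH_CALLBACK_URL="https://oauth-openshift.apps.${DOMAIN}.${LOCATION}.aroapp.io/oauth2callback/AAD"
echo "$OAUTH_CALLBACK_URL"
```

This is the URL you would later register as a redirect URI on the Azure AD application if you configure Azure AD authentication for the cluster.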
+
+### Create a Bicep file
+
+The following example shows how your Azure Bicep file should look when configured for your Azure Red Hat OpenShift cluster.
+
+The Bicep file defines three Azure resources:
+
+* [Microsoft.Network/virtualNetworks](/azure/templates/microsoft.network/virtualnetworks)
+* [Microsoft.Network/virtualNetworks/providers/roleAssignments](/azure/templates/microsoft.network/virtualnetworks/providers/roleassignments)
+* [Microsoft.RedHatOpenShift/OpenShiftClusters](/azure/templates/microsoft.redhatopenshift/openshiftclusters)
+
+More Azure Red Hat OpenShift templates can be found on the [Red Hat OpenShift web site](https://docs.openshift.com/container-platform/4.9/installing/installing_azure/installing-azure-user-infra.html).
+
+Create the following Bicep file containing the definition for the Azure Red Hat OpenShift cluster.
+
+Save the file as *azuredeploy.bicep*:
+
+```bicep
+@description('Location')
+param location string = 'eastus'
+
+@description('Domain Prefix')
+param domain string = ''
+
+@description('Pull secret from cloud.redhat.com. The json should be input as a string')
+param pullSecret string
+
+@description('Name of ARO vNet')
+param clusterVnetName string = 'aro-vnet'
+
+@description('ARO vNet Address Space')
+param clusterVnetCidr string = '10.100.0.0/15'
+
+@description('Worker node subnet address space')
+param workerSubnetCidr string = '10.100.70.0/23'
+
+@description('Master node subnet address space')
+param masterSubnetCidr string = '10.100.76.0/24'
+
+@description('Master Node VM Type')
+param masterVmSize string = 'Standard_D8s_v3'
+
+@description('Worker Node VM Type')
+param workerVmSize string = 'Standard_D4s_v3'
+
+@description('Worker Node Disk Size in GB')
+@minValue(128)
+param workerVmDiskSize int = 128
+
+@description('Number of Worker Nodes')
+@minValue(3)
+param workerCount int = 3
+
+@description('Cidr for Pods')
+param podCidr string = '10.128.0.0/14'
+
+@metadata({
+ description: 'Cidr of service'
+})
+param serviceCidr string = '172.30.0.0/16'
+
+@description('Unique name for the cluster')
+param clusterName string
+
+@description('Tags for resources')
+param tags object = {
+ env: 'Dev'
+ dept: 'Ops'
+}
+
+@description('Api Server Visibility')
+@allowed([
+ 'Private'
+ 'Public'
+])
+param apiServerVisibility string = 'Public'
+
+@description('Ingress Visibility')
+@allowed([
+ 'Private'
+ 'Public'
+])
+param ingressVisibility string = 'Public'
+
+@description('The Application ID of an Azure Active Directory client application')
+param aadClientId string
+
+@description('The Object ID of an Azure Active Directory client application')
+param aadObjectId string
+
+@description('The secret of an Azure Active Directory client application')
+@secure()
+param aadClientSecret string
+
+@description('The ObjectID of the Resource Provider Service Principal')
+param rpObjectId string
+
+var contribRole = '/subscriptions/${subscription().subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c'
+
+resource clusterVnetName_resource 'Microsoft.Network/virtualNetworks@2020-05-01' = {
+ name: clusterVnetName
+ location: location
+ tags: tags
+ properties: {
+ addressSpace: {
+ addressPrefixes: [
+ clusterVnetCidr
+ ]
+ }
+ subnets: [
+ {
+ name: 'master'
+ properties: {
+ addressPrefix: masterSubnetCidr
+ serviceEndpoints: [
+ {
+ service: 'Microsoft.ContainerRegistry'
+ }
+ ]
+ privateLinkServiceNetworkPolicies: 'Disabled'
+ }
+ }
+ {
+ name: 'worker'
+ properties: {
+ addressPrefix: workerSubnetCidr
+ serviceEndpoints: [
+ {
+ service: 'Microsoft.ContainerRegistry'
+ }
+ ]
+ }
+ }
+ ]
+ }
+}
+
+resource clusterVnetName_Microsoft_Authorization_id_name_aadObjectId 'Microsoft.Network/virtualNetworks/providers/roleAssignments@2018-09-01-preview' = {
+ name: '${clusterVnetName}/Microsoft.Authorization/${guid(resourceGroup().id, deployment().name, aadObjectId)}'
+ properties: {
+ roleDefinitionId: contribRole
+ principalId: aadObjectId
+ }
+ dependsOn: [
+ clusterVnetName_resource
+ ]
+}
+
+resource clusterVnetName_Microsoft_Authorization_id_name_rpObjectId 'Microsoft.Network/virtualNetworks/providers/roleAssignments@2018-09-01-preview' = {
+ name: '${clusterVnetName}/Microsoft.Authorization/${guid(resourceGroup().id, deployment().name, rpObjectId)}'
+ properties: {
+ roleDefinitionId: contribRole
+ principalId: rpObjectId
+ }
+ dependsOn: [
+ clusterVnetName_resource
+ ]
+}
+
+resource clusterName_resource 'Microsoft.RedHatOpenShift/OpenShiftClusters@2020-04-30' = {
+ name: clusterName
+ location: location
+ tags: tags
+ properties: {
+ clusterProfile: {
+ domain: domain
+ resourceGroupId: '/subscriptions/${subscription().subscriptionId}/resourceGroups/aro-${domain}-${location}'
+ pullSecret: pullSecret
+ }
+ networkProfile: {
+ podCidr: podCidr
+ serviceCidr: serviceCidr
+ }
+ servicePrincipalProfile: {
+ clientId: aadClientId
+ clientSecret: aadClientSecret
+ }
+ masterProfile: {
+ vmSize: masterVmSize
+ subnetId: resourceId('Microsoft.Network/virtualNetworks/subnets', clusterVnetName, 'master')
+ }
+ workerProfiles: [
+ {
+ name: 'worker'
+ vmSize: workerVmSize
+ diskSizeGB: workerVmDiskSize
+ subnetId: resourceId('Microsoft.Network/virtualNetworks/subnets', clusterVnetName, 'worker')
+ count: workerCount
+ }
+ ]
+ apiserverProfile: {
+ visibility: apiServerVisibility
+ }
+ ingressProfiles: [
+ {
+ name: 'default'
+ visibility: ingressVisibility
+ }
+ ]
+ }
+ dependsOn: [
+ clusterVnetName_resource
+ ]
+}
+```
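Both the ARM template and the Bicep file build the `contribRole` value from the well-known role definition GUID `b24988ac-6180-42a0-ab88-20f7382dd24c`, which identifies the built-in Contributor role. As a sketch with a placeholder subscription ID, the same resource ID can be constructed in shell:

```shell
# Build the Contributor role definition resource ID (placeholder subscription ID)
SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
CONTRIB_ROLE="/subscriptions/${SUBSCRIPTION_ID}/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c"
echo "$CONTRIB_ROLE"
```

The templates grant this role on the virtual network to both the cluster's service principal and the resource provider's service principal.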
+
+## Deploy the azuredeploy.json template
+
+This section provides information on deploying the azuredeploy.json template.
+
+### azuredeploy.json parameters
+
+The azuredeploy.json template is used to deploy an Azure Red Hat OpenShift cluster. The following parameters are required.
+
+| Property | Description | Valid Options | Default Value |
+|-|-|-|-|
+| `domain` | The domain prefix for the cluster. | | none |
+| `pullSecret` | The pull secret that you obtained from the Red Hat OpenShift Cluster Manager web site. | | |
+| `clusterName` | The name of the cluster. | | |
+| `aadClientId` | The application ID (a GUID) of an Azure Active Directory (Azure AD) client application. | | |
+| `aadObjectId` | The object ID (a GUID) of the service principal for the Azure AD client application. | | |
+| `aadClientSecret` | The client secret of the service principal for the Azure AD client application, as a secure string. | | |
+| `rpObjectId` | The object ID (a GUID) of the resource provider service principal. | | |
+
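If you prefer a parameters file over inline parameters, the required values can be supplied in a standard ARM parameters file. The values below are illustrative placeholders, not working credentials:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "domain": { "value": "mydomain" },
    "pullSecret": { "value": "<pull secret JSON from the Red Hat OpenShift Cluster Manager, as a string>" },
    "clusterName": { "value": "aro-cluster" },
    "aadClientId": { "value": "<client application ID (GUID)>" },
    "aadObjectId": { "value": "<service principal object ID (GUID)>" },
    "aadClientSecret": { "value": "<client secret>" },
    "rpObjectId": { "value": "<resource provider service principal object ID (GUID)>" }
  }
}
```

Pass the file with `--parameters @azuredeploy.parameters.json` (Azure CLI) or `-TemplateParameterFile` (PowerShell) instead of supplying each parameter individually.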
+The template parameters below have default values. They can be specified, but they aren't explicitly required.
+
+| Property | Description | Valid Options | Default Value |
+|-|-|-|-|
+| `location` | The location of the new ARO cluster. This location can be the same as or different from the resource group region. | | eastus |
+| `clusterVnetName` | The name of the virtual network for the ARO cluster. | | aro-vnet |
+| `clusterVnetCidr` | The address space of the ARO virtual network, in [Classless Inter-Domain Routing](https://wikipedia.org/wiki/Classless_Inter-Domain_Routing) (CIDR) notation. | | 10.100.0.0/15 |
+| `workerSubnetCidr` | The address space of the worker node subnet, in CIDR notation. | | 10.100.70.0/23 |
+| `masterSubnetCidr` | The address space of the control plane node subnet, in CIDR notation. | | 10.100.76.0/24 |
+| `masterVmSize` | The [virtual machine type/size](../virtual-machines/sizes.md) of the control plane nodes. | | Standard_D8s_v3 |
+| `workerVmSize` | The virtual machine type/size of the worker nodes. | | Standard_D4s_v3 |
+| `workerVmDiskSize` | The disk size of each worker node, in gigabytes. | | 128 |
+| `workerCount` | The number of worker nodes. | | 3 |
+| `podCidr` | The address space of the pods, in CIDR notation. | | 10.128.0.0/14 |
+| `serviceCidr` | The address space of the services, in CIDR notation. | | 172.30.0.0/16 |
+| `tags` | A hash table of resource tags. | | `@{env = 'Dev'; dept = 'Ops'}` |
+| `apiServerVisibility` | The visibility of the API server. | `Public`, `Private` | Public |
+| `ingressVisibility` | The ingress (entrance) visibility. | `Public`, `Private` | Public |
+
+The following sections provide instructions using PowerShell or Azure CLI.
+
+## PowerShell steps
+
+Perform the following steps if you are using PowerShell.
+
+### Before you begin - PowerShell
+
+Before running the commands in this quickstart, you might need to run `Connect-AzAccount`. To check whether you have connectivity to Azure, run `Get-AzContext` and verify that you have access to an active Azure subscription.
+
+> [!NOTE]
+> This template uses the pull secret text that was obtained from the Red Hat OpenShift Cluster Manager website. Before proceeding, ensure you have the pull secret saved locally as `pull-secret.txt`.
+
+```powershell
+$rhosPullSecret= Get-Content .\pull-secret.txt -Raw # the pull secret text that was obtained from the Red Hat OpenShift Cluster Manager website
+```
+
+### Define the following parameters as environment variables - PowerShell
+
+```powershell
+$resourceGroup="aro-rg" # the new resource group for the cluster
+$location="eastus" # the location of the new ARO cluster
+$domain="mydomain" # the domain prefix for the cluster
+$aroClusterName="cluster" # the name of the cluster
+```
+
+### Register the required resource providers - PowerShell
+
+Register the following resource providers in your subscription: `Microsoft.RedHatOpenShift`, `Microsoft.Compute`, `Microsoft.Storage` and `Microsoft.Authorization`.
+
+```powershell
+Register-AzResourceProvider -ProviderNamespace Microsoft.RedHatOpenShift
+Register-AzResourceProvider -ProviderNamespace Microsoft.Compute
+Register-AzResourceProvider -ProviderNamespace Microsoft.Storage
+Register-AzResourceProvider -ProviderNamespace Microsoft.Authorization
+```
+
+### Create the new resource group - PowerShell
+
+```powershell
+New-AzResourceGroup -Name $resourceGroup -Location $location
+```
+
+### Create a new service principal and assign roles - PowerShell
+
+```powershell
+$suffix=Get-Random # random suffix for the Service Principal
+$spDisplayName="sp-$resourceGroup-$suffix"
+$azureADAppSp = New-AzADServicePrincipal -DisplayName $spDisplayName -Role Contributor
+
+New-AzRoleAssignment -ObjectId $azureADAppSp.Id -RoleDefinitionName 'User Access Administrator' -ResourceGroupName $resourceGroup -ObjectType 'ServicePrincipal'
+New-AzRoleAssignment -ObjectId $azureADAppSp.Id -RoleDefinitionName 'Contributor' -ResourceGroupName $resourceGroup -ObjectType 'ServicePrincipal'
+```
+
+### Get the Service Principal password - PowerShell
+
+```powershell
+$aadClientSecretDigest = ConvertTo-SecureString -String $azureADAppSp.PasswordCredentials.SecretText -AsPlainText -Force
+```
+
+### Get the service principal for the OpenShift resource provider - PowerShell
+
+```powershell
+$rpOpenShift = Get-AzADServicePrincipal -DisplayName 'Azure Red Hat OpenShift RP' | Select-Object -First 1
+```
+
+### Check the parameters before deploying the cluster - PowerShell
+
+```powershell
+# setup the parameters for the deployment
+$templateParams = @{
+ domain = $domain
+ clusterName = $aroClusterName
+ location = $location
+ aadClientId = $azureADAppSp.AppId
+ aadObjectId = $azureADAppSp.Id
+ aadClientSecret = $aadClientSecretDigest
+ rpObjectId = $rpOpenShift.Id
+ pullSecret = $rhosPullSecret
+}
+
+Write-Verbose (ConvertTo-Json $templateParams) -Verbose
+```
+
+### Deploy the Azure Red Hat OpenShift cluster using the ARM template - PowerShell
+
+```powershell
+New-AzResourceGroupDeployment -ResourceGroupName $resourceGroup @templateParams `
+    -TemplateFile azuredeploy.json
+```
+
+### Connect to your cluster - PowerShell
+
+To connect to your new cluster, review the steps in [Connect to an Azure Red Hat OpenShift 4 cluster](tutorial-connect-cluster.md).
+
+### Clean up resources - PowerShell
+
+Once you're done, run the following command to delete your resource group and all the resources you created in this tutorial.
+
+```powershell
+Remove-AzResourceGroup -Name $resourceGroup -Force
+```
+
+## Azure CLI steps
+
+Perform the following steps if you are using Azure CLI.
+
+### Before you begin - Azure CLI
+
+You might need to run `az login` before running the commands in this quickstart. Check whether you have connectivity to Azure before proceeding. To check whether you have connectivity, run `az account list` and verify that you have access to an active Azure subscription.
+
+> [!NOTE]
+> This template will use the pull secret text that was obtained from the Red Hat OpenShift Cluster Manager website. Before proceeding
+> make sure you have that secret saved locally as `pull-secret.txt`.
+
+```azurecli-interactive
+PULL_SECRET=$(cat pull-secret.txt) # the pull secret text
+```
+
+### Define the following parameters as environment variables - Azure CLI
+
+```azurecli-interactive
+RESOURCEGROUP=aro-rg # the new resource group for the cluster
+LOCATION=eastus # the location of the new cluster
+DOMAIN=mydomain # the domain prefix for the cluster
+CLUSTER=aro-cluster # the name of the cluster
+```
+
+### Register the required resource providers - Azure CLI
+
+Register the following resource providers in your subscription: `Microsoft.RedHatOpenShift`, `Microsoft.Compute`, `Microsoft.Storage` and `Microsoft.Authorization`.
+
+```azurecli-interactive
+az provider register --namespace 'Microsoft.RedHatOpenShift' --wait
+az provider register --namespace 'Microsoft.Compute' --wait
+az provider register --namespace 'Microsoft.Storage' --wait
+az provider register --namespace 'Microsoft.Authorization' --wait
+```
+
+### Create the new resource group - Azure CLI
+
+```azurecli-interactive
+az group create --name $RESOURCEGROUP --location $LOCATION
+```
+
+### Create a service principal for the new Azure AD application - Azure CLI
+
+```azurecli-interactive
+az ad sp create-for-rbac --name "sp-$RESOURCEGROUP-${RANDOM}" --role Contributor > app-service-principal.json
+SP_CLIENT_ID=$(jq -r '.appId' app-service-principal.json)
+SP_CLIENT_SECRET=$(jq -r '.password' app-service-principal.json)
+SP_OBJECT_ID=$(az ad sp show --id $SP_CLIENT_ID | jq -r '.objectId')
+```
+
+### Assign the User Access Administrator and Contributor roles to the new service principal - Azure CLI
+
+```azurecli-interactive
+az role assignment create \
+ --role 'User Access Administrator' \
+ --assignee-object-id $SP_OBJECT_ID \
+ --resource-group $RESOURCEGROUP \
+ --assignee-principal-type 'ServicePrincipal'
+
+az role assignment create \
+ --role 'Contributor' \
+ --assignee-object-id $SP_OBJECT_ID \
+ --resource-group $RESOURCEGROUP \
+ --assignee-principal-type 'ServicePrincipal'
+```
+
+### Get the service principal object ID for the OpenShift resource provider - Azure CLI
+
+```azurecli-interactive
+ARO_RP_SP_OBJECT_ID=$(az ad sp list --display-name "Azure Red Hat OpenShift RP" --query [0].objectId -o tsv)
+```
+
+### Deploy the cluster - Azure CLI
+
+```azurecli-interactive
+az deployment group create \
+ --name aroDeployment \
+ --resource-group $RESOURCEGROUP \
+ --template-file azuredeploy.json \
+ --parameters location=$LOCATION \
+ --parameters domain=$DOMAIN \
+ --parameters pullSecret=$PULL_SECRET \
+  --parameters clusterName=$CLUSTER \
+ --parameters aadClientId=$SP_CLIENT_ID \
+ --parameters aadObjectId=$SP_OBJECT_ID \
+ --parameters aadClientSecret=$SP_CLIENT_SECRET \
+ --parameters rpObjectId=$ARO_RP_SP_OBJECT_ID
+```
+
+### Connect to your cluster - Azure CLI
+
+To connect to your new cluster, review the steps in [Connect to an Azure Red Hat OpenShift 4 cluster](tutorial-connect-cluster.md).
+
+### Clean up resources - Azure CLI
+
+Once you're done, run the following command to delete your resource group and all the resources you created in this tutorial.
+
+```azurecli-interactive
+az group delete --name $RESOURCEGROUP --yes
+```
+
+## Next steps
+
+In this article, you learned how to create an Azure Red Hat OpenShift cluster running OpenShift 4 using both ARM templates and Bicep.
+
+Advance to the next article to learn how to configure the cluster for authentication using Azure Active Directory.
+
+* [Rotate service principal credentials for your Azure Red Hat OpenShift (ARO) Cluster](howto-service-principal-credential-rotation.md)
+
+* [Configure authentication with Azure Active Directory using the command line](configure-azure-ad-cli.md)
+
+* [Configure authentication with Azure Active Directory using the Azure portal and OpenShift web console](configure-azure-ad-cli.md)
role-based-access-control Resource Provider Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/resource-provider-operations.md
Click the resource provider name in the following table to see the list of opera
| **Virtual desktop infrastructure** | | [Microsoft.DesktopVirtualization](#microsoftdesktopvirtualization) | | **Other** |
+| [Microsoft.Chaos](#microsoftchaos) |
| [Microsoft.DigitalTwins](#microsoftdigitaltwins) | | [Microsoft.ServicesHub](#microsoftserviceshub) |
Azure service: [Windows Virtual Desktop](../virtual-desktop/index.yml)
## Other
+### Microsoft.Chaos
+
+Azure service: [Azure Chaos Studio](../chaos-studio/index.yml)
+
+> [!div class="mx-tableFixed"]
+> | Action | Description |
+> | | |
+> | Microsoft.Chaos/register/action | Registers the subscription for the Chaos Resource Provider and enables the creation of Chaos resources. |
+> | Microsoft.Chaos/unregister/action | Unregisters the subscription for the Chaos Resource Provider and disables the creation of Chaos resources. |
+> | Microsoft.Chaos/artifactSetDefinitions/write | Creates an Artifact Set Definition which describes the set of artifacts to capture for a given Chaos Experiment. |
+> | Microsoft.Chaos/artifactSetDefinitions/read | Gets all Artifact Set Definitions that extend a Chaos Experiment resource. |
+> | Microsoft.Chaos/artifactSetDefinitions/delete | Deletes all Artifact Set Definitions that extend a Chaos Experiment resource. |
+> | Microsoft.Chaos/artifactSetSnapshots/read | Gets all Artifact Set Snapshots that extend a Chaos Experiment resource. |
+> | Microsoft.Chaos/artifactSetSnapshots/artifactSnapshots/read | Gets all Artifact Snapshots that extend an Artifact Set Snapshot. |
+> | Microsoft.Chaos/experiments/write | Creates or updates a Chaos Experiment resource in a resource group. |
+> | Microsoft.Chaos/experiments/delete | Deletes a Chaos Experiment resource in a resource group. |
+> | Microsoft.Chaos/experiments/read | Gets all Chaos Experiments in a resource group. |
+> | Microsoft.Chaos/experiments/start/action | Starts a Chaos Experiment to inject faults. |
+> | Microsoft.Chaos/experiments/cancel/action | Cancels a running Chaos Experiment to stop the fault injection. |
+> | Microsoft.Chaos/experiments/executionDetails/read | Gets all chaos experiment execution details for a given chaos experiment. |
+> | Microsoft.Chaos/experiments/statuses/read | Gets all chaos experiment execution statuses for a given chaos experiment. |
+> | Microsoft.Chaos/locations/targetTypes/read | Gets all TargetTypes. |
+> | Microsoft.Chaos/locations/targetTypes/capabilityTypes/read | Gets all CapabilityType. |
+> | Microsoft.Chaos/operations/read | Read the available Operations for Chaos Studio. |
+> | Microsoft.Chaos/skus/read | Read the available SKUs for Chaos Studio. |
+> | Microsoft.Chaos/targets/write | Creates or updates a Target resource that extends a tracked resource. |
+> | Microsoft.Chaos/targets/delete | Deletes a Target resource that extends a tracked resource. |
+> | Microsoft.Chaos/targets/read | Gets all Targets that extend a tracked resource. |
+> | Microsoft.Chaos/targets/capabilities/write | Creates or updates a Capability resource that extends a Target resource. |
+> | Microsoft.Chaos/targets/capabilities/delete | Deletes a Capability resource that extends a Target resource. |
+> | Microsoft.Chaos/targets/capabilities/read | Gets all Capabilities that extend a Target resource. |
+ ### Microsoft.DigitalTwins Azure service: [Azure Digital Twins](../digital-twins/index.yml)
search Search Howto Managed Identities Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-managed-identities-data-sources.md
Cognitive Search supports system-assigned managed identity in all scenarios, and
| [Azure Key Vault for customer-managed keys](search-security-manage-encryption-keys.md) | Yes | No | | [Debug sessions (hosted in Azure Storage)](cognitive-search-debug-session.md)| Yes | No | | [Enrichment cache (hosted in Azure Storage)](search-howto-incremental-index.md)| Yes <sup>1</sup>| No |
-| [Knowledge Store (hosted in Azure Storage)](knowledge-store-create-rest.md) | Yes | No |
+| [Knowledge Store (hosted in Azure Storage)](knowledge-store-create-rest.md) | Yes <sup>2</sup>| No |
| [Custom skills (hosted in Azure Functions or equivalent)](cognitive-search-custom-skill-interface.md) | Yes | No | <sup>1</sup> The Import data wizard doesn't currently accept a system managed identity connection string for incremental enrichment, but after the wizard completes, you can update the indexer JSON definition to include the connection string, and then rerun the indexer.
+<sup>2</sup> If your indexer has an attached skillset that writes back to Azure Storage (for example, it creates a knowledge store or caches enriched content), a managed identity won't work if the storage account is behind a firewall or has IP restrictions. This is a known limitation that will be lifted when managed identity support for skillset scenarios becomes generally available. The solution is to use a full access connection string instead of a managed identity if Azure Storage is behind a firewall.
+ Debug sessions, enrichment cache, and knowledge store are features that write to Blob Storage. Assign a system managed identity to the **Storage Blob Data Contributor** role to support these features. Knowledge store will also write to Table Storage. Assign a system managed identity to the **Storage Table Data Contributor** role to support table projections.
search Search Howto Managed Identities Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-managed-identities-storage.md
This article assumes familiarity with indexer concepts and configuration. If you
For a code example in C#, see [Index Data Lake Gen2 using Azure AD](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/master/data-lake-gen2-acl-indexing/README.md) on GitHub. > [!NOTE]
-> If your indexer has an attached skillset that writes back to Azure Storage (for example, it creates a knowledge store or caches enriched content), a managed identity won't work if the storage account is behind a firewall or has IP restrictions. This is a known limitation that will be lifted when managed identity support for skillset scenarios becomes generally available. The solution is to use a full access connection string instead of a managed identity.
+> If your indexer has an attached skillset that writes back to Azure Storage (for example, it creates a knowledge store or caches enriched content), a managed identity won't work if the storage account is behind a firewall or has IP restrictions. This is a known limitation that will be lifted when managed identity support for skillset scenarios becomes generally available. The solution is to use a full access connection string instead of a managed identity if Azure Storage is behind a firewall.
## Prerequisites
search Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/whats-new.md
Previously updated : 12/07/2021 Last updated : 03/17/2022 # What's new in Azure Cognitive Search
Learn what's new in the service. Bookmark this page to keep up to date with serv
| May | [featuresMode relevance score expansion (preview)](index-similarity-and-scoring.md#featuresMode-param) | | |March | [Native blob soft delete (preview)](search-howto-index-changed-deleted-blobs.md) | Deletes search documents if the source blob is soft-deleted in blob storage. | |March | [Management REST API (2020-03-13)](/rest/api/searchmanagement/management-api-versions) | Generally available. |
-|February | [PII Detection skill (preview)](cognitive-search-skill-pii-detection.md) | A cognitive skill that extracts and masks personal information. |
-|February | [Custom Entity Lookup skill (preview)](cognitive-search-skill-custom-entity-lookup.md) | A cognitive skill that finds words and phrases from a list and labels all documents with matching entities. |
+|February | [PII Detection skill](cognitive-search-skill-pii-detection.md) | A cognitive skill that extracts and masks personal information. |
+|February | [Custom Entity Lookup skill](cognitive-search-skill-custom-entity-lookup.md) | A cognitive skill that finds words and phrases from a list and labels all documents with matching entities. |
|January | [Customer-managed key encryption](search-security-manage-encryption-keys.md) | Generally available |
-|January | [IP rules for in-bound firewall support (preview)](service-configure-firewall.md) | New **IpRule** and **NetworkRuleSet** properties in [CreateOrUpdate API](/rest/api/searchmanagement/2020-08-01/services/create-or-update). |
-|January | [Create a private endpoint (preview)](service-create-private-endpoint.md) | Set up a Private Link for secure connections to your search service. This preview feature has a dependency [Azure Private Link](../private-link/private-link-overview.md) and [Azure Virtual Network](../virtual-network/virtual-networks-overview.md) as part of the solution. |
+|January | [IP rules for in-bound firewall support](service-configure-firewall.md) | New **IpRule** and **NetworkRuleSet** properties in [CreateOrUpdate API](/rest/api/searchmanagement/2020-08-01/services/create-or-update). |
+|January | [Create a private endpoint](service-create-private-endpoint.md) | Set up a Private Link for secure connections to your search service. This preview feature has a dependency [Azure Private Link](../private-link/private-link-overview.md) and [Azure Virtual Network](../virtual-network/virtual-networks-overview.md) as part of the solution. |
## 2019 Archive
Learn what's new in the service. Bookmark this page to keep up to date with serv
|-||-| |December | [Create Demo App](search-create-app-portal.md) | A wizard that generates a downloadable HTML file with query (read-only) access to an index, intended as a validation and testing tool rather than a short cut to a full client app.| |November | [Incremental enrichment (preview)](cognitive-search-incremental-indexing-conceptual.md) | Caches skillset processing for future reuse. |
-|November | [Document Extraction skill (preview)](cognitive-search-skill-document-extraction.md) | A cognitive skill to extract the contents of a file from within a skillset.|
+|November | [Document Extraction skill](cognitive-search-skill-document-extraction.md) | A cognitive skill to extract the contents of a file from within a skillset.|
|November | [Text Translation skill](cognitive-search-skill-text-translation.md) | A cognitive skill used during indexing that evaluates and translates text. Generally available.| |November | [Power BI templates](https://github.com/Azure-Samples/cognitive-search-templates/blob/master/README.md) | Template for visualizing content in knowledge store |
-|November | [Azure Data Lake Storage Gen2 (preview)](search-howto-index-azure-data-lake-storage.md) and [Cosmos DB Gremlin API (preview)](search-howto-index-cosmosdb.md) | New indexer data sources in public preview. |
+|November | [Azure Data Lake Storage Gen2](search-howto-index-azure-data-lake-storage.md) and [Cosmos DB Gremlin API (preview)](search-howto-index-cosmosdb.md) | New indexer data sources in public preview. |
|July | [Azure Government Cloud support](https://azure.microsoft.com/global-infrastructure/services/?regions=usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&products=search) | Generally available.| <a name="new-service-name"></a>
Azure Search was renamed to **Azure Cognitive Search** in October 2019 to reflec
## Service updates
-[Service update announcements](https://azure.microsoft.com/updates/?product=search&status=all) for Azure Cognitive Search can be found on the Azure web site.
+[Service update announcements](https://azure.microsoft.com/updates/?product=search&status=all) for Azure Cognitive Search can be found on the Azure web site.
service-bus-messaging Private Link Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/private-link-service.md
If you already have an existing namespace, you can create a private endpoint by
> If you choose **Selected networks**, add at least one IP firewall rule or a virtual network that will have access to the namespace. Choose **Disabled** if you want to restrict all traffic to this namespace over [private endpoints](private-link-service.md) only.

:::image type="content" source="./media/service-bus-ip-filtering/selected-networks.png" alt-text="Networking page with the selected networks option selected." lightbox="./media/service-bus-ip-filtering/selected-networks.png":::
- - **All networks** (default). This option enables public access from all networks using an access key. If you select the **All networks** option, the event hub accepts connections from any IP address (using the access key). This setting is equivalent to a rule that accepts the 0.0.0.0/0 IP address range.
+ - **All networks** (default). This option enables public access from all networks using an access key. If you select the **All networks** option, the Service Bus namespace accepts connections from any IP address (using the access key). This setting is equivalent to a rule that accepts the 0.0.0.0/0 IP address range.
:::image type="content" source="./media/service-bus-ip-filtering/firewall-all-networks-selected.png" alt-text="Screenshot of the Azure portal Networking page. The option to allow access from All networks is selected on the Firewalls and virtual networks tab.":::

5. To allow access to the namespace via private endpoints, select the **Private endpoint connections** tab at the top of the page.
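As a quick illustration of why **All networks** is equivalent to a rule accepting 0.0.0.0/0: every IPv4 address falls inside that range. The sketch below uses Python's standard `ipaddress` module purely for illustration; it is not part of the Service Bus configuration, and the example addresses are arbitrary:

```python
import ipaddress

# An "All networks" firewall setting behaves like a single allow rule
# covering the entire IPv4 address space.
allow_all = ipaddress.ip_network("0.0.0.0/0")

def is_allowed(client_ip: str) -> bool:
    """True if client_ip falls inside the allow-all range (it always does)."""
    return ipaddress.ip_address(client_ip) in allow_all

examples = ["10.0.0.1", "203.0.113.7", "255.255.255.255"]
results = [is_allowed(ip) for ip in examples]
```

Any address you test lands inside the range, which is exactly why restricting public access requires **Selected networks** or **Disabled** instead.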
service-bus-messaging Service Bus Sas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-sas.md
The following recommendations for using shared access signatures can help mitiga
- **Have clients automatically renew the SAS if necessary**: Clients should renew the SAS well before expiration, to allow time for retries if the service providing the SAS is unavailable. If your SAS is meant to be used for a small number of immediate, short-lived operations that are expected to be completed within the expiration period, renewal may be unnecessary, as the SAS isn't expected to be renewed. However, if you have a client that routinely makes requests via SAS, the possibility of expiration comes into play. The key consideration is to balance the need for the SAS to be short-lived (as previously stated) with the need to ensure that the client requests renewal early enough (to avoid disruption due to the SAS expiring prior to a successful renewal).
- **Be careful with the SAS start time**: If you set the start time for SAS to **now**, then due to clock skew (differences in current time according to different machines), failures may be observed intermittently for the first few minutes. In general, set the start time to be at least 15 minutes in the past. Or, don't set it at all, which will make it valid immediately in all cases. The same generally applies to the expiry time as well. Remember that you may observe up to 15 minutes of clock skew in either direction on any request.
- **Be specific with the resource to be accessed**: A security best practice is to provide users with the minimum required privileges. If a user only needs read access to a single entity, then grant them read access to that single entity, and not read/write/delete access to all entities. It also helps lessen the damage if a SAS is compromised because the SAS has less power in the hands of an attacker.
-- **Don't always use SAS**: Sometimes the risks associated with a particular operation against your Event Hubs outweigh the benefits of SAS. For such operations, create a middle-tier service that writes to your Event Hubs after business rule validation, authentication, and auditing.
+- **Don't always use SAS**: Sometimes the risks associated with a particular operation against your Service Bus outweigh the benefits of SAS. For such operations, create a middle-tier service that writes to your Service Bus after business rule validation, authentication, and auditing.
- **Always use HTTPS**: Always use HTTPS to create or distribute a SAS. If a SAS is passed over HTTP and intercepted, an attacker performing a man-in-the-middle attack is able to read the SAS and then use it just as the intended user could have, potentially compromising sensitive data or allowing for data corruption by the malicious user.

## Configuration for Shared Access Signature authentication
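Following the renewal guidance above, a client typically mints the SAS token itself and regenerates it well before expiry. A minimal Python sketch of the documented Service Bus token format (standard library only; the namespace, policy name, and key below are placeholders, not real credentials):

```python
import base64
import hashlib
import hmac
import time
import urllib.parse

def generate_sas_token(resource_uri: str, key_name: str, key: str,
                       ttl_seconds: int = 3600) -> str:
    """Build a Service Bus SharedAccessSignature valid for ttl_seconds."""
    expiry = int(time.time()) + ttl_seconds
    encoded_uri = urllib.parse.quote_plus(resource_uri)
    # The HMAC-SHA256 signature covers the encoded resource URI and the expiry.
    string_to_sign = f"{encoded_uri}\n{expiry}"
    signature = base64.b64encode(
        hmac.new(key.encode("utf-8"),
                 string_to_sign.encode("utf-8"),
                 hashlib.sha256).digest()
    ).decode("utf-8")
    return ("SharedAccessSignature "
            f"sr={encoded_uri}"
            f"&sig={urllib.parse.quote_plus(signature)}"
            f"&se={expiry}"
            f"&skn={key_name}")

# Placeholder values -- substitute your namespace, policy name, and key.
token = generate_sas_token(
    "https://contoso.servicebus.windows.net/myqueue",
    "RootManageSharedAccessKey",
    "base64-key-from-portal")
```

A renewing client would simply call `generate_sas_token` again before the `se` (expiry) timestamp is reached, in line with the renewal best practice above.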
service-fabric Service Fabric Reliable Actors Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-actors-lifecycle.md
When an actor is deactivated, the following occurs:
> ### Actor garbage collection
-When an actor is deactivated, references to the actor object are released and it can be garbage collected normally by the common language runtime (CLR) or java virtual machine (JVM) garbage collector. Garbage collection only cleans up the actor object; it does **not** remove state stored in the actor's State Manager. The next time the actor is activated, a new actor object is created and its state is restored.
+When an actor is deactivated, references to the actor object are released and it can be garbage collected normally by the common language runtime (CLR) or Java virtual machine (JVM) garbage collector. Garbage collection only cleans up the actor object; it does **not** remove state stored in the actor's State Manager. The next time the actor is activated, a new actor object is created and its state is restored.
What counts as "being used" for the purpose of deactivation and garbage collection?
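The separation between the disposable actor object and its persisted state can be illustrated with a short sketch (Python is used purely for illustration here; this is *not* the Reliable Actors SDK, and all names are hypothetical):

```python
class StateManager:
    """Stand-in for the actor's persisted State Manager (hypothetical)."""
    def __init__(self):
        self._store = {}

    def save(self, actor_id, state):
        self._store[actor_id] = dict(state)

    def load(self, actor_id):
        return dict(self._store.get(actor_id, {}))

class CounterActor:
    def __init__(self, actor_id, state_manager):
        self.actor_id = actor_id
        self._sm = state_manager
        # Activation: a fresh object restores whatever state was persisted.
        self.state = self._sm.load(actor_id)

    def increment(self):
        self.state["count"] = self.state.get("count", 0) + 1

    def deactivate(self):
        # Deactivation persists state; the object itself can then be GC'd.
        self._sm.save(self.actor_id, self.state)

sm = StateManager()
actor = CounterActor("actor-1", sm)
actor.increment()
actor.increment()
actor.deactivate()
del actor  # the object is collected, but its state is not

reactivated = CounterActor("actor-1", sm)  # new object, restored state
```

The key point mirrors the text: garbage collection only reclaims the object, while the State Manager's store survives and seeds the next activation.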
spring-cloud How To Circuit Breaker Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-circuit-breaker-metrics.md
**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article shows you how to collect Spring Cloud Resilience4j Circuit Breaker Metrics with Application Insights java in-process agent. With this feature you can monitor metrics of resilience4j circuit breaker from Application Insights with Micrometer.
+This article shows you how to collect Spring Cloud Resilience4j Circuit Breaker Metrics with Application Insights Java in-process agent. With this feature you can monitor metrics of resilience4j circuit breaker from Application Insights with Micrometer.
We use the [spring-cloud-circuit-breaker-demo](https://github.com/spring-cloud-samples/spring-cloud-circuitbreaker-demo) to show how it works.
spring-cloud How To Launch From Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-launch-from-source.md
Azure Spring Cloud enables Spring Boot applications on Azure.
-You can launch applications directly from java source code or from a pre-built JAR. This article explains the deployment procedures.
+You can launch applications directly from Java source code or from a pre-built JAR. This article explains the deployment procedures.
This quickstart explains how to:
spring-cloud How To Use Tls Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-use-tls-certificate.md
X509Certificate cert = (X509Certificate) factory.generateCertificate(is);
### Load a certificate into the trust store
-For a java application, you can choose **Load into trust store** for the selected certificate. The certificate will be automatically added to the Java default TrustStores to authenticate a server in SSL authentication.
+For a Java application, you can choose **Load into trust store** for the selected certificate. The certificate will be automatically added to the Java default TrustStores to authenticate a server in SSL authentication.
The following log from your app shows that the certificate is successfully loaded.
spring-cloud Quickstart Setup Config Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-setup-config-server.md
The following procedure sets up the Config Server using the Azure portal to depl
1. Go to the service **Overview** page and select **Config Server**.
-2. In the **Default repository** section, set **URI** to "https://github.com/azure-samples/spring-petclinic-microservices-config".
+2. In the **Default repository** section, set **URI** to `https://github.com/azure-samples/spring-petclinic-microservices-config`.
3. Select **Validate**.
storage Storage Files Migration Storsimple 8000 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-storsimple-8000.md
Later on, when you create your migration jobs, you can use this list to identify
> Selecting more than **50** StorSimple volume backups is not supported.
> Your migration jobs can only move backups, never data from the live volume. Therefore, the most recent backup is closest to the live data and should always be part of the list of backups to be moved in a migration.
+> [!CAUTION]
+> It's best to suspend all StorSimple backup retention policies before you select a backup for migration. </br>Migrating your backups takes several days or weeks. StorSimple offers backup retention policies that will delete backups. Backups you have selected for this migration may get deleted before they have a chance to be migrated.
+
### Map your existing StorSimple volumes to Azure file shares

[!INCLUDE [storage-files-migration-namespace-mapping](../../../includes/storage-files-migration-namespace-mapping.md)]
This section describes how to set up a migration job and carefully map the direc
:::image type="content" source="media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-new-job.png" alt-text="Screenshot of the new job creation form for a migration job."::: :::column-end::: :::column:::
- **Job definition name**</br>This name should indicate the set of files you're moving. Giving it a similar name as your Azure file share is a good practice. </br></br>**Location where the job runs**</br>When selecting a region, you must select the same region as your StorSimple storage account or, if that isn't available, then a region close to it. </br></br><h3>Source</h3>**Source subscription**</br>Select the subscription in which you store your StorSimple Device Manager resource. </br></br>**StorSimple resource**</br>Select your StorSimple Device Manager your appliance is registered with. </br></br>**Service data encryption key**</br>Check this [prior section in this article](#storsimple-service-data-encryption-key) in case you can't locate the key in your records. </br></br>**Device**</br>Select your StorSimple device that holds the volume where you want to migrate. </br></br>**Volume**</br>Select the source volume. Later you'll decide if you want to migrate the whole volume or subdirectories into the target Azure file share.</br></br> **Volume backups**</br>You can select *Select volume backups* to choose specific backups to move as part of this job. An upcoming, [dedicated section in this article](#selecting-volume-backups-to-migrate) covers the process in detail.</br></br><h3>Target</h3>Select the subscription, storage account, and Azure file share as the target of this migration job.</br></br><h3>Directory mapping</h3>[A dedicated section in this article](#directory-mapping), discusses all relevant details.
+ **Job definition name**</br>This name should indicate the set of files you're moving. Giving it a similar name as your Azure file share is a good practice. </br></br>**Location where the job runs**</br>When selecting a region, you must select the same region as your StorSimple storage account or, if that isn't available, then a region close to it. </br></br><h3>Source</h3>**Source subscription**</br>Select the subscription in which you store your StorSimple Device Manager resource. </br></br>**StorSimple resource**</br>Select your StorSimple Device Manager your appliance is registered with. </br></br>**Service data encryption key**</br>Check this [prior section in this article](#storsimple-service-data-encryption-key) in case you can't locate the key in your records. </br></br>**Device**</br>Select your StorSimple device that holds the volume where you want to migrate. </br></br>**Volume**</br>Select the source volume. Later you'll decide if you want to migrate the whole volume or subdirectories into the target Azure file share. </br></br> **Volume backups**</br>You can select *Select volume backups* to choose specific backups to move as part of this job. An upcoming, [dedicated section in this article](#selecting-volume-backups-to-migrate) covers the process in detail.</br></br><h3>Target</h3>Select the subscription, storage account, and Azure file share as the target of this migration job.</br></br><h3>Directory mapping</h3>[A dedicated section in this article](#directory-mapping), discusses all relevant details.
:::column-end::: :::row-end:::
There are important aspects around choosing backups that need to be migrated:
:::row-end::: > [!CAUTION]
-> Selecting more than 50 StorSimple volume backups is not supported. Jobs with a large number of backups may fail.
+> Selecting more than 50 StorSimple volume backups is not supported. Jobs with a large number of backups may fail. Make sure your backup retention policies don't delete a selected backup before it has a chance to be migrated!
### Directory mapping
In the job blade that opens, you can see your job's current status and a list of
:::image type="content" source="media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-job-never-ran-focused.png" alt-text="Screenshot of the migration job blade with a highlight around the command to start the job. It also displays the selected backups scheduled for migration." lightbox="media/storage-files-migration-storsimple-8000/storage-files-migration-storsimple-8000-job-never-ran.png"::: :::column-end::: :::column:::
- Initially, the migration job will have the status: **Never ran**. </br>When you are ready, you can start this migration job. (Select the image for a version with higher resolution.) </br> When a backup was successfully migrated, an automatic Azure file share snapshot will be taken. The original backup date of your StorSimple backup will be placed in the *Comments* section of the Azure file share snapshot. Utilizing this field will allow you to see when the data was originally backed up as compared to the time the file share snapshot was taken.
- </br></br>
- > [!CAUTION]
- > Backups must be processed from oldest to newest. Once a migration job is created, you can't change the list of selected StorSimple volume backups. Don't start the job if the list of Backups is incorrect or incomplete. Delete the job and make a new one with the correct backups selected.
+ Initially, the migration job will have the status: **Never ran**. </br>When you are ready, you can start this migration job. (Select the image for a version with higher resolution.) </br> When a backup was successfully migrated, an automatic Azure file share snapshot will be taken. The original backup date of your StorSimple backup will be placed in the *Comments* section of the Azure file share snapshot. Utilizing this field will allow you to see when the data was originally backed up as compared to the time the file share snapshot was taken.
:::column-end::: :::row-end:::
+> [!CAUTION]
+> Backups must be processed from oldest to newest. Once a migration job is created, you can't change the list of selected StorSimple volume backups. Don't start the job if the list of backups is incorrect or incomplete. Delete the job and make a new one with the correct backups selected. For each selected backup, check your retention schedules. Backups may get deleted by one or more of your retention policies before they have a chance to be migrated!
+
### Per-item errors

The migration jobs have two columns in the list of backups that list any issues that may have occurred during the copy:
storsimple Storsimple Data Manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-data-manager-overview.md
NA Previously updated : 08/17/2021 Last updated : 03/18/2022
There are also limitations on what can be stored in Azure file shares. It's impo
- Maximum supported file size for a blob is 4.7 TiB.
- Most recent available backup set will be used.
- File metadata is not uploaded with the file content.
- - Uploaded blobs are of the Block Blob type. Thus, any uploaded VHD can't be used in Azure Virtual Machines.
+ - Uploaded blobs are of the Block Blob type. Thus, any uploaded VHD or VHDX can't be used in Azure Virtual Machines.
## Next steps
virtual-desktop Multimedia Redirection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/multimedia-redirection.md
Title: Multimedia redirection on Azure Virtual Desktop - Azure
description: How to use multimedia redirection for Azure Virtual Desktop (preview). Previously updated : 03/16/2022 Last updated : 03/18/2022
The following issues are ones we're already aware of, so you won't need to repor
- When you resize the video window, the window's size will adjust faster than the video itself. You'll also see this issue when minimizing and maximizing the window.
+- When the display scale factor of the screen isn't at 100% and you've set the video window to a certain size, you might see a gray patch on the screen. In most cases, you can get rid of the gray patch by resizing the window.
+
## Next steps

If you're interested in video streaming on other parts of Azure Virtual Desktop, check out [Teams for Azure Virtual Desktop](teams-on-avd.md).
virtual-machines Compute Benchmark Scores https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/compute-benchmark-scores.md
The following CoreMark benchmark scores show compute performance for select Azur
| Standard_F48s_v2 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 48 | 1 | 96.0 | 742,828 | 17,316 | 2.33% | 112 | | Standard_F64s_v2 | Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz | 64 | 2 | 128.0 | 1,030,552 | 8,106 | 0.79% | 70 | | Standard_F64s_v2 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 64 | 2 | 128.0 | 1,028,052 | 9,373 | 0.91% | 168 |
-| Standard_F72s_v2 | Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz | 72 | 2 | 144.0 | 561,588 | 8,677 | 1.55% | 84 |
-| Standard_F72s_v2 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 72 | 2 | 144.0 | 561,997 | 9,731 | 1.73% | 98 |
## General purpose

### DSv3 - General Compute + Premium Storage
The following CoreMark benchmark scores show compute performance for select Azur
| Standard_D32as_v4 | AMD EPYC 7452 32-Core Processor | 32 | 4 | 128.0 | 566,270 | 8,484 | 1.50% | 140 | | Standard_D48as_v4 | AMD EPYC 7452 32-Core Processor | 48 | 6 | 192.0 | 829,547 | 15,679 | 1.89% | 126 | | Standard_D64as_v4 | AMD EPYC 7452 32-Core Processor | 64 | 8 | 256.0 | 1,088,030 | 16,708 | 1.54% | 28 |
-| Standard_D96as_v4 | AMD EPYC 7452 32-Core Processor | 96 | 12 | 384.0 | 751,849 | 6,836 | 0.91% | 14 |
+ ### Dav4 (03/25/2021 PBIID:9198755)
The following CoreMark benchmark scores show compute performance for select Azur
| Standard_D32a_v4 | AMD EPYC 7452 32-Core Processor | 32 | 4 | 128.0 | 567,019 | 11,019 | 1.94% | 210 | | Standard_D48a_v4 | AMD EPYC 7452 32-Core Processor | 48 | 6 | 192.0 | 835,617 | 13,097 | 1.57% | 140 | | Standard_D64a_v4 | AMD EPYC 7452 32-Core Processor | 64 | 8 | 256.0 | 1,099,165 | 21,962 | 2.00% | 252 |
-| Standard_D96a_v4 | AMD EPYC 7452 32-Core Processor | 96 | 12 | 384.0 | 749,340 | 8,728 | 1.16% | 126 |
+ ### DDSv4 (03/26/2021 PBIID:9198755)
The following CoreMark benchmark scores show compute performance for select Azur
| Standard_E64as_v4 | AMD EPYC 7452 32-Core Processor | 64 | 8 | 512.0 | 1,097,588 | 26,100 | 2.38% | 280 | | Standard_E64-16as_v4 | AMD EPYC 7452 32-Core Processor | 16 | 8 | 512.0 | 284,934 | 5,065 | 1.78% | 154 | | Standard_E64-32as_v4 | AMD EPYC 7452 32-Core Processor | 32 | 8 | 512.0 | 561,951 | 9,691 | 1.72% | 140 |
-| Standard_E96as_v4 | AMD EPYC 7452 32-Core Processor | 96 | 12 | 672.0 | 750,338 | 9,645 | 1.29% | 182 |
| Standard_E96-24as_v4 | AMD EPYC 7452 32-Core Processor | 24 | 11 | 672.0 | 423,442 | 8,504 | 2.01% | 182 | | Standard_E96-48as_v4 | AMD EPYC 7452 32-Core Processor | 48 | 11 | 672.0 | 839,993 | 14,218 | 1.69% | 70 |
The following CoreMark benchmark scores show compute performance for select Azur
| Standard_E32a_v4 | AMD EPYC 7452 32-Core Processor | 32 | 4 | 256.0 | 565,363 | 10,941 | 1.94% | 126 | | Standard_E48a_v4 | AMD EPYC 7452 32-Core Processor | 48 | 6 | 384.0 | 837,493 | 15,803 | 1.89% | 126 | | Standard_E64a_v4 | AMD EPYC 7452 32-Core Processor | 64 | 8 | 512.0 | 1,097,111 | 30,290 | 2.76% | 336 |
-| Standard_E96a_v4 | AMD EPYC 7452 32-Core Processor | 96 | 12 | 672.0 | 749,908 | 8,646 | 1.15% | 196 |
### EDSv4 (03/27/2021 PBIID:9198755)
The following CoreMark benchmark scores show compute performance for select Azur
| Standard_E64-16ds_v4 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 16 | 2 | 504.0 | 260,677 | 3,340 | 1.28% | 154 | | Standard_E64-32ds_v4 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 32 | 2 | 504.0 | 514,504 | 4,082 | 0.79% | 98 |
-### Edsv4 Isolated Extended
-(04/05/2021 PBIID:9198755)
-
-| VM Size | CPU | vCPUs | NUMA Nodes | Memory(GiB) | Avg Score | StdDev | StdDev% | #Runs |
-| | | : | : | : | : | : | : | : |
-| Standard_E80ids_v4 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 80 | 2 | 504.0 | 622,608 | 10,276 | 1.65% | 336 |
- ### EDv4 (03/26/2021 PBIID:9198755)
The following CoreMark benchmark scores show compute performance for select Azur
| Standard_E48d_v4 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 48 | 2 | 384.0 | 761,410 | 21,640 | 2.84% | 336 | | Standard_E64d_v4 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 64 | 2 | 504.0 | 1,030,708 | 9,500 | 0.92% | 322 |
-### EIASv4
-(04/05/2021 PBIID:9198755)
-
-| VM Size | CPU | vCPUs | NUMA Nodes | Memory(GiB) | Avg Score | StdDev | StdDev% | #Runs |
-| | | : | : | : | : | : | : | : |
-| Standard_E96ias_v4 | AMD EPYC 7452 32-Core Processor | 96 | 12 | 672.0 | 749,545 | 8,690 | 1.16% | 28 |
- ### Esv4 (03/25/2021 PBIID:9198755)
The following CoreMark benchmark scores show compute performance for select Azur
| Standard_E64-16s_v4 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 16 | 2 | 504.0 | 224,499 | 3,955 | 1.76% | 168 | | Standard_E64-32s_v4 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 32 | 2 | 504.0 | 441,521 | 30,939 | 7.01% | 168 |
-### Esv4 Isolated Extended
-(04/05/2021 PBIID:9198755)
-
-| VM Size | CPU | vCPUs | NUMA Nodes | Memory(GiB) | Avg Score | StdDev | StdDev% | #Runs |
-| | | : | : | : | : | : | : | : |
-| Standard_E80is_v4 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 80 | 2 | 504.0 | 623,225 | 9,806 | 1.57% | 322 |
- ### Ev4 (03/25/2021 PBIID:9198755)
The following CoreMark benchmark scores show compute performance for select Azur
Windows numbers were computed by running CoreMark on Windows Server 2019. CoreMark was configured with the number of threads set to the number of virtual CPUs, and concurrency set to `PThreads`. The target number of iterations was adjusted based on expected performance to provide a runtime of at least 20 seconds (typically much longer). The final score represents the number of iterations completed divided by the number of seconds it took to run the test. Each test was run at least seven times on each VM. Test run dates are shown above. Tests were run on multiple VMs, across the Azure public regions each VM was supported in on the date of the run.
+
### Running Coremark on Azure VMs

**Download:**
virtual-machines Exchange Online Integration Sap Email Outbound https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/exchange-online-integration-sap-email-outbound.md
+
+ Title: Exchange Online Integration for Email-Outbound from SAP NetWeaver | Microsoft Docs
+description: Learn about Exchange Online integration for email outbound from SAP NetWeaver.
+++++ Last updated : 03/11/2022+++
+# Exchange Online Integration for Email-Outbound from SAP NetWeaver
+
+Sending emails from your SAP backend is a standard feature widely distributed for use cases such as alerting for batch jobs, SAP workflow state changes or invoice distribution. Many customers established the setup using [Exchange Server On-Premises](/exchange/exchange-server). With a shift to [Microsoft 365](https://www.microsoft.com/microsoft-365) and [Exchange Online](/exchange/exchange-online) comes a set of cloud-native approaches impacting that setup.
+
+This article describes the setup for **outbound** email-communication from NetWeaver-based SAP systems to Exchange Online.
+
+## Overview
+
+Existing implementations relied on SMTP Auth and an elevated trust relationship, because the legacy Exchange Server on-premises could live close to the SAP system itself and was governed by customers themselves. With Exchange Online there's a shift in responsibilities and connectivity paradigm. Microsoft supplies Exchange Online as a Software-as-a-Service offering built to be consumed securely and as effectively as possible from anywhere in the world over the public Internet.
+
+Follow our standard [guide](/exchange/mail-flow-best-practices/how-to-set-up-a-multifunction-device-or-application-to-send-email-using-microsoft-365-or-office-365) to understand the general configuration of a "device" that wants to send email via Microsoft 365.
+
+> [!IMPORTANT]
+> Microsoft disabled Basic Authentication for Exchange Online as of 2020 for newly created Microsoft 365 tenants. In addition, the feature gets disabled for existing tenants with no prior usage of Basic Authentication starting October 2020. See our developer [blog](https://devblogs.microsoft.com/microsoft365dev/deferred-end-of-support-date-for-basic-authentication-in-exchange-online/) for reference.
+
+> [!IMPORTANT]
+> SMTP Auth was exempted from the Basic Auth feature sunset process. However, this is a security risk for your estate, so we advise against it. See the latest [post](https://techcommunity.microsoft.com/t5/exchange-team-blog/basic-authentication-and-exchange-online-september-2021-update/ba-p/2772210) by our Exchange Team on the matter.
+
+> [!IMPORTANT]
+> Current OAuth support for SMTP is described on our [Exchange Server documentation for legacy protocols](/exchange/client-developer/legacy-protocols/how-to-authenticate-an-imap-pop-smtp-application-by-using-oauth).
+
+## Setup considerations
+
+Given the sunset exception for SMTP AUTH, there are four different options supported by SAP NetWeaver that we want to describe. The first three correlate with the scenarios described in the [Exchange Online documentation](/exchange/mail-flow-best-practices/how-to-set-up-a-multifunction-device-or-application-to-send-email-using-microsoft-365-or-office-365).
+
+1. SMTP Authentication Client Submission
+2. SMTP Direct Send
+3. Using Exchange Online SMTP relay connector
+4. Using SMTP relay server as intermediary to Exchange Online
+
+For brevity we'll refer to the [**SAP Connect administration tool**](https://wiki.scn.sap.com/wiki/display/SI/SCOT+-+SAPconnect+Administration) used for the mail server setup only by its transaction code SCOT.
+
+## Option 1: SMTP Authentication Client Submission
+
+Choose this option when you want to send mail to **people inside and outside** your organization.
+
+Connect SAP applications directly to Microsoft 365 using SMTP Auth endpoint **smtp.office365.com** in SCOT.
+
+A valid email address will be required to authenticate with Microsoft 365. The email address of the account that's used to authenticate with Microsoft 365 will appear as the sender of messages from the SAP application.
+
+### Requirements for SMTP AUTH
+
+- **SMTP AUTH**: Needs to be enabled for the mailbox being used. SMTP AUTH is disabled for organizations created after January 2020 but can be enabled per-mailbox. For more information, see [Enable or disable authenticated client SMTP submission (SMTP AUTH) in Exchange Online](/exchange/clients-and-mobile-in-exchange-online/authenticated-client-smtp-submission).
+- **Authentication**: Use Basic Authentication (which is simply a username and password) to send email from the SAP application. If SMTP AUTH is intentionally disabled for the organization, you must use Option 2, 3, or 4 below.
+- **Mailbox**: You must have a licensed Microsoft 365 mailbox to send email from.
+- **Transport Layer Security (TLS)**: Your SAP Application must be able to use TLS version 1.2 and above.
+- **Port**: Port 587 (recommended) or port 25 is required and must be unblocked on your network. Some network firewalls or Internet Service Providers block ports, especially port 25, because that's the port that email servers use to send mail.
+- **DNS**: Use the DNS name smtp.office365.com. Don't use an IP address for the Microsoft 365 server, as IP addresses aren't supported.
+
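Outside of SAP, the client-submission handshake these requirements describe (STARTTLS on port 587, then authenticated send) can be sketched with Python's standard library. This is an illustration of the protocol flow only, not part of the SCOT setup; the mailbox addresses and password are placeholders:

```python
import smtplib
import ssl
from email.message import EmailMessage

SMTP_HOST = "smtp.office365.com"
SMTP_PORT = 587  # recommended submission port

def build_message(sender: str, recipient: str,
                  subject: str, body: str) -> EmailMessage:
    msg = EmailMessage()
    msg["From"] = sender       # must match the authenticated mailbox
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def send_via_m365(msg: EmailMessage, user: str, password: str) -> None:
    # The default context negotiates TLS 1.2+, as Microsoft 365 requires.
    context = ssl.create_default_context()
    with smtplib.SMTP(SMTP_HOST, SMTP_PORT) as server:
        server.starttls(context=context)
        server.login(user, password)  # SMTP AUTH with the licensed mailbox
        server.send_message(msg)

# Placeholder addresses -- the sender must be the SMTP AUTH-enabled mailbox.
msg = build_message("sap-alerts@contoso.com", "ops@contoso.com",
                    "Batch job finished", "Job FINISHED with status OK.")
# send_via_m365(msg, "sap-alerts@contoso.com", "<password>")  # needs network access
```

The send call is commented out because it requires network access and valid credentials; SCOT performs the equivalent exchange once the ICM and mail-host settings below are in place.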
+### How to Enable SMTP auth for mailboxes in Exchange Online
+
+There are two ways to enable SMTP AUTH in Exchange Online:
+
+1. For a single account (per mailbox), overriding the tenant-wide setting.
+2. At the organization level.
+
+> [!NOTE]
+> If your authentication policy disables basic authentication for SMTP, clients cannot use the SMTP AUTH protocol even if you enable the settings outlined in this article.
+
+The per-mailbox setting to enable SMTP AUTH is available in the [Microsoft 365 Admin Center](https://admin.microsoft.com/) or via [Exchange Online PowerShell](/powershell/exchange/connect-to-exchange-online-powershell).
+
+1. Open the [Microsoft 365 admin center](https://admin.microsoft.com/) and go to **Users** -> **Active users**.
+
+ :::image type="content" source="media/exchange-online-integration/admin-center-active-user-sec-1-1.png" alt-text="Admin Center - Active Users":::
+
+2. Select the user, follow the wizard, click **Mail**.
+
+3. In the **Email apps** section, click **Manage email apps**.
+
+ :::image type="content" source="media/exchange-online-integration/admin-center-sec-1-3.png" alt-text="Admin Center - Manage email":::
+
+4. Verify the **Authenticated SMTP** setting (unchecked = disabled, checked = enabled).
+
+ :::image type="content" source="media/exchange-online-integration/admin-center-sec-1-4.png" alt-text="Admin Center - SMTP setting":::
+
+5. **Save changes**.
+
+This enables SMTP AUTH for the individual user in Exchange Online that you require for SCOT.
+
+### Configure SMTP Auth with SCOT
+
+1. Ping or telnet **smtp.office365.com** on port **587** from your SAP application server to make sure ports are open and accessible.
+
+ :::image type="content" source="media/exchange-online-integration/telnet-scot-sec-1-1.png" alt-text="Screenshot of ping":::
+
+2. Make sure the SAP Internet Communication Manager (ICM) parameter is set in your instance profile. For example:
+
+ | parameter | value |
+ | --- | --- |
+ | icm/server-port-1 | PROT=SMTP,PORT=25000,TIMEOUT=180,TLS=1 |
+
+3. Restart the ICM service from the SMICM transaction and make sure the SMTP service is active.
+
+ :::image type="content" source="media/exchange-online-integration/scot-smicm-sec-1-3.png" alt-text="Screenshot of ICM setting":::
+
+4. Activate the SAPConnect service in the SICF transaction.
+
+ :::image type="content" source="media/exchange-online-integration/scot-smtp-sec-1-4.png" alt-text="SAP Connect setting in SICF":::
+
+5. Go to SCOT and double-click the SMTP node as shown below to proceed with the configuration:
+
+ :::image type="content" source="media/exchange-online-integration/scot-smtp-sec-1-5.png" alt-text="SMTP config":::
+
+ Add mail host **smtp.office365.com** with port **587**. Check the [Exchange Online docs](/exchange/mail-flow-best-practices/how-to-set-up-a-multifunction-device-or-application-to-send-email-using-microsoft-365-or-office-365#how-to-set-up-smtp-auth-client-submission) for reference.
+
+ :::image type="content" source="media/exchange-online-integration/scot-smtp-sec-1-5-1.png" alt-text="SMTP config continued":::
+
+ Click the **Settings** button (next to the Security field) to add the TLS settings and, if required, the basic authentication details. Make sure your ICM parameter (see step 2) is set accordingly.
+
+ Make sure to use a valid Microsoft 365 email ID and password, and that it's the same user you enabled for SMTP AUTH at the beginning. This email ID shows up as the sender.
+
+ :::image type="content" source="media/exchange-online-integration/scot-smtp-security-serttings-sec-1-5.png" alt-text="SMTP security config":::
+
+ Back on the previous screen, click the **Set** button and check **Internet** under **Supported Address Types**. Using the wildcard "\*" option allows sending emails to all domains without restriction.
+
+ :::image type="content" source="media/exchange-online-integration/scot-smtp-address-type-sec-1-5.png" alt-text="SMTP address type":::
+
+ :::image type="content" source="media/exchange-online-integration/scot-smtp-address-areas-sec-1-5.png" alt-text="SMTP address area":::
+
+ Next, set the default domain in SCOT.
+
+ :::image type="content" source="media/exchange-online-integration/scot-default-domain-sec-1-5.png" alt-text="SMTP default domain":::
+
+ :::image type="content" source="media/exchange-online-integration/scot-default-domain-address-sec-1-5.png" alt-text="SMTP default address":::
+
+6. Schedule a job to send the email from the submission queue. From SCOT, select **Send Job**:
+
+ :::image type="content" source="media/exchange-online-integration/scot-send-job-sec-1-6.png" alt-text="SMTP schedule job to send":::
+
+ Provide a Job name and variant if appropriate.
+
+ :::image type="content" source="media/exchange-online-integration/scot-send-job-varient-sec-1-6.png" alt-text="SMTP schedule job variant":::
+
+ Test mail submission using transaction SBWP and check the status using transaction SOST.
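The reachability test from step 1 can also be scripted. The following is a minimal sketch (the function name is illustrative, not part of SAP or Microsoft 365) that performs the same TCP check as telnet, using only the Python standard library:

```python
import socket

def check_smtp_port(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# From the SAP application server (requires outbound network access):
# check_smtp_port("smtp.office365.com", 587)
```

If the check fails, verify the firewall rules between the SAP host and Exchange Online before troubleshooting SCOT itself.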
+
+### Limitations of SMTP AUTH client submission
+
+- SCOT stores login credentials for only one user, so only one Microsoft 365 mailbox can be configured this way. Sending mail via individual SAP users requires implementing the "[Send As permission](/exchange/recipients-in-exchange-online/manage-permissions-for-recipients)" offered by Microsoft 365.
+- Microsoft 365 imposes some sending limits. See [Exchange Online limits - Receiving and sending limits](/office365/servicedescriptions/exchange-online-service-description/exchange-online-limits#receiving-and-sending-limits) for more information.
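For reference, the submission flow that SCOT performs against Exchange Online (STARTTLS on port 587, then basic authentication) can be sketched with Python's standard smtplib. The sender, recipient, and function names below are illustrative placeholders, not part of the SAP or Microsoft 365 configuration:

```python
import smtplib
from email.message import EmailMessage

def build_test_message(sender: str, recipient: str) -> EmailMessage:
    """Build a simple test mail, similar to what SBWP would submit."""
    msg = EmailMessage()
    msg["From"] = sender          # shows up as the sender; must be the AUTH user
    msg["To"] = recipient
    msg["Subject"] = "SCOT connectivity test"
    msg.set_content("Test mail via Microsoft 365 SMTP AUTH client submission.")
    return msg

def send_via_m365(msg: EmailMessage, user: str, password: str) -> None:
    """SMTP AUTH client submission: STARTTLS on port 587, then login."""
    with smtplib.SMTP("smtp.office365.com", 587, timeout=30) as smtp:
        smtp.starttls()               # TLS, matching the TLS=1 ICM setting
        smtp.login(user, password)    # the mailbox enabled for SMTP AUTH
        smtp.send_message(msg)
```

The same one-mailbox limitation described above applies: the `From` address must be the mailbox that authenticates.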
+
+## Option 2: SMTP Direct Send
+
+Microsoft 365 offers the ability to configure [direct send](/exchange/mail-flow-best-practices/how-to-set-up-a-multifunction-device-or-application-to-send-email-using-microsoft-365-or-office-365#option-2-send-mail-directly-from-your-printer-or-application-to-microsoft-365-or-office-365-direct-send) from the SAP application server. This option is limited: it only permits mail to be routed to valid email addresses in your own Microsoft 365 organization, so it can't be used for external recipients (for example, vendors or customers).
+
+## Option 3: Using Microsoft 365 SMTP Relay Connector
+
+Only choose this option when:
+
+- Your Microsoft 365 environment has SMTP AUTH disabled.
+- SMTP client submission (Option 1) isn't compatible with your business needs or with your SAP Application.
+- You can't use direct send (Option 2) because you must send email to external recipients.
+
+SMTP relay lets Microsoft 365 relay emails on your behalf by using a connector that's configured with your public IP address or a TLS certificate. Compared to the other options, the connector setup increases complexity.
+
+### Requirements for SMTP Relay
+
+- **SAP parameter**: The SAP instance parameter is configured and the SMTP service is activated as explained in Option 1 (steps 2 to 4 of the "Configure SMTP Auth with SCOT" section).
+- **Email Address**: Any email address in one of your Microsoft 365 verified domains. This email address doesn't need a mailbox. For example, `noreply@*yourdomain*.com`.
+- **Transport Layer Security (TLS)**: SAP application must be able to use TLS version 1.2 and above.
+- **Port**: Port 25 is required and must be unblocked on your network. Some network firewalls or ISPs block port 25 because of the risk of misuse for spamming.
+- **MX record**: Your Mail Exchanger (MX) endpoint, for example, `yourdomain.mail.protection.outlook.com`. You can find more information in the next section.
+- **Relay access**: A public IP address or TLS certificate is required to authenticate against the relay connector. To avoid configuring direct access, it's recommended to use Source Network Address Translation (SNAT) as described in [Use Source Network Address Translation (SNAT) for outbound connections](/azure/load-balancer/load-balancer-outbound-connections).
+
+### Step-by-step configuration instructions for SMTP relay in Microsoft 365
+
+1. Obtain the public (static) IP address of the endpoint that will be sending the mail, using one of the methods listed in the [article](/azure/load-balancer/load-balancer-outbound-connections) above. A dynamic IP address isn't supported or allowed. You can share your static IP address with other devices and users, but don't share the IP address with anyone outside of your company. Make a note of this IP address for later.
+
+ :::image type="content" source="media/exchange-online-integration/azure-portal-pip-sec-3-1.png" alt-text="Where to retrieve the public ip on the Azure Portal":::
+
+> [!NOTE]
> You can find the above information in the Azure portal on the virtual machine overview page of the SAP application server.
+
+2. Sign in to the [Microsoft 365 Admin Center](https://admin.microsoft.com/).
+
+ :::image type="content" source="media/exchange-online-integration/m365-admin-center-sec-3-2.png" alt-text="Microsoft 365 AC sign in":::
+
+3. Go to **Settings** -> **Domains**, select your domain (for example, contoso.com), and find the Mail Exchanger (MX) record.
+
+ :::image type="content" source="media/exchange-online-integration/m365-admin-center-domains-sec-3-3.png" alt-text="Where to retrieve the domain mx record":::
+
+ The Mail Exchanger (MX) record will have data for **Points to address or value** that looks similar to `yourdomain.mail.protection.outlook.com`.
+
+4. Make a note of the data of **Points to address or value** for the Mail Exchanger (MX) record, which we refer to as your MX endpoint.
+
+5. In Microsoft 365, select **Admin** and then **Exchange** to go to the new Exchange Admin Center.
+
+ :::image type="content" source="media/exchange-online-integration/m365-admin-center-exchange-sec-3-5.png" alt-text="Microsoft 365 Admin Center":::
+
+6. New Exchange Admin Center (EAC) portal will open.
+
+ :::image type="content" source="media/exchange-online-integration/exchange-admin-center-sec-3-6.png" alt-text="Microsoft 365 Admin Center mailbox":::
+
+7. In the Exchange Admin Center (EAC), go to **Mail flow** -> **Connectors**. The **Connectors** screen is depicted below. If you're working with the classic EAC, follow step 8 as described in the [docs](/exchange/mail-flow-best-practices/how-to-set-up-a-multifunction-device-or-application-to-send-email-using-microsoft-365-or-office-365#step-by-step-configuration-instructions-for-smtp-relay).
+
+ :::image type="content" source="media/exchange-online-integration/exchange-admin-center-add-connector-sec-3-7.png" alt-text="Microsoft 365 Admin Center connector":::
+
+8. Click **Add a connector**
+
+ :::image type="content" source="media/exchange-online-integration/exchange-relay-connector-add-sec-3-8.png" alt-text="Microsoft 365 Admin Center connector add":::
+
+ Choose "Your organization's email server".
+
+ :::image type="content" source="media/exchange-online-integration/new-connector-sec-3-8.png" alt-text="Microsoft 365 Admin Center mail server":::
+
+9. Click **Next**. The **Connector name** screen appears.
+
+ :::image type="content" source="media/exchange-online-integration/connector-name-section-3-9.png" alt-text="Microsoft 365 Admin Center connector name":::
+
+10. Provide a name for the connector and click **Next**. The **Authenticating sent email** screen appears.
+
+ Choose *By verifying that the IP address of the sending server matches one of these IP addresses which belong exclusively to your organization* and add the IP address from Step 1 of the **Step-by-step configuration instructions for SMTP relay in Microsoft 365** section.
+
+ :::image type="content" source="media/exchange-online-integration/connector-authenticate-ip-add-section-3-10-1.png" alt-text="Microsoft 365 Admin Center verify IP":::
+
+ Review and click on **Create connector**.
+
+ :::image type="content" source="media/exchange-online-integration/review-connector-section-3-10-2.png" alt-text="Microsoft 365 Admin Center review":::
+
+ :::image type="content" source="media/exchange-online-integration/connector-created-sec-3-10-3.png" alt-text="Microsoft 365 Admin Center review security settings":::
+
+11. Now that you're done with configuring your Microsoft 365 settings, go to your domain registrar's website to update your DNS records. Edit your **Sender Policy Framework** (SPF) record. Include the IP address that you noted in step 1. The finished string should look similar to this: `v=spf1 ip4:10.5.3.2 include:spf.protection.outlook.com ~all`, where 10.5.3.2 is your public IP address. Skipping this step may cause emails to be flagged as spam and end up in the recipient's Junk Email folder.
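To sanity-check the finished SPF string, the ip4 mechanism can be validated with a short script. This is a simplified sketch (the function name is illustrative): full SPF evaluation also resolves include: mechanisms, which is omitted here:

```python
import ipaddress

def spf_covers_ip(spf_record: str, public_ip: str) -> bool:
    """Check whether an SPF TXT record lists the given IPv4 address
    via an ip4: mechanism (simplified; include: terms are not resolved)."""
    ip = ipaddress.ip_address(public_ip)
    for term in spf_record.split():
        if term.startswith("ip4:"):
            # ip4: may carry a single host or a CIDR block
            if ip in ipaddress.ip_network(term[4:], strict=False):
                return True
    return False
```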
+
+### Steps in SAP Application server
+
+1. Make sure the SAP ICM parameter is set and the SMTP service is activated as explained in Option 1 (steps 2-4).
+2. Go to the SCOT transaction and open the SMTP node as shown in the previous steps of Option 1.
+3. Add the mail host using the Mail Exchanger (MX) record value noted in step 4:
+
+   Mail host: `yourdomain.mail.protection.outlook.com`
+
+   Port: 25
+
+4. Click **Settings** next to the Security field and make sure TLS is enabled if possible. Also make sure no prior SMTP AUTH logon data is present; if it is, delete the existing records with the corresponding button underneath.
+
+ :::image type="content" source="media/exchange-online-integration/scot-smtp-connection-relay-tls-sec-3-4.png" alt-text="SMTP security config in SCOT":::
+
+5. Test the configuration by sending a test email from your SAP application with transaction SBWP and check the status in the SOST transaction.
+
+## Option 4: Using SMTP relay server as intermediary to Exchange Online
+
+An intermediate relay server can be an alternative to a direct connection from the SAP application server to Microsoft 365. This server can be based on any mail server that supports direct authentication and relay services.
+
+The advantage of this solution is that it can be deployed in the hub of a hub-spoke virtual network within your Azure environment or within a DMZ to protect your SAP application hosts from direct access. It also allows for centralized outbound routing to immediately offload all mail traffic to a central relay when sending from multiple application servers.
+
+The configuration steps are the same as for the Microsoft 365 SMTP Relay Connector (Option 3). The only difference is that the SCOT configuration should reference the mail host that performs the relay rather than connecting directly to Microsoft 365. Depending on the mail system used for the relay, the relay itself is configured to connect to Microsoft 365 using one of the supported methods and a valid user with password. It's recommended to send a test mail from the relay directly, to ensure it can communicate successfully with Microsoft 365, before completing the SAP SCOT configuration and testing as normal.
++
+The example architecture shown illustrates multiple SAP application servers with a single mail relay host in the hub. Depending on the volume of mail to be sent, it's recommended to follow a detailed sizing guide from the mail vendor used as the relay. This may require multiple mail relay hosts operating behind an Azure Load Balancer.
+
+## Next Steps
+
+[Understand mass-mailing with Azure Twilio - SendGrid](https://docs.sendgrid.com/for-developers/partners/microsoft-azure-2021)
+
+[Understand Exchange Online Service limitations (e.g., attachment size, message limits, throttling etc.)](/office365/servicedescriptions/exchange-online-service-description/exchange-online-limits)
virtual-network Nat Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-overview.md
A NAT gateway can be created in a specific availability zone. Redundancy is buil
Virtual Network NAT is scaled out from creation. There isn't a ramp up or scale-out operation required. Azure manages the operation of Virtual Network NAT for you. A NAT gateway always has multiple fault domains and can sustain multiple failures without service outage.
-* Outbound connectivity can be defined for each subnet with a NAT gateway. Multiple subnets within the same virtual network can have different NAT gateways associated. Multiple subnets within the same virtual network can use the same NAT gateway. A subnet is configured by specifying which NAT gateway resource to use. All outbound traffic for the subnet is processed by the NAT gateway without any customer configuration. A NAT gateway takes precedence over other outbound scenarios and replaces the default Internet destination of a subnet.
+* Outbound connectivity can be defined for each subnet with a NAT gateway. Multiple subnets within the same virtual network can have different NAT gateways associated. Multiple subnets within the same virtual network can use the same NAT gateway. A subnet is configured by specifying which NAT gateway resource to use. All outbound traffic for the subnet is processed by the NAT gateway without any customer configuration. A NAT gateway takes precedence over other outbound scenarios and replaces the default Internet destination of a subnet
-* Presence of custom UDRs for virtual appliances and ExpressRoute override NAT gateway for directing internet bound traffic (route to the 0.0.0.0/0 address prefix). See [Troubleshooting NAT gateway](./troubleshoot-nat.md#virtual-appliance-udrs-and-vpn-expressroute-override-nat-gateway-for-routing-outbound-traffic) to learn more.
+* Presence of custom UDRs for virtual appliances and ExpressRoute override NAT gateway for directing internet bound traffic (route to the 0.0.0.0/0 address prefix). See [Troubleshooting NAT gateway](./troubleshoot-nat.md#virtual-appliance-udrs-and-vpn-expressroute-override-nat-gateway-for-routing-outbound-traffic) to learn more
-* Virtual Network NAT supports TCP and UDP protocols only. ICMP isn't supported.
+* Virtual Network NAT supports TCP and UDP protocols only. ICMP isn't supported
* A NAT gateway resource can use a:
Virtual Network NAT is scaled out from creation. There isn't a ramp up or scale-
* Public IP prefix
-* Virtual Network NAT is compatible with standard SKU public IP addresses or public IP prefix resources or a combination of both. You can use a public IP prefix directly or distribute the public IP addresses of the prefix across multiple NAT gateway resources. The NAT gateway will groom all traffic to the range of IP addresses of the prefix. Basic resources, such as basic load balancer or basic public IPs aren't compatible with Virtual Network NAT. Basic resources must be placed on a subnet not associated to a NAT gateway. Basic load balancer and basic public IP can be upgraded to standard to work with a NAT gateway.
+* Virtual Network NAT is compatible with standard SKU public IP addresses or public IP prefix resources or a combination of both. You can use a public IP prefix directly or distribute the public IP addresses of the prefix across multiple NAT gateway resources. The NAT gateway will groom all traffic to the range of IP addresses of the prefix. Basic resources, such as basic load balancer or basic public IPs aren't compatible with Virtual Network NAT. Basic resources must be placed on a subnet not associated to a NAT gateway. Basic load balancer and basic public IP can be upgraded to standard to work with a NAT gateway
-* To upgrade a basic load balancer too standard, see [Upgrade a public Basic Azure Load Balancer](../../load-balancer/upgrade-basic-standard.md)
+* To upgrade a basic load balancer to standard, see [Upgrade a public basic Azure Load Balancer](../../load-balancer/upgrade-basic-standard.md)
* To upgrade a basic public IP to standard, see [Upgrade a public IP address](../ip-services/public-ip-upgrade-portal.md)
-* Virtual Network NAT is the recommended method for outbound connectivity. A NAT gateway doesn't have the same limitations of SNAT port exhaustion as does [default outbound access](../ip-services/default-outbound-access.md) and [outbound rules of a load balancer](../../load-balancer/outbound-rules.md).
+* Virtual Network NAT is the recommended method for outbound connectivity. A NAT gateway doesn't have the same limitations of SNAT port exhaustion as does [default outbound access](../ip-services/default-outbound-access.md) and [outbound rules of a load balancer](../../load-balancer/outbound-rules.md)
* To migrate outbound access to a NAT gateway from default outbound access or load balancer outbound rules, see [Migrate outbound access to Azure Virtual Network NAT](./tutorial-migrate-outbound-nat.md)
-* A NAT gateway can't be associated to an IPv6 public IP address or IPv6 public IP prefix. It can be associated to a dual stack subnet.
+* A NAT gateway can't be associated to an IPv6 public IP address or IPv6 public IP prefix. It can be associated to a dual stack subnet
-* A NAT gateway allows flows to be created from the virtual network to the services outside your virtual network. Return traffic from the Internet is only allowed in response to an active flow. Services outside your virtual network can't initiate an inbound connection through NAT gateway.
+* A NAT gateway allows flows to be created from the virtual network to the services outside your virtual network. Return traffic from the Internet is only allowed in response to an active flow. Services outside your virtual network can't initiate an inbound connection through NAT gateway
-* A NAT gateway can't span multiple virtual networks.
+* A NAT gateway can't span multiple virtual networks
-* Multiple NAT gateways can't be attached to a single subnet.
+* Multiple NAT gateways can't be attached to a single subnet
* A NAT gateway can't be deployed in a [gateway subnet](../../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md#gwsub)
-* The private side of a NAT gateway, virtual machine instances or other compute resources, sends TCP reset packets for attempts to communicate on a TCP connection that doesn't exist. An example is connections that have reached idle timeout. The next packet received will return a TCP reset to the private IP address to signal and force connection closure. The public side of a NAT gateway doesn't generate TCP reset packets or any other traffic. Only traffic produced by the customer's virtual network is emitted.
+* Virtual machine instances or other compute resources send TCP reset packets for attempts to communicate on a TCP connection that doesn't exist. An example is connections that have reached idle timeout. The next packet received will return a TCP reset to the private IP address to signal and force connection closure. The public side of a NAT gateway doesn't generate TCP reset packets or any other traffic. Only traffic produced by the customer's virtual network is emitted
-* A default TCP idle timeout of 4 minutes is used and can be increased to up to 120 minutes. Any activity on a flow can also reset the idle timer, including TCP keepalives.
+* A default TCP idle timeout of 4 minutes is used and can be increased to up to 120 minutes. Any activity on a flow can also reset the idle timer, including TCP keepalives
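Because any activity on a flow resets the idle timer, long-lived connections through a NAT gateway are commonly kept alive with TCP keepalives sent more often than the idle timeout. A minimal sketch in Python (the helper name is illustrative; the per-probe tunables are guarded because their availability varies by platform):

```python
import socket

def enable_keepalive(sock: socket.socket, idle: int = 60,
                     interval: int = 30, count: int = 4) -> None:
    """Enable TCP keepalives so a long-lived flow isn't dropped by the
    NAT gateway's idle timeout (4 minutes by default)."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # Linux-specific tunables; guard for portability.
    if hasattr(socket, "TCP_KEEPIDLE"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    if hasattr(socket, "TCP_KEEPINTVL"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    if hasattr(socket, "TCP_KEEPCNT"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)
```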
## Pricing and SLA
For information on the SLA, see [SLA for Virtual Network NAT](https://azure.micr
## Next steps
-* Learn [how to get better outbound connectivity using an Azure NAT gateway](https://www.youtube.com/watch?v=2Ng_uM0ZaB4)
-* Learn about [NAT gateway resource](./nat-gateway-resource.md)
-* Learn more about [NAT gateway metrics](./nat-metrics.md)
+* To create and validate a NAT gateway, see [Tutorial: Create a NAT gateway using the Azure portal](tutorial-create-nat-gateway-portal.md)
+
+* To view a video on more information about Azure Virtual Network NAT, see [How to get better outbound connectivity using an Azure NAT gateway](https://www.youtube.com/watch?v=2Ng_uM0ZaB4)
+
+* Learn about the [NAT gateway resource](./nat-gateway-resource.md)
+
virtual-network Virtual Networks Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-faq.md
We recommend that you use the address ranges enumerated in [RFC 1918](https://to
* 172.16.0.0 - 172.31.255.255 (172.16/12 prefix)
* 192.168.0.0 - 192.168.255.255 (192.168/16 prefix)
+You can also deploy the Shared Address space reserved in [RFC 6598](https://datatracker.ietf.org/doc/html/rfc6598), which is treated as Private IP Address space in Azure:
+* 100.64.0.0 - 100.127.255.255 (100.64/10 prefix)
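A quick way to verify that a planned address space falls inside the private ranges listed above is with Python's standard ipaddress module (the helper name is illustrative):

```python
import ipaddress

# RFC 1918 ranges plus the RFC 6598 shared address space,
# all treated as private address space in Azure.
AZURE_PRIVATE_RANGES = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
    ipaddress.ip_network("100.64.0.0/10"),
]

def is_recommended_vnet_range(cidr: str) -> bool:
    """True if the CIDR falls wholly inside a recommended private range."""
    net = ipaddress.ip_network(cidr, strict=False)
    return any(net.subnet_of(r) for r in AZURE_PRIVATE_RANGES)
```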
+ Other address spaces may work but may have undesirable side effects. In addition, you cannot add the following address ranges:
vpn-gateway Packet Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/packet-capture.md
The following examples of JSON and a JSON schema provide explanations of each pr
- A maximum of five packet captures can be run in parallel per gateway. These packet captures can be a combination of gateway-wide packet captures and per-connection packet captures.
- The unit for MaxPacketBufferSize is bytes and for MaxFileSize is megabytes.
+> [!NOTE]
+> Set the **CaptureSingleDirectionTrafficOnly** option to **false** if you want to capture both inner and outer packets.
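The options called out above can be assembled into a capture payload programmatically. This sketch uses only the property names mentioned in this article; the full JSON schema defines more:

```python
import json

# Sketch of packet-capture options using the properties discussed above.
capture_options = {
    "MaxPacketBufferSize": 120,                  # bytes
    "MaxFileSize": 100,                          # megabytes
    "CaptureSingleDirectionTrafficOnly": False,  # False = inner and outer packets
}

payload = json.dumps(capture_options, indent=2)
```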
+ ### Example JSON
The following examples of JSON and a JSON schema provide explanations of each pr
You can set up packet capture in the Azure portal by navigating to the **VPN Gateway Packet Capture** blade and selecting the **Start Packet Capture** button.
+> [!NOTE]
+> Do not select the **Capture Single Direction Traffic Only** option if you want to capture both inner and outer packets.
:::image type="content" source="./media/packet-capture/portal.jpg" alt-text="Screenshot of start packet capture in the portal." lightbox="./media/packet-capture/portal.jpg":::

## Stop packet capture - portal
vpn-gateway Tutorial Create Gateway Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/tutorial-create-gateway-portal.md
Previously updated : 07/21/2021 Last updated : 03/18/2022 #Customer intent: I want to create a VPN gateway for my virtual network so that I can connect to my VNet and communicate with resources remotely.
To see additional information about the public IP address object, click the name
## <a name="resize"></a>Resize a gateway SKU
-There are specific rules regarding resizing vs. changing a gateway SKU. In this section, we will resize the SKU. For more information, see [Gateway settings - resizing and changing SKUs](vpn-gateway-about-vpn-gateway-settings.md#resizechange).
+There are specific rules regarding resizing vs. changing a gateway SKU. In this section, we'll resize the SKU. For more information, see [Gateway settings - resizing and changing SKUs](vpn-gateway-about-vpn-gateway-settings.md#resizechange).
[!INCLUDE [resize a gateway](../../includes/vpn-gateway-resize-gw-portal-include.md)]
vpn-gateway Tutorial Site To Site Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/tutorial-site-to-site-portal.md
Previously updated : 02/02/2022 Last updated : 03/18/2022
web-application-firewall Application Gateway Waf Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-waf-metrics.md
+
+ Title: Monitoring metrics for Azure Application Gateway Web Application Firewall
+description: This article describes the Azure Application Gateway WAF monitoring metrics.
+++++ Last updated : 03/15/2022+++
+# Azure Web Application Firewall Monitoring and Logging
+
+Azure Web Application Firewall (WAF) monitoring and logging are provided through logging and integration with Azure Monitor and Azure Monitor logs.
+
+## Azure Monitor
+
+Logging for WAF with Application Gateway is integrated with [Azure Monitor](../../azure-monitor/overview.md). Azure Monitor allows you to track diagnostic information including WAF alerts and logs. You can configure WAF monitoring within the Application Gateway resource in the portal under the **Diagnostics** tab or through the Azure Monitor service directly.
+
+## Logs and diagnostics
+
+WAF with Application Gateway provides detailed reporting on each threat it detects. Logging is integrated with Azure Diagnostics logs, and alerts are recorded in JSON format. These logs can be integrated with [Azure Monitor logs](../../azure-monitor/insights/azure-networking-analytics.md).
+
+![WAFDiag](../media/waf-appgateway-metrics/waf-appgateway-diagnostic.png)
+
+For more information on diagnostic logs, see [Application Gateway WAF resource logs](../ag/web-application-firewall-logs.md).
++
+## Application Gateway WAF V2 Metrics
+
+New WAF metrics are only available for Core Rule Set 3.2 or greater, or with bot protection and geo-filtering. The metrics can be further filtered on the supported dimensions.
+
+|**Metrics**|**Description**|**Dimension**|
+| :-- | :-- | :-- |
+|**WAF Total Requests**|Count of successful requests that the WAF engine has served.| Action, Country/Region, Method, Mode|
+|**WAF Managed Rule Matches**|Count of total requests that a managed rule has matched.| Action, Country/Region, Mode, Rule Group, Rule Id |
+|**WAF Custom Rule Matches**|Count of total requests that match a specific custom rule. | Action, Country/Region, Mode, Rule Group, Rule Name|
+|**WAF Bot Protection Matches**|Count of total requests that have been blocked or logged from malicious IP addresses. The IP addresses are sourced from the Microsoft Threat Intelligence feed.| Action, Country/Region, Bot Type, Mode|
+
+For metrics supported by Application Gateway V2 SKU, see [Application Gateway v2 metrics](../../application-gateway/application-gateway-metrics.md#metrics-supported-by-application-gateway-v2-sku)
+
+## Application Gateway WAF V1 Metrics
+
+|**Metrics**|**Description**|**Dimension**|
+| :-- | :-- | :-- |
+|**Web Application Firewall Blocked Requests Count**|Count of total requests that have been blocked by the WAF engine||
+|**Web Application Firewall Blocked Requests Distribution**|Total number of rules hit distribution for the blocked requests by Rule Group and Rule ID|Rule Group, Rule ID|
+|**Web Application Firewall Total Rule Distribution**|Count of total matched requests distribution by Rule Group and Rule ID |Rule Group, Rule ID|
+
+For metrics supported by Application Gateway V1 SKU, see [Application Gateway v1 metrics](../../application-gateway/application-gateway-metrics.md#metrics-supported-by-application-gateway-v1-sku)
++
+## Access WAF Metrics in Azure portal
+
+1. From the Azure portal menu, select **All Resources** >> **\<your-Application-Gateway-profile>**.
+
+2. Under **Monitoring**, select **Metrics**:
+
+3. In **Metrics**, select the metric to add:
+
+
+4. Select **Add filter** to add a filter.
++
+5. Select **New chart** to add a new chart.
+
+## Configure Alerts in Azure portal
+
+1. Set up alerts on Azure Application Gateway by selecting **Monitoring** >> **Alerts**.
+
+1. Select **New alert rule** for the metrics listed in the Metrics section.
+
+Alerts are charged based on Azure Monitor pricing. For more information about alerts, see [Azure Monitor alerts](../../azure-monitor/alerts/alerts-overview.md).
+
+## Next steps
+
+- Learn about [Web Application Firewall](../overview.md).
+- Learn about [Web Application Firewall Logs](../ag/web-application-firewall-logs.md).